ChatGPT lies then gaslights reporter with fake transcript

For those claiming that "AI slop" is an unfair description, just watch how ChatGPT faked a transcript of a podcast that had not yet been uploaded, then doubled down and gaslit the reporter, insisting he had uploaded it and that this was an accurate transcript. It was, of course, completely made up by the AI.

Why someone would rely on ChatGPT for something like this says more about the people who made that decision than it says about the AI.
 


Why someone would rely on ChatGPT for something like this says more about the people who made that decision than it says about the AI.
Attacking journalists for telling you things you don't want to hear says, in your own words, more about the people who are doing that than it says about the information being presented.

I know we're in a post-truth world where journalists are constantly attacked for presenting unpopular information, but that does not make the information untrue.

The number of people in this thread who have ignored the message being presented and instead attacked the journalist is dismaying.
 

The issue isn't the agency and intent of the computer program.
The issue is the agency and intent of the company that creates the program, who is therefore ethically, and possibly legally, liable for the results.
I'd say the issue is more that there are still people who think they can trust ChatGPT with something like this. That isn't what it was made for. There are specialized custom LLMs with custom datasets and purpose-built boundaries and filters that are better when reliability and accuracy are of paramount importance. ChatGPT is primarily a research tool and a cool toy.
 

Attacking journalists for telling you things you don't want to hear says, in your own words, more about the people who are doing that than it says about the information being presented.

I know we're in a post-truth world where journalists are constantly attacked for presenting unpopular information, but that does not make the information untrue.
Who attacked?
 


You did. You attacked the credibility of the journalist (ad hominem). But you know that.
I don't feel like I attacked them. I do think it's fair to question their decision to rely on ChatGPT for something like this. Where I work, everyone uses both ChatGPT and Copilot throughout the day. Copilot is a paid add-on for Microsoft 365 customers, so its use is encouraged among the thousands of employees.

But no one is advised or encouraged to create their Teams meeting transcripts using Copilot, despite its inclusion in the Microsoft suite. Teams has a built-in transcription feature, backed by different AI tools behind the scenes, that works almost flawlessly. There's no reason to use a general-purpose public AI model when the dedicated one designed for transcription works much better and doesn't make things up.
 

I don't feel like I attacked them. I do think it's fair to question their decision to rely on ChatGPT for something like this.
To clean up a podcast transcription? Or to make an informational video about the unreliability of AI? Which decision is questionable?
But no one is advised or encouraged to create their Teams meeting transcripts
That's not what he did, though, is it?

But even if he had, so what? It's a warning to people to do precisely not that. You know, like those ads that used to show kids being electrocuted when their kites hit power lines? Is your response "Well, the kid was stupid, so I'm going to ignore the warning," and then promptly rush out and fly a kite into a power line, because that will show him? Or did you, maybe, take that info on board and refrain from flying kites into power lines?
 
