ChatGPT lies then gaslights reporter with fake transcript

Morrus
Staff member

Well, that was fun
For those claiming that AI slop is an unfair description, just watch how ChatGPT faked a transcript of a podcast which had not yet been uploaded, then doubled down and gaslit the reporter, insisting he had uploaded it and that this was an accurate transcript. It was, of course, completely made up by the AI.

 


Very irresponsible for Sky to call it a lie and imply the program has intentions.

The program isn't alive. It's a text predictor that generates bullsh!t*, has no intention, and has zero understanding of what it's doing. It's why LLMs should primarily be called LLMs and not 'AI'; that muddies the water, considering that these companies REALLY want you to think it's even approaching artificial general intelligence when... it's a program that's crap at even getting the data it trained on straight.

*Bullsh!t is the more scientific term; it doesn't assume deliberateness the way lying does.

However, it does show why this technology is worthless; bullsh!t text predictors are not useful, make everything worse, and just decrease the amount of knowledge in the world.
 


Very irresponsible for Sky to call it a lie and imply the program has intentions.

The program isn't alive. It's a text predictor that generates bullsh!t*, has no intention, and has zero understanding of what it's doing.

You are correct.

However, so long as the company who makes it presents it as "speaking" and "apologizing", then "lying" is the word with the closest emotional truth, and is probably the best word choice for impressing the risk on the audience.
 


"Lie" and "gaslight" seem like terms that give ChatGPT far more agency and intention than it is capable of. ChatGPT does not know anything, nor can it decide anything.
Then its feelings can't be hurt. The verbiage is obviously chosen to get attention, and isn't the point.

If it can 'run' a TTRPG or 'write' a poem or 'answer' a question, or 'apologise', it can 'lie'. Technically it can do none of these things (or all of them). The semantics are distracting from the actual point--that AI, as it currently stands, produces slop. Whether it has agency or intention is irrelevant--the results are what matter, not any intention or lack thereof.
 

"Lie" and "gaslight" seem like terms that give ChatGPT far more agency and intention than it is capable of. ChatGPT does not know anything, nor can it decide anything.

The issue isn't the agency and intent of the computer program.
The issue is the agency and intent of the company that creates the program, who is therefore ethically, and possibly legally, liable for the results.
 


Then its feelings can't be hurt. The verbiage is obviously chosen to get attention, and isn't the point.

If it can 'run' a TTRPG or 'write' a poem or 'answer' a question, or 'apologise', it can 'lie'. Technically it can do none of these things (or all of them). The semantics are distracting from the actual point--that AI, as it currently stands, produces slop. Whether it has agency or intention is irrelevant--the results are what matter, not any intention or lack thereof.

The issue isn't the agency and intent of the computer program.
The issue is the agency and intent of the company that creates the program, who is therefore ethically, and possibly legally, liable for the results.

Sure. But there is ethical use, too, and I don't have much sympathy for a "journalist" who tried to circumvent doing their actual job by using ChatGPT instead of real research. ChatGPT isn't inherently dangerous -- it is dangerous because the world is full of lazy, dumb chuds.
 

Sure. But there is ethical use, too, and I don't have much sympathy for a "journalist" who tried to circumvent doing their actual job by using ChatGPT instead of real research. ChatGPT isn't inherently dangerous -- it is dangerous because the world is full of lazy, dumb chuds.
I'll leave you to attack the journalist. Knock yourself out.

I'm here to talk about the AI slop.
 
