ChatGPT lies then gaslights reporter with fake transcript

To clean up a podcast transcription? Or to make an informational video about the unreliability of AI? Which decision is questionable?

That's not what he did, though, is it?
I want to reduce the temperature of this, but yes, again, I can't imagine trusting ChatGPT to perform a podcast transcription, especially not without double-checking it and asking it to try again, repeatedly, until it gets it right. I do that throughout the day. When I ask it for help producing computer scripts, giving me CLI commands to perform tasks, or suggesting specific terms that should appear in logs I'm analyzing, it routinely provides incorrect info on its initial tries. I almost always have to ask it to, effectively, try harder, which it always does.
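For what it's worth, that re-prompting routine doesn't have to be manual. Here's a minimal sketch of the loop using the OpenAI Python client; the `check_output` validator and the model name are illustrative assumptions on my part, a stand-in for whatever verification you'd otherwise do by hand (running the script, grepping the logs, and so on):

```python
# Minimal sketch of the "try again until it's right" loop, using the
# OpenAI Python client. check_output() is a hypothetical callable
# standing in for whatever verification you'd otherwise do by hand.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_with_retries(prompt, check_output, max_tries=3):
    messages = [{"role": "user", "content": prompt}]
    answer = ""
    for _ in range(max_tries):
        resp = client.chat.completions.create(model="gpt-4o", messages=messages)
        answer = resp.choices[0].message.content
        if check_output(answer):  # your own verification, not the model's
            return answer
        # Feed the answer back and ask it, effectively, to try harder.
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user",
                         "content": "That isn't right. Check your work and try again."})
    return answer  # still unverified after max_tries; treat with suspicion
```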

It may not always get it correct in the end, but sometimes these kinds of anti-AI arguments are biased. Many people have decided that AI is bad, and there's deep confirmation bias out there about it.

Is it perfect? No. Is it better for some things than others? Yes. Can most of its mistakes be corrected by simply rephrasing a prompt? Also yes.

Will it eventually lead to our destruction? Unknown.... Possibly also yes. But not before everyone with a white-collar job has to use some version of it just to remain employed.
 



No, no, no. Read it again. Or watch it again. Or both.
Got it. I honestly didn't even watch the full video before I replied.

Did I lie? No, I didn't say I'd watched it. I mistakenly assumed I knew what the video was about and created a narrative from there. I was wrong, but I'm not an AI. Just a regular human. Being wrong is what ChatGPT does a billion times a day. It anticipates what a human would say in similar situations based on other things humans have said a billion times. It needs to be nudged and guided, just like people.

Yeah, what this guy did and the point he was trying to make about how insidious ChatGPT was... sorry, but his findings just aren't surprising to me at all because I use it 10 times per day. Never trust anything it says the first time around when the answer really matters. When absolute truth doesn't matter? So what. But when it does, double-check its work and call it out. Tell it ahead of time, "OK, I need you to be absolutely certain and accurate before you provide a response to my next question, alright?" It responds positively to things like that.
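In API terms, that kind of up-front instruction is just a system message sent before your question. Here's a minimal sketch, again with the OpenAI Python client; the model name, instruction wording, and example question are all illustrative, and whether it actually helps is my experience, not a guarantee:

```python
# Sketch of "telling it ahead of time" via a system message. The
# instruction text mirrors the phrasing above; the question is made up.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Be absolutely certain and accurate before you respond. "
                    "If you are not sure, say so instead of guessing."},
        {"role": "user",
         "content": "What flags does tar need to extract a .tar.gz archive?"},
    ],
)
print(resp.choices[0].message.content)
```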
 

Got it. I honestly didn't even watch the full video before I replied.

Did I lie? No, I didn't say I'd watched it. I mistakenly assumed I knew what the video was about and created a narrative from there. I was wrong, but I'm not an AI. Just a regular human. Being wrong is what ChatGPT does a billion times a day. It anticipates what a human would say in similar situations based on other things humans have said a billion times. It needs to be nudged and guided, just like people.

Yeah, what this guy did and the point he was trying to make about how insidious ChatGPT was... sorry, but his findings just aren't surprising to me at all because I use it 10 times per day. Never trust anything it says the first time around when the answer really matters. When absolute truth doesn't matter? So what. But when it does, double-check its work and call it out. Tell it ahead of time, "OK, I need you to be absolutely certain and accurate before you provide a response to my next question, alright?" It responds positively to things like that.
And did you instantaneously come to your healthy distrust of the reliability of AI content the moment you began using it, or is it something you had to learn through experience?

Do you think it's valueless for journalists to highlight such issues so that people who don't have your wide-ranging experience can become aware of them before falling afoul of them?
 

It's not a philosophical question.

"Can AI lie?" is just shorthand for "Can AI output false information?"

To me, lying requires an intent to deceive. If someone asks you the way to the nearest bank and you mistakenly give them bad directions, I wouldn't say you lied to them. You output false information, but you didn't lie, because there was no intent. To be lying, you'd need to know perfectly well that they must turn right and tell them to turn left anyway.

If you consider all incorrect information a lie, then I agree with you: of course AI lies.


The answer is yes, and banging on about the word 'lie' is just a distraction--bordering on misdirection--from the actual conversation, and the purpose of the video--to alert users to this fact using non-technical language that the average person easily understands.

I think, on the other hand, that using a word implying that AI can think and have intent misdirects readers into believing that LLMs are something more than they are -- more Skynet than tool -- when really they're tools that shouldn't be trusted blindly. I genuinely thought a lie required an intent to deceive, not a mere error, even a silly one like in the video's example. But I won't argue with you over the definition of 'lie' in a foreign language.

It doesn't matter that he used the word 'lie'. Or that it used the word 'apologise' (it can't do that either, but nobody's jumping down its throat for using that word). The point is that AI outputs are, at least at present, not to be relied upon.

Of course AI can't apologize. That would require feeling contrite, something tools don't do. Nobody expects their d4 to be sorry when they step on it and it jabs their toe.

And of course LLM output shouldn't be trusted and relied upon, as stated right in the output of many LLMs. That's an important restriction that everyone should know before using them. Much as one must know that books aren't necessarily true, AI output isn't necessarily true. We're not disagreeing here. But I feel the message could be conveyed more efficiently by saying, "It's a tool. It doesn't know true from false or right from wrong; it outputs things for the human operator to sort out. Do not trust it for anything vaguely important without double-checking it," than by using wording that implies guilt, or intent, or apology.
 

And did you instantaneously come to your healthy distrust of the reliability of AI content the moment you began using it, or is it something you had to learn through experience?

Do you think it's valueless for journalists to highlight such issues so that people who don't have your wide-ranging experience can become aware of them before falling afoul of them?
Not instantaneously, but pretty darned quickly! At least 4 years ago! Are you suggesting you've never heard someone say, "Be careful with its answers. Sometimes it's wrong. Sometimes it 'hallucinates'"? You've been warned about that before, right? Are there still many people out there who don't know that ChatGPT can fabricate answers or be wrong?

So here's another story with someone pointing out how untrustworthy ChatGPT is? Why not do a story about how easy it is to modify ChatGPT's behavior to reduce its flaws? I think that would actually be a new spin. How about a story on how to refine prompts to use it better?
 

Got it. I honestly didn't even watch the full video before I replied.

Did I lie? No, I didn't say I'd watched it. I mistakenly assumed I knew what the video was about and created a narrative from there.
Wow. You attacked the reporter's credibility and dismissed his report, despite not watching it... because you didn't like the conclusion you imagined he had made? And doubled down when challenged, still not having watched it?

sorry, but his findings just aren't surprising to me at all because I use it 10 times per day.
So? Who cares whether you are surprised by it? What does that have to do with anything? The fact that you aren't personally surprised by something is not the litmus test for whether an informational piece is valid.
 

Not instantaneously, but pretty darned quickly! At least 4 years ago! Are you suggesting you've never heard someone say, "Be careful with its answers. Sometimes it's wrong. Sometimes it 'hallucinates'"? You've been warned about that before, right? Are there still many people out there who don't know that ChatGPT can fabricate answers or be wrong?
Pretty much only in discussions like this specifically about AI. Not so much in everyday life.
So here's another story with someone pointing out how untrustworthy ChatGPT is? Why not do a story about how easy it is to modify ChatGPT's behavior to reduce its flaws? I think that would actually be a new spin. How about a story on how to refine prompts to use it better?
What you're describing isn't a news article, it's an instructional video. They're not the same thing, nor are they published by the same people.
 


And did you instantaneously come to your healthy distrust of the reliability of AI content the moment you began using it, or is it something you had to learn through experience?

While there is value in repeating it (much as we apparently still need to tell people not to phone and drive at the same time, despite it being rather obvious that phoning will lower your attention), I don't think it requires a lot of insider knowledge at this point. ChatGPT and Grok both state it directly in the answer area of their online interfaces, so anyone with access to the technology is informed of it.
 
