ChatGPT lies then gaslights reporter with fake transcript

The usefulness of large language models is that they are workhorses with near-instantaneous results, not that they are reliable, truthful, or knowledgeable. They are assistants you need to monitor carefully and verify wherever it might actually matter. Never use AI as a source of information about anything consequential. For presenting factual information, it's great (or adequate, which for people under a severe time crunch equals great) at rough-drafting writing on a topic you already know and can edit its output on, and it's particularly useful in that it will probably remember something important on the topic that you would have forgotten to mention. It can be very useful for finding topics you need to research further. But if you are asking it questions like it's some sort of oracle, and believing its answers, you are using it wrong (even if the folks hawking it encourage you to do exactly that).

They are very bad hawkers, then, because they print under every answer that it shouldn't be believed and should be double-checked -- even while adopting a friendly tone to hawk their wares. The conversational presentation is effective at creating trust: waiters asking if your day is going well do this all the time, when in truth they don't give a damn about your ingrown toenail, or your nagging feeling that your yearly bonus will be 5% lower than you hoped.

The final questions from the journalist are very easy to answer: "Can AI lie?" "No, since it's a computer program with no sense of truth." "Does AI value...?" "No, it doesn't value anything; it's a computer program. If it were able to value or feel, you'd have to start asking yourself serious questions every time you close its window, wouldn't you?"

All in all, I feel it's a good thing regular people are starting to understand what AI is, so they'll be able to use it responsibly and productively in the future. It's taking much less time than with cars, where people had to die in droves before they started wearing seat belts, or the Internet, where people spent a decade believing what they read online to be true. Now the reaction time is much shorter. That's good, and good for the AI companies too (at least until they're made obsolete by everyone running their own AI at home): if people use AI correctly, the companies won't have to be liable for the stupid things people might do after misusing their AI, because people won't be misusing it anymore.
 

With respect - you have missed something important.

The programmatic action of ChatGPT isn't inherently dangerous.

The user experience design, however, is -- on about the same level as tobacco ads aimed at children.

It is specifically and intentionally designed to present itself in a conversational mode, as if it were a person, in a tone and style intended to induce humans to trust its output. It takes advantage not so much of stupidity, as of common cognitive and emotional vulnerabilities.

Ever seen the movie "Her", starring Joaquin Phoenix? In it, Phoenix's character falls in love with a verbal AI assistant - one that just happens to be voiced by Scarlett Johansson. The entire plot is plausible because of that voice. If they'd instead used the voice synthesizer Stephen Hawking used, falling in love with it would have been comedically implausible.

ChatGPT uses the same basic concept - to present itself in a manner to lead your hindbrain to respond to it as if it is something it isn't.
You just saved me a whole lot of typing.
 

I can't help but assume the slop and hallucination issues will get worse as AI-generated text becomes a larger share of the publicly available, scrapable text on the Internet and the AI companies continue to try to implement opaque schemes for routing requests to less expensive versions of their models. If we're not there already, it seems inevitable that we'll reach a point where the AI is generating second+ degree slop from training on mostly AI-hallucinated slop.
This is the Dead Internet theory, stripped of the conspiratorial thinking the creator had woven into his original take.
 

Sure. But there is ethical use, too, and I don't have much sympathy for a "journalist" who tried to circumvent doing their actual job by using ChatGPT instead of real research.
I don't know anything about this guy or his beat, but if his beat was AI, then interacting with it on a daily basis seems fine and appropriate.

But getting high on his own supply and being surprised it would hallucinate is a bad look. A journalist on the AI beat should be the first person to suspect hallucinations and slop from AI.
 

I don't know anything about this guy or his beat, but if his beat was AI, then interacting with it on a daily basis seems fine and appropriate.

But getting high on his own supply and being surprised it would hallucinate is a bad look. A journalist on the AI beat should be the first person to suspect hallucinations and slop from AI.

The article about the "vibe coder" who was shocked when all his data was deleted comes to mind.
 

The final questions from the journalist are very easy to answer: "Can AI lie?" "No, since it's a computer program with no sense of truth." "Does AI value...?" "No, it doesn't value anything; it's a computer program."

It's not a philosophical question.

"Can AI lie?" is just shorthand for "Can AI output false information?" The answer is yes, and banging on about the word 'lie' is just a distraction--bordering on misdirection--from the actual conversation, and the purpose of the video--to alert users to this fact using non-technical language that the average person easily understands.

It doesn't matter that he used the word 'lie'. Or that it used the word 'apologise' (it can't do that either, but nobody's jumping down its throat for using that word). The point is that AI outputs are, at least at present, not to be relied upon.
 

But getting high on his own supply and being surprised it would hallucinate is a bad look. A journalist on the AI beat should be the first person to suspect hallucinations and slop from AI.
Of course he understood it. He was making an informative video to warn users about the nature of AI outputs. People on the news aren't surprised by the news. They know in advance what they're going to say.

Man, there's a lot of weird shooting the messenger vibes going on in this thread.
 

This is the Dead Internet theory, stripped of the conspiratorial thinking the creator had woven into his original take.
I think it could even happen without the Internet as a whole being "dead", if high-quality text becomes increasingly paywalled/captcha-gated/rate-limited/etc. while AI-generated text overwhelms public spaces. Doubly so with these AI companies getting sued from every angle by big corporations with an interest in protecting their copyrighted content. Scrapeable, non-AI-generated text may become hard to find regardless of the state of the rest of the Internet.
 

I think it could even happen without the Internet as a whole being "dead", if high-quality text becomes increasingly paywalled/captcha-gated/rate-limited/etc. while AI-generated text overwhelms public spaces. Doubly so with these AI companies getting sued from every angle by big corporations with an interest in protecting their copyrighted content. Scrapeable, non-AI-generated text may become hard to find regardless of the state of the rest of the Internet.
Yeah, I think this is the most likely way the theory would play out. And given what a mess Google results already are, I think there's good reason to believe we're well on our way.
 

I think it could even happen without the Internet as a whole being "dead", if high-quality text becomes increasingly paywalled/captcha-gated/rate-limited/etc. while AI-generated text overwhelms public spaces. Doubly so with these AI companies getting sued from every angle by big corporations with an interest in protecting their copyrighted content. Scrapeable, non-AI-generated text may become hard to find regardless of the state of the rest of the Internet.
I also think, though, that there is an opportunity for human-created content to stand out from the slop. People will seek that out, and seek out the creators/brands that they trust. That 'real content' will have value just for being 'real'. But we (creators) have to make sure we are better than the slop.
 
