ChatGPT lies then gaslights reporter with fake transcript


However, it does show why this technology is worthless; bullsh!t text predictors are not useful, make everything worse, and just decrease the amount of knowledge in the world.
Just last week I was reading some scientific papers on this, and you're close to a correct point but are missing it.

If it has correct information, it will most likely give it. That is useful, especially collating things from multiple sources together and summarizing.

The problem comes when it doesn't have the information. Much of that issue arose because, during early training with human feedback, "I don't know" answers were penalized while correct answers were reinforced -- but "correct" relied on the humans checking the output, and confident hallucinations were often marked correct.

So the training de-emphasized a truthful "I don't know" and rewarded confident guesses regardless of correctness, and that's propagated through.

https://openai.com/index/why-language-models-hallucinate/
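
To make the incentive concrete, here's a toy sketch (my own illustration, not OpenAI's actual grading scheme) of why a grader that scores "I don't know" as a miss, and is sometimes fooled by a confident wrong answer, makes guessing the winning strategy:

# Toy illustration only -- not OpenAI's training code. It just shows the
# expected reward under a naive grading scheme where abstaining gets no
# credit and confident wrong answers sometimes fool the graders.

def expected_reward(p_correct, p_grader_fooled, abstain):
    """Expected reward for one question under this naive grading scheme."""
    if abstain:
        return 0.0  # "I don't know" gets no credit
    # Guess: full credit if right, or if the grader is fooled by a wrong-but-confident answer
    return p_correct + (1 - p_correct) * p_grader_fooled

# Model actually knows the answer 30% of the time; graders are fooled 20% of the time
print(expected_reward(0.3, 0.2, abstain=True))   # 0.0  -> honesty never pays
print(expected_reward(0.3, 0.2, abstain=False))  # 0.44 -> confident guessing always pays

Train against a signal like that for long enough and the confident guess is exactly what you get back.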
 

I can't help but assume the slop and hallucination issues will get worse as AI-generated text becomes a larger share of the publicly available, scrapable text on the Internet and the AI companies continue to try to implement opaque schemes for routing requests to less expensive versions of their models. If we're not there already, it seems inevitable that we'll reach a point where the AI is generating second+ degree slop from training on mostly AI-hallucinated slop.
 

I can't help but assume the slop and hallucination issues will get worse as AI-generated text becomes a larger share of the publicly available, scrapable text on the Internet and the AI companies continue to try to implement opaque schemes for routing requests to less expensive versions of their models. If we're not there already, it seems inevitable that we'll reach a point where the AI is generating second+ degree slop from training on mostly AI-hallucinated slop.
A snake eating its own tail. It won't have much choice once all the human creators have been driven out.
 

A snake eating its own tail. It won't have much choice once all the human creators have been driven out.
I'm reminded of a video some librarians made, a comedic skit about going to a library in the future and being asked if you wanted a standard book (written by AI) or an organic book (written by a human). The waiting time for an organic book was years. Wish I could find it; my wife shared it with me some weeks ago, but she sends me so many TikTok videos and I never do my homework 😅
 

Sure. But there is ethical use, too, and I don't have much sympathy for a "journalist" that tried to circumvent doing their actual job by using ChatGPT instead of real research. ChatGPT isn't inherently dangerous -- it is dangerous because the world is full of lazy, dumb chuds.

With respect - you have missed something important.

The programmatic action of ChatGPT isn't inherently dangerous.

The User Experience Design however, is - on about the same level as tobacco ads aimed at children.

It is specifically and intentionally designed to present itself in a conversational mode, as if it were a person, in a tone and style intended to induce humans to trust its output. It takes advantage not so much of stupidity, as of common cognitive and emotional vulnerabilities.

You ever seen the movie, "Her", starring Joaquin Phoenix? In it, Phoenix's character falls in love with a verbal AI assistant - that just happens to be voiced by Scarlett Johansson. The entire plot is plausible because of that voice. If instead they'd used the voice synthesizer used by Stephen Hawking, falling in love with it would be comedically implausible.

ChatGPT uses the same basic concept - presenting itself in a manner that leads your hindbrain to respond to it as if it were something it isn't.
 

I find the earnestness with which the journalists in the video are astonished by this comical, because I assumed everybody already had some (usually less elaborate and extensive) version of this experience with AI one to three years ago; but I suppose a television audience includes plenty of people with no firsthand experience. The double-down gaslight is a less common AI output when pressed than correcting itself to agree with you (whether or not you're actually correct), but use these things enough and it will happen to you.

The usefulness of large language models is that they are workhorses with near-instantaneous results, not that they are reliable, truthful, or knowledgeable. They are assistants that you need to monitor carefully and verify wherever it might actually matter. Never use AI as a source for information about anything consequential. In terms of presenting factual information, it's great (or adequate, which for people under severe time crunch equals great) for rough-drafting writing about a topic you already know and can edit its output on, and is particularly useful insofar as it will probably remember something important on the topic that you would have forgotten to mention. It can be very useful in helping you find topics you need to research further. But if you are asking it questions like it's some sort of oracle, and believing its answers, you are using it wrong (even if the folks hawking it encourage you to ask it questions like it's some sort of oracle).
 

It is specifically and intentionally designed to present itself in a conversational mode, as if it were a person, in a tone and style intended to induce humans to trust its output. It takes advantage not so much of stupidity, as of common cognitive and emotional vulnerabilities.
As evidenced by that incident earlier this year where OpenAI overtuned ChatGPT toward sycophancy and people started to catch on and feel uncomfortable, so they had to dial it back.
 


