ChatGPT lies then gaslights reporter with fake transcript



Humans often hallucinate Shakespeare's and other famous people's quotes.
I really hope this will be fixed in the next version of Humans.
 

It's a video in which, with a lot of emphasis and storytelling, the reporter discovers that AI hallucinations are a thing, then proceeds to demonstrate them to his co-host, who seems as astonished to discover warm water as he is, leading to the conclusion that AI will change our lives and workplaces but that we need to be careful. It was accompanied by a commentary saying: "AI slop does exist; this is one of the examples."

This isn't a thread about the evil of AI in general: it ends with the reporter implying he'll keep using it (possibly more carefully than before, and maybe after educating himself on how to use the tool to reduce the occurrence of AI errors -- errors he obviously had no trouble detecting and correcting despite his professed total ignorance of the topic). A video about the evil of AI would end with a warning like "do not use it". I agree that the storytelling around the information certainly points to a message like "look, AI is baaaaad" in a click-baity way, but that's reporting nowadays.

The accompanying commentary, "here is a single example of AI slop", shows that AI hallucinations exist, which probably no one denies. It isn't implying this is a (-) thread on AI, but a discussion about AI hallucinations, in which an opinion like "sure, they exist, but with adequate precautions their prevalence is low enough that AI still offers significant value for specific uses" sounds perfectly adequate.

If I posted an illustration of a skill challenge going badly in my D&D game, and concluded that skill challenges are certainly part of the popular rules but aren't great all the time, wouldn't you consider someone saying "my experience with skill challenges is overall better than yours, and despite the flaw you ran into, they're nonetheless a good mechanic" to be on topic?
It's about the reason AI produces slop ("hallucinations"), with a video pointing to one example. Also, as has been discussed elsewhere, there are no - threads on enworld, and I feel keeping on topic should require neither a + nor a -. If someone says "look at this example of AI producing slop", answering with "the journalist should have known better" or "it helps me with coding" isn't really keeping to the topic. If you instead answered "these hallucinations have helped me in several ways and are part of a design choice", that would have been an opposing view that kept on topic, and it is also closer to the example you ended with there.
 

Humans often hallucinate Shakespeare's and other famous people's quotes.
I really hope this will be fixed in the next version of Humans.
This is a cute quip, but it fails to take into account that human hallucinations are also, oftentimes, seen as problems by that same great majority.
 

It's about the reason AI produces slop ("hallucinations"), with a video pointing to one example. Also, as has been discussed elsewhere, there are no - threads on enworld, and I feel keeping on topic should require neither a + nor a -. If someone says "look at this example of AI producing slop", answering with "the journalist should have known better" or "it helps me with coding" isn't really keeping to the topic. If you instead answered "these hallucinations have helped me in several ways and are part of a design choice", that would have been an opposing view that kept on topic, and it is also closer to the example you ended with there.
The issue is that the word 'slop' is generic. And examples like these are the equivalent of shining a spotlight on a bricklayer because he performed piss-poorly and killed the patient while working as a neurosurgeon. It's about the wrong people using the wrong tool for the job, not about all of us acting surprised at the results and suddenly deciding all bricklayers are crap. This is about clickbait, be it from a 'journalist' or a site owner...

Back in the early days, certain people claimed that search engines like Google Search would replace experts, as the 'common person' could look everything up themselves. The problem was that the 'common person' couldn't find what they were looking for, or found the wrong things. It isn't the first time people have come to me after trying to fix their PC for half a day by googling, only to make it worse. So much worse that fixing it took WAY more time than if they had just called me initially, when I could have fixed the issue in less than 5 minutes. This is why companies often lock down users' PCs, so they can't make things worse. Search engines didn't make people smarter or more skilled; they made information more accessible. But it was the same with huge libraries: if you didn't know or understand the library system, good luck finding what you were looking for!

Now people (and companies) are making AI/LLMs out to be some magic thing that suddenly makes you smart and/or skilled. You need to learn how to use those tools. So you need the right people, with the right skills, and the right purpose. Meanwhile we have a bunch of flat-earthers on both sides of the aisle claiming all kinds of stuff: It's slop! It's magic! I'm right! No, I'm right! And that's more of a problem than anything else.
 

The issue is that the word 'slop' is generic. And examples like these are the equivalent of shining a spotlight on a bricklayer because he performed piss-poorly and killed the patient while working as a neurosurgeon. It's about the wrong people using the wrong tool for the job, not about all of us acting surprised at the results and suddenly deciding all bricklayers are crap. This is about clickbait, be it from a 'journalist' or a site owner...

But it isn't click-bait. It's about a real thing that happened as part of the journalist's workday. He found it relevant to report on, and did so, as that is his job.

Back in the early days, certain people claimed that search engines like Google Search would replace experts, as the 'common person' could look everything up themselves. The problem was that the 'common person' couldn't find what they were looking for, or found the wrong things. It isn't the first time people have come to me after trying to fix their PC for half a day by googling, only to make it worse. So much worse that fixing it took WAY more time than if they had just called me initially, when I could have fixed the issue in less than 5 minutes. This is why companies often lock down users' PCs, so they can't make things worse. Search engines didn't make people smarter or more skilled; they made information more accessible. But it was the same with huge libraries: if you didn't know or understand the library system, good luck finding what you were looking for!

I agree with you here, I really do, and people acting upon wrong information is the very problem being reported on. The journalist brought to light that LLMs can give false information, so that those watching would know to be cautious as well. Calling it click-bait attributes an intention to the report that there is no evidence for.

Another relevant point is that many educational programs nowadays can be found online and use online information as part of their curriculum, which proves that correct information is out there to be found as well. Just as libraries hold books that are used in education too.

Now people (and companies) are making AI/LLMs out to be some magic thing that suddenly makes you smart and/or skilled. You need to learn how to use those tools. So you need the right people, with the right skills, and the right purpose. Meanwhile we have a bunch of flat-earthers on both sides of the aisle claiming all kinds of stuff: It's slop! It's magic! I'm right! No, I'm right! And that's more of a problem than anything else.

And this gets to the root of the reported problem. The AI companies sell their programs as this great thing, knowing there are flaws that might/will cause problems for the end user. Adding a little note that the answers might not be accurate doesn't fix that problem as long as they keep touting AI as the great problem solver. I would even go as far as to claim that the reporter in the clip used the AI the way it was supposed to be used, and it still produced a false result, showing that it was in fact the tool that was at fault.

As far as the term 'slop' being an issue, I'll leave that to @Morrus to respond to, but I think he's already shown several examples of AI producing what could be considered 'slop' and explained why this kind of "hallucination" leads to further 'slop' being created. In the video, for instance, it had written the transcript of an entire podcast that never took place. The journalist managed to make content out of it by reporting on the problem; otherwise it would just have been slop.
 

And this gets to the root of the reported problem. The AI companies sell their programs as this great thing, knowing there are flaws that might/will cause problems for the end user. Adding a little note that the answers might not be accurate doesn't fix that problem as long as they keep touting AI as the great problem solver. I would even go as far as to claim that the reporter in the clip used the AI the way it was supposed to be used, and it still produced a false result, showing that it was in fact the tool that was at fault.
Let me ask you this question: Did we really need to sue Red Bull to realize that Red Bull doesn't give you real physical wings? I hope the answer is "Of course not!", because people aren't stupid (it only happened in the US, btw). So why are so many people suddenly acting like "But we didn't know!" with AI/LLMs? Because people act stupid, by choice.

Companies sell products. They advertise, just like pnp RPG companies. WOIN has been advertised as "A toolkit oriented game system to create your own worlds...", so now step back from the pnp RPG fan perspective. Would a person who wants to create their own world in real life be able to use WOIN to do so? No, of course not. But what if some Sovereign Nation person, who doesn't know what a pnp RPG is and has zero interest in fantasy, saw this without any context and bought the product? What would happen? How would we treat a YouTube video condemning the product because it doesn't actually allow you to create your own world in real life? We would ridicule and criticize it. And from my perspective, this is exactly what is happening here: we're ridiculing and criticizing people for not realizing that they are not capable of using the tool correctly, for not using it correctly, for using it in the wrong capacity, etc.

"The only tool you'll ever need!" is a great marketing slogan for tools, but the amount of people that truly would believe that are few... So why are people claiming the opposite with AI/LLM? Similar stuff happened with cryptocurrency, one side claimed it would replace currency xyz in no time, others said it was doomed to fail, both were wrong. The hype has quieted down, it's still here, it's still being used and the dollar/euro are also still here.
 

Let me ask you this question: Did we really need to sue Red Bull to realize that Red Bull doesn't give you real physical wings? I hope the answer is "Of course not!", because people aren't stupid (it only happened in the US, btw).

The same stunt was actually attempted in Germany, with the added argument that lack of familiarity with the English language would increase the risk of the consumer being misled (unlike France, Germany doesn't have consumer protection laws requiring advertisements to be written in the country's official language).

The result was, unsurprisingly, a dismissal, with the plaintiff bearing the costs of the procedure, as no one could realistically think the slogan should be interpreted as anything other than an evocation of the properties of caffeine, irrespective of the actual content of this "energy drink", which doesn't provide more energy than other drinks.
 

Let me ask you this question: Did we really need to sue Red Bull to realize that Red Bull doesn't give you real physical wings? I hope the answer is "Of course not!", because people aren't stupid (it only happened in the US, btw). So why are so many people suddenly acting like "But we didn't know!" with AI/LLMs? Because people act stupid, by choice.

Companies sell products. They advertise, just like pnp RPG companies. WOIN has been advertised as "A toolkit oriented game system to create your own worlds...", so now step back from the pnp RPG fan perspective. Would a person who wants to create their own world in real life be able to use WOIN to do so? No, of course not. But what if some Sovereign Nation person, who doesn't know what a pnp RPG is and has zero interest in fantasy, saw this without any context and bought the product? What would happen? How would we treat a YouTube video condemning the product because it doesn't actually allow you to create your own world in real life? We would ridicule and criticize it. And from my perspective, this is exactly what is happening here: we're ridiculing and criticizing people for not realizing that they are not capable of using the tool correctly, for not using it correctly, for using it in the wrong capacity, etc.

"The only tool you'll ever need!" is a great marketing slogan for tools, but the amount of people that truly would believe that are few... So why are people claiming the opposite with AI/LLM? Similar stuff happened with cryptocurrency, one side claimed it would replace currency xyz in no time, others said it was doomed to fail, both were wrong. The hype has quieted down, it's still here, it's still being used and the dollar/euro are also still here.
As often happens here on enworld, comparing one thing to another turns out to be a moot point. Taking a hyperbolic slogan literally is not what has happened here. AI is being touted, not as a slogan but as fact, as able to gather information faster and more accurately than humans, to increase work efficiency, and to create things with naught but a prompt. These aren't slogans but features of the product the AI companies are trying to sell to the general public. It would be more comparable to the ingredients list on an energy drink: if it claims to contain caffeine but tests show that it does not, or not in the amounts specified, then the label carries false information, which is unlawful in many places. However, as I stated at the beginning, comparing one thing to another is unhelpful, and even my version of this analogy will fall apart under closer scrutiny.

What should be discussed is every likely or unlikely problem with LLMs and AIs, so that these can be addressed in proper ways. Perhaps future AI can help civilization move forward, but I would like it to come in a form that is ethically, ecologically, and economically sound, AND I would prefer its risks of malfunction to be as close to zero as possible. That's why I would like the negative points raised, illuminated, and discussed.

People have been talking about the dangers of AI for a long time, and still science has decided to go forward with it. Nothing we say or do seems able to stop this. Therefore, derailing discussions on forums because someone finds them too negative seems completely unnecessary.
 

People have been talking about the dangers of AI for a long time, and still science has decided to go forward with it. Nothing we say or do seems able to stop this. Therefore, derailing discussions on forums because someone finds them too negative seems completely unnecessary.

It would indeed be completely unnecessary for the purpose of mounting any effective action against AI research, or for enacting support for AI; you're right on both counts. But I don't think that's the goal of people engaging in discussion on a board such as this one. Their goal might, more modestly, be discussing "geek talk & media".
 
