ChatGPT lies then gaslights reporter with fake transcript



Humans are often hallucinating Shakespeare's and other famous people's quotes.
I really hope this will be fixed in the next version of Humans.
 

It's a video in which, with a lot of emphasis and storytelling, the reporter discovers that AI hallucinations are a thing, proceeds to demonstrate them to his co-host, who seems to be discovering warm water right along with him, and concludes that AI will change our lives and workplaces but we need to be careful, all accompanied by a commentary saying: "AI slop does exist; this is one example."

This isn't a thread about the evil of AI in general: it ends with the reporter implying he'll keep using it (possibly more carefully than before, and maybe educating himself on how to use the tool to reduce occurrences of AI errors -- errors he obviously had no trouble detecting and correcting despite his professed total ignorance of the topic). A video about the evil of AI would end with a warning like "do not use it". I agree that the storytelling certainly points to a click-baity "look, AI is baaaaad" message, but that's reporting nowadays.

The accompanying commentary, "here is a single example of AI slop", proves that AI hallucinations exist, which probably no one denies. It doesn't imply this is a (-) thread on AI, but a discussion about AI hallucinations, in which an opinion like "sure, they exist, but with adequate precaution they are uncommon enough that AI still offers significant value for specific uses" sounds perfectly adequate.

If I were posting an illustration of a skill challenge going badly in my D&D game, and concluding that skill challenges are certainly part of the popular rules but aren't great all the time, wouldn't you consider someone saying "my experience with skill challenges is overall better than yours, and despite the flaw you experienced, it's nonetheless a good mechanic" to be on topic?
It's about the reason why AI is producing slop ("hallucinations"), with a video pointing to one example. Also, as has been discussed elsewhere, there are no - threads on enworld, and I feel keeping on topic should demand neither a + nor a -. If someone says "look at this example of AI producing slop", answering with "the journalist should have known better" or "it helps me with coding" isn't really keeping to topic. Answering instead with "these hallucinations have helped me in several ways and are part of a design choice" would have been an opposing view that kept on topic, and it is also closer to the example that you ended with there.
 

Humans are often hallucinating Shakespeare's and other famous people's quotes.
I really hope this will be fixed in the next version of Humans.
This is a cute quip, but it fails to take into account that human hallucinations are also, oftentimes, seen as problems by that same majority.
 

It's about the reason why AI is producing slop ("hallucinations"), with a video pointing to one example. Also, as has been discussed elsewhere, there are no - threads on enworld, and I feel keeping on topic should demand neither a + nor a -. If someone says "look at this example of AI producing slop", answering with "the journalist should have known better" or "it helps me with coding" isn't really keeping to topic. Answering instead with "these hallucinations have helped me in several ways and are part of a design choice" would have been an opposing view that kept on topic, and it is also closer to the example that you ended with there.
The issue is that the word 'slop' is generic. Examples like these are the equivalent of shining a spotlight on a bricklayer who performed piss-poorly and killed a patient while acting as a neurosurgeon. It's about the wrong people using the wrong tool for the job, not about all of us acting surprised at the results and suddenly deciding all bricklayers are crap. This is about clickbait, whether from a 'journalist' or a site owner...

Back in the early days, certain people claimed that search engines like Google Search would replace experts, since the 'common person' could look everything up themselves. The problem was that the 'common person' couldn't find what they were looking for, or found the wrong things. It isn't the first time people have come to me after trying to fix their PC for half a day by googling, only to make it worse. So much worse that fixing it took WAY more time than if they had just called me initially, when I could have fixed the issue in less than 5 minutes. This is why companies often lock down users' PCs so they can't make things worse. Search engines didn't make people smarter or more skilled; they made information more accessible. But it was the same with huge libraries: if you didn't know or understand the library system, good luck finding what you were looking for!

Now people (and companies) are making AI/LLMs out to be some magic thing that suddenly makes you smart and/or skilled. You need to learn how to use those tools. So you need the right people, with the right skills, and the right purpose. Meanwhile we have a bunch of flat-earthers on both sides of the aisle claiming all kinds of stuff: It's slop! It's magic! I'm right! No, I'm right! And that's more of a problem than anything else.
 
