ChatGPT lies then gaslights reporter with fake transcript

It would indeed be completely unnecessary for the purpose of mounting any effective opposition to AI research, or for building support for AI; you're right on both counts. But I don't think either is the goal of people discussing it on a board like this one. Their goal might more modestly be to talk about "geek talk & media".
Sure, but my engaging in discussions about why people post on boards like these would again derail this particular thread, which is about AIs "hallucinating" and producing 'slop'.
 


I'm glad for artists that all the naysayers about AI taking jobs were wrong. I still feel job replacement will happen at some point, but I'm glad it isn't happening to them yet, as I understand that my view of a society freed from the burden of having to work as a utopia isn't shared by all.
Agreed. Sooner or later, it's coming for many of those remaining creative jobs, IMO. Sadly. It's deeply worrisome and disappointing, but I've consistently said from the start that I believe it's happening regardless. The quality of AI generative art has dramatically, steadily improved over the past three years. We were just talking, at length, about the six-fingered hands and dogs with five legs and two tails. Three years ago you almost could not get one of these tools to produce an anatomically correct human hand. Now it's trivial.

People have a tendency in this debate to refer to AI's current or past failings, with little regard for its trajectory and how quickly it's advancing. Try to follow the trajectory out another three years. Are you still so sure about what it will/won't be?
 


My issue (among many) is that the program attempted to cover its tracks. I honestly suspect this is a feature, not a bug, i.e., programmers are instructing the LLM to double down. The program can't lie, but the programmers can through the program.
 


Deloitte Australia will partially refund the 440,000 Australian dollars ($290,000) paid by the Australian government for a report that was littered with apparent AI-generated errors, including a fabricated quote from a federal court judgment and references to nonexistent academic research papers.

[GIF: "Oops", Workaholics season 7]
 

My issue (among many) is that the program attempted to cover its tracks. I honestly suspect this is a feature, not a bug, i.e., programmers are instructing the LLM to double down. The program can't lie, but the programmers can through the program.
To the extent that they are, via fine-tuning and the like, it's a tricky balance. You don't want the model to agree with whatever the user tells it, so it has to be somewhat stubborn. But that same stubbornness can lead to it doubling down on false information. Similar to people in that way. The toy sketch below illustrates the tradeoff.
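As a purely hypothetical illustration (the function name, weights, and scoring are made up for this post, not any lab's actual training objective), you can think of the tuning target as balancing two competing rewards:

```python
# Toy sketch of the tension described above -- NOT any real lab's objective.
# If w_agree dominates, the model becomes a sycophant; if w_stick dominates,
# it doubles down on its own mistakes.
def toy_reward(agrees_with_user: bool, sticks_to_prior_answer: bool,
               w_agree: float = 0.5, w_stick: float = 0.5) -> float:
    """Score a response on two competing desiderata (hypothetical)."""
    return w_agree * agrees_with_user + w_stick * sticks_to_prior_answer

# A model tuned only to agree caves instantly; one tuned only to be
# consistent never admits an error. Both extremes score "perfectly".
print(toy_reward(True, False, w_agree=1.0, w_stick=0.0))  # 1.0: pure sycophant
print(toy_reward(False, True, w_agree=0.0, w_stick=1.0))  # 1.0: pure stubbornness
```

Real fine-tuning has to find a middle setting of those weights, which is exactly why "somewhat stubborn" and "doubles down on false information" come from the same dial.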
 

My issue (among many) is that the program attempted to cover its tracks. I honestly suspect this is a feature, not a bug, i.e., programmers are instructing the LLM to double down. The program can't lie, but the programmers can through the program.
It didn't attempt to "cover its tracks"; it only looks that way to us because humans apply human motivations to nonhuman entities and inanimate objects. It predicted the next words to fill out the prompt response, based on its training data and (possibly) previous responses. It's not acting malevolently; it's not "acting" at all, because "acting" implies volition and consciousness (at some level).
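For anyone curious what "predicting the next words" looks like mechanically, here's a minimal sketch using the open-source Hugging Face transformers library (the small "gpt2" model and the prompt are just illustrative stand-ins, not how any commercial chatbot is actually deployed):

```python
# Minimal next-token-prediction loop. At every step the model just picks a
# likely continuation -- there is no internal flag for "true", "false",
# or "covering my tracks".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The transcript of the interview reads:",
                return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[0, -1]   # scores for every possible next token
        next_id = torch.argmax(logits)      # greedy: take the most likely one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))  # plausible-sounding text, true or not
```

The loop never consults a fact database or an intent; it just repeatedly picks a high-probability continuation. Fluent fabrication and apparent "doubling down" fall out of that same mechanism.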
 

Deloitte Australia will partially refund the 440,000 Australian dollars ($290,000) paid by the Australian government for a report that was littered with apparent AI-generated errors, including a fabricated quote from a federal court judgment and references to nonexistent academic research papers.
While we could blame the AI/LLM for that, it's really par for the course for Deloitte, even before LLMs became generally available: Deloitte - Wikipedia

I also wonder why the Australian government would go into business with a company with such a... shady history in the first place?

My issue (among many) is that the program attempted to cover its tracks. I honestly suspect this is a feature, not a bug, i.e., programmers are instructing the LLM to double down. The program can't lie, but the programmers can through the program.
I have serious questions about that. Is that an outlier, or is it due to prior prompting and/or the user's profile? Generally, when I ask an LLM something (at least the ones I've tried) and I know its answer is wrong, it backpedals faster than a threatened rabbit.
 


