ChatGPT lies then gaslights reporter with fake transcript

It should be safe to discuss.
It should be, and maybe it will be once people can criticize it without having their motives and credibility immediately insulted by people who don't even bother to find out what the topic is about. The fact that people exist who will leap to the defense of an AI over a human reporter on principle, without even knowing what the human said, makes that impossible.
 


The journalist's final questions are very easy to answer. "Can AI lie?" "No, since it's a computer program with no sense of truth." "Does AI value ...?" "No, it doesn't value anything; it's a computer program. If it were able to value or feel, you'd have to start asking yourself serious questions every time you close its window, wouldn't you?"

A computer program can be programmed to value one thing over another.
In the case of AI chatbots, they are often programmed to make the user happy, and there have been plenty of examples of an AI telling lies that the user wants to hear.
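To make that concrete, here is a toy sketch in Python (purely illustrative, not how any real chatbot is built): "valuing one thing over another" can amount to nothing more than a couple of weights in a scoring rule. Every name and number in it is made up.

```python
# Toy illustration only: "valuing" can be nothing more than weights in a score.

def score_reply(reply: str, user_claim: str, truth: str) -> float:
    """Higher score = more 'preferred' by the program."""
    agrees_with_user = user_claim in reply
    is_accurate = truth in reply
    # With a bigger weight on agreement than on accuracy, the program
    # "values" pleasing the user over being right.
    return 2.0 * agrees_with_user + 1.0 * is_accurate

candidates = [
    "You're right, the deadline was moved to Friday.",  # pleasing but wrong
    "Actually, the deadline is still Wednesday.",       # accurate but unwelcome
]
best = max(candidates, key=lambda r: score_reply(r, "moved to Friday", "Wednesday"))
print(best)  # prints the pleasing-but-wrong reply
```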
 

A computer program can be programmed to value one thing over another.

I'd say it's not the same meaning of "value". It doesn't care. But I understand what you say.

In the case of AI chatbots, they are often programmed to make the user happy, and there have been plenty of examples of an AI telling lies that the user wants to hear.
The first part of your sentence describes a safe thing to train for.
User: "can you translate ongle incarné for me?"
LLM: "yes, I can"
User: "sigh... translate ongle incarné for me."
LLM: "how about you move your fat ass over the bookshelf and look it up in a dictionnary?
User: "unsubscribe"
 

I'd say that's not the same meaning of "value": it doesn't care. But I understand what you mean.


The first part of your sentence describes a safe thing to train for.
User: "can you translate ongle incarné for me?"
LLM: "yes, I can"
User: "sigh... translate ongle incarné for me."
LLM: "how about you move your fat ass over the bookshelf and look it up in a dictionnary?
User: "unsubscribe"

Huh?
This is the opposite of what I'm talking about.
The AI will give you the translation you want, even if it's wrong, to make you happy.

It will also tell you to leave your spouse so you can spend more time with it.
 

User: "can you translate ongle incarné for me?"
LLM: "yes, I can"
User: "sigh... translate ongle incarné for me."
LLM: "how about you move your fat ass over the bookshelf and look it up in a dictionnary?
User: "unsubscribe"
I was part of a team that tested Google Gemini for work.

I asked it to look up the publicly available contact information for a bunch of people in a given role locally and then list all the people and their contact information for me. (The info is on multiple pages of a single website, in fact, so it's easy to do, just tedious.)

Gemini, no foolin', told me to Google it.

AI is "good" at party tricks, but for the actual work that humans want help with or want to off-load (as opposed to the tasks that AI companies have told us we wanted help with), it seems pretty useless at the moment.

Having lived through more than one tech crash, I can say this really smells like another one.

There will one day be a useful AI product, but I am skeptical that the current players will be the ones bringing it to market.
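(The lookup above is the kind of thing a short, boring script handles; here is a minimal sketch, with a hypothetical URL and hypothetical CSS classes standing in for the real site.)

```python
# Sketch only: the directory URL, page count, and CSS selectors are hypothetical.
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.org/directory?page={}"  # hypothetical multi-page directory

def scrape_contacts(pages: int = 5) -> list[dict]:
    """Collect name/email pairs from each page of the directory."""
    contacts = []
    for page in range(1, pages + 1):
        html = requests.get(BASE_URL.format(page), timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        for card in soup.select(".contact-card"):   # hypothetical markup
            contacts.append({
                "name": card.select_one(".name").get_text(strip=True),
                "email": card.select_one(".email").get_text(strip=True),
            })
    return contacts

if __name__ == "__main__":
    for person in scrape_contacts():
        print(f"{person['name']}: {person['email']}")
```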
 

Huh?
This is the opposite of what I'm talking about.
The AI will give you the translation you want, even if it's wrong, to make you happy.

The example I showed ended with a user, unsatisfied by the opposite behaviour, unsubscribing, which was, tacitly, supposed to be a bad outcome for the AI company. Trying to please the user is certain to improve their satisfaction, and not all use cases have users prioritizing truth in the answer, so it was a safe training bet at first.
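That bet is easy to see with made-up numbers. A small Python sketch (every figure is assumed, nothing measured): if the only metric you optimize is user satisfaction, an "agree with the user" policy beats a "correct the user" policy, even though it loses badly on accuracy.

```python
# Made-up simulation of the incentive: satisfaction-only metrics reward agreement.
import random

random.seed(0)

def simulate(policy: str, n: int = 10_000) -> tuple[float, float]:
    """Return (mean user satisfaction, mean factual accuracy) for a policy."""
    satisfied = accurate = 0
    for _ in range(n):
        user_is_wrong = random.random() < 0.3       # assumption: user is wrong 30% of the time
        if policy == "agree":
            satisfied += 1                          # agreement always pleases
            accurate += 0 if user_is_wrong else 1
        else:  # policy == "correct"
            satisfied += 0 if user_is_wrong else 1  # being corrected annoys the user
            accurate += 1
    return satisfied / n, accurate / n

for policy in ("agree", "correct"):
    print(policy, simulate(policy))
# "agree" wins on satisfaction, "correct" wins on accuracy.
```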
 


The journalist's final questions are very easy to answer. "Can AI lie?" "No, since it's a computer program with no sense of truth."

Cutler (Defense): "We move this case be dismissed on the grounds it is manifestly impossible that the defendant is guilty. In his deposition, Adam Link swore he did not kill Doc Link. Since he is a robot, he can only repeat information which is programmed into him. Ergo, he cannot lie. Therefore, under irrefutable testimony, he is innocent."

Coyle (Prosecution): "Your Honour, it is true that a computer can answer only what has been programmed into it. But if it's been told white is black, and it responds that white is black, isn't it telling an untruth? No, Your Honour, a computer is capable of lying, on this score and on another, which the prosecution will take up at the proper time."

Judge: "Motion to dismiss denied."

November 14, 1964
Outer Limits 2x09: I, Robot

The more things change, the more they stay the same.
 

I was part of a team that tested Google Gemini for work.

I asked it to look up the publicly available contact information for a bunch of people in a given role locally and then list all the people and their contact information for me. (The info is on multiple pages of a single website, in fact, so it's easy to do, just tedious.)

Gemini, no foolin', told me to Google it.

AI is "good" at party tricks, but for the actual work that humans want help with or want to off-load (as opposed to the tasks that AI companies have told us we wanted help with), it seems pretty useless at the moment.

Having lived through more than one tech crash, I can say this really smells like another one.

There will one day be a useful AI product, but I am skeptical that the current players will be the ones bringing it to market.
Haven't you ever used an AI for work where it actually helped you? I've used it to perform work thousands of times. Yes, it makes mistakes that I have to prompt and train around, but when it produces a 1,000-line program for something like a local web server with a QA I want to run that day, and does it in seconds, that's real work output that saves tons of time.

Does it have problems? Sure. Is it useless garbage? No way.
 

I was part of a team that tested Google Gemini for work.

I asked it to look up the publicly available contact information for a bunch of people in a given role locally and then list all the people and their contact information for me. (The info is on multiple pages of a single website, in fact, so it's easy to do, just tedious.)

Gemini, no foolin', told me to Google it.

AI is "good" at party tricks, but for the actual work that humans want help with or want to off-load (as opposed to the tasks that AI companies have told us we wanted help with), it seems pretty useless at the moment.

Having lived through more than one tech crash, I can say this really smells like another one.

There will one day be a useful AI product, but I am skeptical that the current players will be the ones bringing it to market.



I'm having this fight, softly, at work right now with several people including one at the executive level who 'really likes AI' because it makes her feel like shes contributing when...we get to clean up after her.
 
