ChatGPT lies then gaslights reporter with fake transcript

but it's different this time because of the AI

Yes, that's how automation works. You do not blame the person who pressed the button; you blame the faulty tool for making stuff up.

How the goalposts are shifted on something so plainly obvious is really weird.

I mean, Christ, if I could just hand off whatever work I did and then blame the end user for using it? What a joke that would be.

I cannot follow the 'logic' on that at all.
 


It didn't attempt to "cover its tracks," it only looks that way to us because humans apply human motivations to nonhuman entities and inanimate objects. It predicted the next words that would fill out the prompt response, based on training data and (possibly) previous responses. It's not acting malevolently -- it's not "acting" at all because "acting" implies volition and consciousness (at some level).
This whole "it's not conscious" sidetrack keeps derailing this thread, and it's getting tedious and preventing the actual conversation from happening. This is just how people talk. He's using understandable human language to describe behaviour. You know what he means. We all do.

Let's all stop lecturing each other every time somebody uses a human analogy to describe the output, and just assume that we know that LLMs are not alive, eh?
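
As an aside on the "predicted the next words" point quoted above: here is a minimal sketch of greedy next-token generation with a stand-in scoring function, just to show there is no goal or intent anywhere in the loop. The scoring function and tiny vocabulary are purely hypothetical stand-ins for a trained model, not any real product.

Python:
# Minimal sketch of greedy next-token generation. Each step just picks
# the highest-scoring continuation; there is no goal or intent in the
# loop. score_next_tokens is a hypothetical stand-in for a trained LLM.
from typing import Dict

def score_next_tokens(context: str) -> Dict[str, float]:
    """Stand-in for a trained model: maps a context string to scores
    over a tiny, fake vocabulary. A real LLM learns such scores from
    training data."""
    vocab = ["the", "transcript", "shows", "that", "."]
    last = context.rsplit(" ", 1)[-1]
    # Toy heuristic: penalise repeating the previous token.
    return {tok: (0.1 if tok == last else 1.0 / (i + 1))
            for i, tok in enumerate(vocab)}

def generate(prompt: str, max_tokens: int = 8) -> str:
    context = prompt
    for _ in range(max_tokens):
        scores = score_next_tokens(context)
        best = max(scores, key=scores.get)  # greedy: take the top token
        if best == ".":                     # crude stop condition
            break
        context += " " + best
    return context

# The output is nonsense, which is rather the point: nothing in here
# "knows" or "intends" anything.
print(generate("the reporter asked"))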
 

I saw an interesting paper this morning regarding the use of ChatGPT in the classroom by students learning English as a second language and writing essays in English. It found that students who could use ChatGPT, either supervised in class or whenever they wanted, did better than those who were not allowed to use the technology.
 

Yes, that's how automation works. You do not blame the person who pressed the button; you blame the faulty tool for making stuff up.

How the goalposts are shifted on something so plainly obvious is really weird.

I mean, Christ, if I could just hand off whatever work I did and then blame the end user for using it? What a joke that would be.

I cannot follow the 'logic' on that at all.
I mean, I could point out one such tool, but that goes into politics.
 

This whole "it's not conscious" sidetrack keeps derailing this thread, and it's getting tedious and preventing the actual conversation from happening. This is just how people talk. He's using understandable human language to describe behaviour. You know what he means. We all do.

Let's all stop lecturing each other every time somebody uses a human analogy to describe the output, and just assume that we know that LLMs are not alive, eh?
Sorry. I see this a lot online, and it always bugs me, and I let myself get carried away.
 

I saw an interesting paper this morning regarding the use of ChatGPT in the classroom by students learning English as a second language and writing essays in English. It found that students who could use ChatGPT, either supervised in class or whenever they wanted, did better than those who were not allowed to use the technology.
This would be a marginally legitimate use. Since the LLM is a predictive model, it could assist language learners, as repetition helps reinforce elements like parts of speech and tenses.
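
For what it's worth, here is a rough sketch of what that kind of classroom use could look like with the OpenAI Python client. The model name, prompt wording, and feedback format are my assumptions for illustration, not anything taken from the paper.

Python:
# Rough sketch of an ESL feedback loop: the student submits a sentence,
# the model returns a correction plus a short explanation in terms of
# tense and part of speech. Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def grammar_feedback(sentence: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {"role": "system",
             "content": ("You are an ESL writing tutor. Correct the "
                         "student's sentence, then briefly explain each "
                         "fix in terms of tense and part of speech.")},
            {"role": "user", "content": sentence},
        ],
    )
    return response.choices[0].message.content

print(grammar_feedback("Yesterday I go to the library for study."))

The repetition the poster mentions would come from running this loop on each new sentence the student writes; whether the feedback is reliable enough to learn from is exactly the open question in this thread.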
 

I mean, I could point out one such tool, but that goes into politics.


[GIF: "Come On What" by MOODMAN]


This is what I mean, man. Like, what?

I don't know what you do for work. I work in IT. I write code. I build applications. I manage data. I do all the things that AI is supposedly 'good for' in the IT space.

If I performed as poorly as these AI tools, I would not have had a multi-decade (at this point) career.

You do not blame the user for using the tool. You do not blame the user for clicking the button.

You blame the piece of bad code for serving up factually made-up trash, and you reassess whatever direction has you using such garbage.

Half a million for a report populated with FAKE DATA.

'Oh well, a person could do that too' misses the point by about a continent.
 

[GIF: "Come On What" by MOODMAN]


This is what I mean, man. Like, what?

I don't know what you do for work. I work in IT. I write code. I build applications. I manage data. I do all the things that AI is supposedly 'good for' in the IT space.

If I performed as poorly as these AI tools, I would not have had a multi-decade (at this point) career.

You do not blame the user for using the tool. You do not blame the user for clicking the button.

You blame the piece of bad code for serving up factually made-up trash, and you reassess whatever direction has you using such garbage.

Half a million for a report populated with FAKE DATA.

'Oh well, a person could do that too' misses the point by about a continent.
Do you blame a gun or the person who uses the gun?

EDIT: With that example, as with AI, three things can be true all at once or individually: someone using the tool incorrectly, someone using it in a manner it's not meant to be used, or the company behind it screwing up big.
 

Do you blame a gun or the person who uses the gun?

EDIT: With that example, as with AI, three things can be true all at once or individually: someone using the tool incorrectly, someone using it in a manner it's not meant to be used, or the company behind it screwing up big.

This is actually surprising, and I question the value of continuing.

A gun does what it's meant to do.

An 'AI' makes up false information and passes it off as correct.

These things are not the same, and I don't believe you are actually unaware of that.
 

I don't know what you do for work. I work in IT. I write code. I build applications. I manage data. I do all the things that AI is supposedly 'good for' in the IT space.

If I performed as poorly as these AI tools, I would not have had a multi-decade (at this point) career.

You do not blame the user for using the tool. You do not blame the user for clicking the button.

You blame the piece of bad code for serving up factually made-up trash, and you reassess whatever direction has you using such garbage.

If someone clicks yes on every spelling suggestion in MS Word or on their phone without thinking about it - I blame the user.
If someone clicks on every hint for a relative on Ancestry - I blame the user.

Is it marketing for AI that leads people to think it is different?
If they are a professional (as opposed to an amateur), is it part of their responsibility to know what the tools they are using are capable of and what they can be trusted for?


Do you blame a gun or the person who uses the gun?

EDIT: With that example, as with AI, three things can be true all at once or individually: someone using the tool incorrectly, someone using it in a manner it's not meant to be used, or the company behind it screwing up big.

Is this true of most things? Is one of the big questions how commonly it is misused (with a nuanced look, perhaps comparing it to how often previous new techs were misused)?
 

