ChatGPT lies then gaslights reporter with fake transcript

I seem to be deeply offensive to many people on this subject, which isn't my intention.

It is a hot topic, on which people have strong feelings.

If I may: Your approach is... a bit blithe? To me, you come across as dismissive, and maybe a touch arrogant, on a topic that many people find terribly important. That's a recipe for cheesing people off.

Enter into the discussion as if 1) the people and how they feel matter, and 2) even if you don't fully agree, they might have a point you haven't fully understood, and your results might be better.

I understand what marketing is, but I'm past being marketed to. I am a consumer of the product. It sold itself after I started using it.

If you think you are past marketing, you are probably kidding yourself. Sorry, but human cognitive function is well-built for being constantly marketed to, especially if you are already a customer. Digging you even deeper into the product is a marketing goal, and is a large part of how product loyalty is built.

But, as for the rest, you realize that this discussion isn't about you, personally, right? We are talking about the impacts of the product, its design, and marketing on people in general. To which your personal thought that you are now "past it" is not material.

The company will still market it though, spin the benefits and try to manipulate people into buying it, spending more on it, as will every other AI company. That's less about AI and more about human greed and capitalism.

Yep, exactly.

You are, in effect, restating the point you appeared to be trying to dismiss before. You are now making my point for me. Thanks much.
 


I seem to be deeply offensive to many people on this subject, which isn't my intention. I think I've reached a point of acceptance and comfort using AI that many people here haven't yet, and so my blasé references are setting people off. I think most will also get to that point though, because they won't have a choice. Pandora's Box has already been opened.
Don't let the haters get you down, man. I thought your posts on this topic were much more rational and pragmatic than most of the others here.
 

Most work-related writing is not creative, nor about creating new knowledge. Predictive text is highly useful in a large number of applications. Which is why, for example, law firms have been using it for some years to produce most of their rote paperwork. It also has plenty of intriguing creative applications in the hands of a skilled human. For example, I recently had Notebook LM do a podcast of a study guide that I created, which my students rated as both useful and entertaining; I thought it was a very accurate discussion of my ideas.
I've only used the podcast idea a little, but it seems pretty cool. The ability to take a scientific paper and have it explained to me during a commute would be valuable.

Since I was doing thesis research on what we now call "generative AI" for use in tuning software for large particle colliders before the term "generative AI" was even coined, I feel I am well-educated on what it is actually good at. Thanks.
Umbran, if you don't mind me asking, which types of models were used in your thesis work? I'm curious because there is an argument (which I think you reject) that transformers changed the game in this regard by accounting for context in a way that previous architectures didn't, and that has led to some exciting emergent properties. Certainly the ANNs of yore were not scoring well on the math Olympiad. Or would you be willing to say more about why you don't think new architectures have an impact here?
 

LLMs are great at search, assuming you have the presence of mind to check the references.

But... that's the point, isn't it? That's even exactly what the video shows - if someone cannot trust the results, you actually aren't great at search! If you return things that don't exist, that's being BAD at search.

Especially when you are unreliable, AND several times more costly in energy use/computing power than regular search is. That's pretty much the opposite of "great" now isn't it?

It is important to note that LLMs don't actually "search". Where a traditional search engine is a combination of an exhaustively created and maintained catalog and lookup, an LLM is basically a very complicated text predictor. If the pieces of information you want happen to have been given sufficient weight when the thing was trained, you'll get your information. But if not, you will get whatever did happen to have the weight, with no regard whatsoever to what the content really is - which is where "hallucinations" come from.
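The distinction above can be made concrete with a toy sketch (all of the data below is invented for illustration): a search engine consults an explicit index and can only return documents that actually contain your term, while a text predictor emits whichever continuation carried the most weight in training, with no notion of whether that continuation is factual.

```python
from collections import defaultdict

# Traditional search: an explicit inverted index mapping each word
# to the documents that actually contain it.
docs = {
    1: "particle collider tuning software",
    2: "generative text models predict the next token",
}
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def search(word):
    # Lookup: returns only documents that really contain the word.
    return sorted(index.get(word, set()))

# Toy "text predictor": bigram counts learned from training text.
training = "the model predicts the next token the model predicts text"
bigrams = defaultdict(lambda: defaultdict(int))
words = training.split()
for a, b in zip(words, words[1:]):
    bigrams[a][b] += 1

def predict(word):
    followers = bigrams.get(word)
    if not followers:
        return None
    # The highest-weight continuation wins, regardless of whether it
    # is true or relevant -- this is the mechanism behind confident
    # wrong answers.
    return max(followers, key=followers.get)
```

Real LLMs are vastly more sophisticated than a bigram table, but the structural point holds: `search` can say "no results", while `predict` always produces whatever had the weight.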
 

It is a hot topic, on which people have strong feelings.

If I may: Your approach is... a bit blithe? To me, you come across as dismissive, and maybe a touch arrogant, on a topic that many people find terribly important. That's a recipe for cheesing people off.

Enter into the discussion as if 1) the people and how they feel matter, and 2) even if you don't fully agree, they might have a point you haven't fully understood, and your results might be better.
Understood and noted. Thank you. I'd said blasé, but blithe is more accurate. I think we agree though.

If you think you are past marketing, you are probably kidding yourself. Sorry, but human cognitive function is well-built for being constantly marketed to, especially if you are already a customer. Digging you even deeper into the product is a marketing goal, and is a large part of how product loyalty is built.
I didn't mean that I was literally immune to marketing, but I don't think I'm more susceptible to it than most. I became a user of ChatGPT after using the product, not before using the product based on the company's marketing. But their marketing did get the attention of a colleague, who then mentioned it to me. Similar to how I got into Apple iPhones. Lots of marketing, also a good product. (Also a big, scary manipulative megacorporation.)

But, as for the rest, you realize that this discussion isn't about you, personally, right? We are talking about the impacts of the product, its design, and marketing on people in general. To which your personal thought that you are now "past it" is not material.
I want to believe that...but I'm not so sure based on the tone of many of the responses I've gotten on threads about AI. They seem pretty biting and personal from the start.
 

But... that's the point, isn't it? That's even exactly what the video shows - if someone cannot trust the results, you actually aren't great at search! If you return things that don't exist, that's being BAD at search.

Especially when you are unreliable, AND several times more costly in energy use/computing power than regular search is. That's pretty much the opposite of "great" now isn't it?

It is important to note that LLMs don't actually "search". Where a traditional search engine is a combination of an exhaustively created and maintained catalog and lookup, an LLM is basically a very complicated text predictor. If the pieces of information you want happen to have been given sufficient weight when the thing was trained, you'll get your information. But if not, you will get whatever did happen to have the weight, with no regard whatsoever to what the content really is - which is where "hallucinations" come from.
That's why I added the qualifier about checking the references. I'm speaking as a user here--I used scholar for years, I use LLMs now, and LLMs are better.

You can choose to believe that I'm getting false information or not verifying things or tricking myself into thinking the results are better (etc.) if you want. All I can say is that has not been my experience.

Edit: Well, I think I should add to this that there may be some disconnect in how 'search' is thought of. I'm in agreement with those earlier in the thread that the reporter did a bad job in that they used LLMs in a fail state and then were surprised when it failed. If you are using LLMs for search, you should never ask them to repeat a significant amount of information verbatim. They will fail and you need to look at the original reference.

Where LLMs would be useful: "I remember a podcast from site X that talked about Y, but I can't remember the date or guest. Can you search the transcripts and return some that may be relevant?"
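The use case described here is essentially a relevance-ranking problem, and a minimal version of it can be sketched without any LLM at all (the transcripts and episode names below are invented for illustration): score each transcript by word overlap with the half-remembered query and return the best candidates.

```python
# Hypothetical episode transcripts, keyed by date and guest.
transcripts = {
    "2021-03-01 guest A": "we discussed dice probability and game design",
    "2022-07-15 guest B": "a deep dive into worldbuilding and maps",
    "2023-11-02 guest C": "probability math behind dice pools",
}

def find_relevant(query, top_n=2):
    # Score each episode by how many query words its transcript shares,
    # then return the top-scoring episode names.
    q = set(query.lower().split())
    scored = []
    for episode, text in transcripts.items():
        overlap = len(q & set(text.lower().split()))
        if overlap:
            scored.append((overlap, episode))
    scored.sort(reverse=True)
    return [episode for _, episode in scored[:top_n]]
```

An LLM-backed search adds fuzziness this keyword sketch lacks (it can match "chance of rolling a 6" to "dice probability"), which is exactly why the final step is still to open the returned transcript yourself rather than trust a verbatim quote.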
 


Before AI:
"Hahaha this guy can't do hands"
"I'm still working on it"

and with AI
"Look at how messed up the hands are with AI"

and now, depending where you look, AI is doing realistic hands and other actions.
I mean, if you want to see the Singularity, just look at "Will Smith eating spaghetti" from 2023 compared to "Will Smith eating spaghetti" now.
 

Before AI:
"Hahaha this guy can't do hands"
"I'm still working on it"

and with AI
"Look at how messed up the hands are with AI"

and now, depending where you look, AI is doing realistic hands and other actions.
Good point. Some of the folks who've dabbled with AI over the past few years appear to be missing how far it's come in that time. ChatGPT is leaps and bounds better than it was when I first started using it.

Like you said, the hands are one common refrain I still hear, but have you asked DALL-E to create a photorealistic closeup image of hands lately? Pretty incredible. Better than 99% of the students in human anatomy art classes.

Another one: web browsing. ChatGPT has gotten so much better at conducting online research and providing source links. It did not used to do that.

Or how about math? Remember when it really struggled with word problems and basic arithmetic? Yeah, those days are gone. Now when you ask it a math problem, it returns the correct result along with the Python it used to compute it!

I've always been fascinated by the tendency to characterize AI by its current failings without reasonable regard for what it CAN do or the trajectory it's on. Like, the fact that it can write almost flawless code snippets or a pretty good novella in 10 seconds, but still makes a few mistakes you'll have to edit or rerun, means it's total garbage? Really? Why not something like, "This is quite impressive, still flawed, but clearly advancing at a blistering pace"?
 

Before AI:
"Hahaha this guy can't do hands"
"I'm still working on it"

and with AI
"Look at how messed up the hands are with AI"

and now, depending where you look, AI is doing realistic hands and other actions.
Reminds me of this post

[attached image]
 
