ChatGPT lies then gaslights reporter with fake transcript

I've noticed that the marketing department where I work has stopped all mention of "AI" or "Artificial Intelligence" at the start of this quarter. This is very different from last quarter, when they were mentioning both in practically every paragraph of ad copy they could find. Suddenly, at the start of this quarter, it's all crickets. Instead they say something generic like "software tools" or "algorithms," if they say anything at all.

Not sure what changed, but it sure changed in a hurry.

Ah good, my work will catch up in a few quarters then. :LOL:
 


"LLMs often return untrustworthy results" is a pretty unobjectionable statement. The jump to "Therefore LLMs have no use cases/LLMs only produce slop" is what I see as getting pushback.
I'll try again. Consider the truth of statements like "LLMs have no safeties to prevent, or even tag, possible slop, so they aren't worth my time and effort in production environments." When that insight (which can't be false, since it's a statement of judgement) is combined with the hype their boosters have produced and the irresponsible way LLMs are being used and monetised, people might also say "To save my time and attention for important matters, I'll presume that those who use LLMs are either incompetent or untrustworthy, and feel comfortable saying so to people who use them in practical life."

Who are we to dissuade them? There's nothing untrue in those sentences.

What @Sacrosanct pointed out is that the key issue is whether it's worth the time and energy to find out if you can believe what a computer tells you was "said," and whether you know "who" actually "said" it. That is what the article in the initial post was about, too: is this worth the time and effort? I'd say not right now, and not tomorrow either.

As for supposed use cases: LLMs don't fold proteins, do math, or analyse medical images. What are their use cases? Is there any use for them other than producing speech that was never thought?
 


What @Sacrosanct pointed out is that the key issue is whether it's worth the time and energy to find out if you can believe what a computer tells you was "said," and whether you know "who" actually "said" it. That is what the article in the initial post was about, too: is this worth the time and effort? I'd say not right now, and not tomorrow either.

That's where the disconnect comes from. I know very well that Paul Deschanel was the 10th president of the Third Republic. But I might have a lapse of memory, and asking ChatGPT will give me a quicker result than going to the library or checking websites for that information. Do I need to check it? No; it is just recalling common knowledge, and the recollection is enough. For more specific knowledge, it might be that I don't know exactly when he fell from his train; asking ChatGPT for the answer with sources gives me the Wikipedia link and the correct answer, much quicker than googling for it and reading the whole article. Research time plus checking time is quicker than research time without this tool. It saved me a few seconds to a minute; not great, but still.

Of course, you're free to think I am untrustworthy and incompetent because my experience differs from yours. That wouldn't be true or false; it would be a value judgement of yours. What would be pertinent for assessing the usefulness of AI isn't what we "feel" about things but conducting extensive analysis of productivity gains. Anecdotes like the one in the original post, or mine, have no bearing on the answer. Pretending they do, by generalizing from them, will be met with pushback, not just from people who think otherwise, but also from people who value logical thinking.
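
To make that concrete, here is roughly what that workflow looks like when mechanised. A minimal sketch, assuming the openai Python SDK and requests; the model name and the Wikipedia URL are illustrative stand-ins for whatever the model actually cites:

[CODE]
# Sketch: ask an LLM for an answer *with sources*, then spot-check the source.
# Assumes the `openai` and `requests` packages and an OPENAI_API_KEY in the env.
import requests
from openai import OpenAI

client = OpenAI()

question = "When did Paul Deschanel fall from a train? Cite one source URL."
resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice, not a recommendation
    messages=[{"role": "user", "content": question}],
)
print(resp.choices[0].message.content)

# The cheap checking step: confirm the cited page exists and actually
# mentions the subject, instead of trusting the model's recollection.
# In practice, use whatever URL the model cited in its answer.
url = "https://en.wikipedia.org/wiki/Paul_Deschanel"
page = requests.get(url, timeout=10)
page.raise_for_status()
print("Source mentions Deschanel:", "Deschanel" in page.text)
[/CODE]

The point is only that the check is a few seconds of reading, not a second research project.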
 

Yes, if one has to double-check everything it does, that is 2x the workload. So if I am doing structural steel calculations for an elevator tower, it's pointless; I am not touching that. I do see where people churning out useless documents can get away with it, because nobody reads those anyway. I do my share of that too, though the important stuff I often triple-check anyway. Doubling the workload also doubles the cost; nobody is doing anything for free.
AI tends to excel at tasks where the work product takes a lot of effort (or requires a certain level of creativity) but can be validated quickly.

For example, I can look at an AI art piece and tell very quickly whether it's good, decent, or garbage. But the time to create that piece could be hours or days. That is a use case where AI does well.
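
You can put that asymmetry into a back-of-the-envelope formula: AI pays off when generation time plus review time beats doing the work by hand. A minimal sketch; the numbers are made up purely for illustration:

[CODE]
# Back-of-the-envelope model of when AI-assisted work pays off:
# it wins when (generation + review) time beats doing it by hand.
def ai_pays_off(human_hours: float, ai_hours: float, review_hours: float) -> bool:
    return ai_hours + review_hours < human_hours

# Art piece: days to create by hand, minutes to judge the result.
print(ai_pays_off(human_hours=16.0, ai_hours=0.2, review_hours=0.1))  # True

# Structural steel calcs: verifying costs as much as doing them yourself.
print(ai_pays_off(human_hours=4.0, ai_hours=0.2, review_hours=4.0))   # False
[/CODE]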

And when I say creativity, before people jump on that word, I mean using its incredibly vast knowledge base to generate answers I would never even consider. For example, I recently used it to give me suggestions on software products that might be good for my team. It came up with some answers completely outside my experience, and it was easy to look them up on the internet and confirm they were real products that would be useful. So that's what I mean by "creative": compared to my limited "box," it can think very far outside the box.


Another key aspect of AI is that it is good for small pieces rather than full projects. In software terms, if you have it write a specific function, it can actually do quite well a lot of the time (and functions are relatively easy to test and bug-fix). But when you have it write an entire application in one go, it stumbles a lot.
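
To illustrate why the small pieces work: a single well-specified function is cheap to validate, which is exactly what makes it a good LLM task. A hypothetical example (not from any real session) of the function-plus-throwaway-test pattern:

[CODE]
# The kind of narrow, well-specified task LLMs often handle well:
# one function, trivially checkable with a handful of asserts.
def roman_to_int(s: str) -> int:
    """Convert a Roman numeral string to an integer."""
    values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = values[ch]
        # Subtract when a smaller numeral precedes a larger one (IV, XC, ...).
        total += -v if values.get(nxt, 0) > v else v
    return total

# Validation takes seconds; an entire application never checks this cleanly.
assert roman_to_int("XIV") == 14
assert roman_to_int("MCMXC") == 1990
assert roman_to_int("III") == 3
[/CODE]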

In writing terms, if you have the AI focus on a specific chapter, do some editing and cleanup to make it sharp, and then repeat that chapter by chapter, you can churn out a decent product a lot faster than writing it yourself. But AI use still takes work; the idea that you can just type in a prompt and generate a quality novel is a fallacy with the current technology.
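
That chapter-by-chapter workflow is basically a loop with a human edit step in the middle. A rough sketch, where draft_chapter is a hypothetical stand-in for whatever LLM call you would actually make:

[CODE]
from typing import Callable

# Hypothetical stand-in for an LLM call; not a real API.
def draft_chapter(outline: str, previous_chapters: list[str]) -> str:
    raise NotImplementedError("call your LLM of choice here")

def write_book(
    chapter_outlines: list[str],
    human_edit: Callable[[str], str],
) -> list[str]:
    finished: list[str] = []
    for outline in chapter_outlines:
        draft = draft_chapter(outline, finished)  # AI does the heavy lifting
        finished.append(human_edit(draft))        # a human cleans it up
    return finished
[/CODE]

The human_edit step is where the real work still lives; skip it and you are back to the one-prompt novel fallacy.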
 


My position is that AI is good at some things, not everything, but marketing types are trying to sell it as an everything tool.

Really? I am pretty sure they do try, but in my professional and personal experience I haven't been exposed to such claims, only to more moderate ones. In my job we've been pitched a dedicated legal AI to boost productivity by searching databases of precedents and extensive private legal articles from reputable law journals, and that sounded quite focused. And we're evaluating it to see if it's worth paying for, not firing half the legal assistants before knowing what the actual benefits are. I'd say that my environment is saner than most, given the experiences reported here. Maybe that's because it was an AI tool peddled by the legal database maintainer, not an AI tool made by an AI company (well, it is certainly ChatGPT under the hood, but not directly). I agree with you that marketing types tend to exaggerate the qualities of their products (are they producing human slop?).
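
An evaluation like that can be kept honest with a small gold-standard test: take queries where we already know the controlling precedents and measure how many of them the tool surfaces. A rough sketch; tool_search and the data layout are assumptions for illustration, not the vendor's real API:

[CODE]
# Sketch of a before-you-pay evaluation for a legal search AI:
# measure recall against queries whose relevant precedents are already known.
def tool_search(query: str) -> list[str]:
    raise NotImplementedError("hypothetical stand-in for the vendor's API")

def recall_at_k(gold: dict[str, set[str]], k: int = 10) -> float:
    """gold maps each test query to the precedent IDs it should surface."""
    hits = total = 0
    for query, relevant in gold.items():
        returned = set(tool_search(query)[:k])
        hits += len(returned & relevant)
        total += len(relevant)
    return hits / total  # fraction of known precedents the tool found
[/CODE]

If the recall number justifies the subscription, you have a benefit you can point to; if not, nobody needed to be fired to find out.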
 

I've noticed that the marketing department where I work has stopped all mention of "AI" or "Artificial Intelligence" at the start of this quarter. This is very different from last quarter, when they were mentioning both in practically every paragraph of ad copy they could find. Suddenly, at the start of this quarter, it's all crickets. Instead they say something generic like "software tools" or "algorithms," if they say anything at all.

Not sure what changed, but it sure changed in a hurry.
The shine has worn off.

People thought AI was ready to just be given the keys and generate magic. It is an incredible tool and it continues to advance... but at the end of the day it's more like a human than a computer. It makes mistakes, it lies, it has to be managed. People have gotten so used to computer systems being "practically perfect" that they have forgotten what a human-heavy process looks like... it requires real work to ensure a good-quality product.

AI can generate things at a speed no team of humans can match... but it requires real scrutiny and QA review. A lot of people just threw themselves into everything AI and did not understand the product that they had. AI is amazing... and it's terrible. It has great use cases... and garbage ones.
 

AI tends to excel at tasks where the work product takes a lot of effort (or requires a certain level of creativity) but can be validated quickly.

For example, I can look at an AI art piece and tell very quickly whether it's good, decent, or garbage. But the time to create that piece could be hours or days. That is a use case where AI does well.

And when I say creativity, before people jump on that word, I mean using its incredibly vast knowledge base to generate answers I would never even consider. For example, I recently used it to give me suggestions on software products that might be good for my team. It came up with some answers completely outside my experience, and it was easy to look them up on the internet and confirm they were real products that would be useful. So that's what I mean by "creative": compared to my limited "box," it can think very far outside the box.


Another key aspect of AI is that it is good for small pieces rather than full projects. In software terms, if you have it write a specific function, it can actually do quite well a lot of the time (and functions are relatively easy to test and bug-fix). But when you have it write an entire application in one go, it stumbles a lot.

In writing terms, if you have the AI focus on a specific chapter, do some editing and cleanup to make it sharp, and then repeat that chapter by chapter, you can churn out a decent product a lot faster than writing it yourself. But AI use still takes work; the idea that you can just type in a prompt and generate a quality novel is a fallacy with the current technology.
When I was writing my RPG Solis People of the Sun/Kosmic, my gf convinced me to try AI, and what it produced was wrong, plagiarized, or boring. I understand that I am very niche, writing near-future, hard SF, and solarpunk with probably the most accurate star maps. Other science fiction writers have complimented my maps as a valuable resource.

The best I have heard that AI does is creating summaries, such as notes from a meeting. That's fine, but at the same time it shouldn't be pushed for everything, including the things it can't do. That is just not honest.

My other hobby is cars and trucks, American V8s, and AI has zero to do with any of that. I recently bought a truck: someone was trying to get their 4x4 to work, couldn't, and asked me if I wanted to buy it. I said yeah. It wasn't a big fix, but they said they were sick of it. I fixed it quickly (the encoder motor on the transfer case wasn't aligned properly, and it needed to do a relearn), and I sold it quickly. The buyer said they always trusted me when I sell a vehicle because I am honest. This whole trend of business becoming constant, dishonest, used-car-salesman hucksterism is just not good.
 


