ChatGPT lies then gaslights reporter with fake transcript

Speaking of Amazon.com, they operated at a loss for 8 years before turning in their first annual profit. It can take a while for these things to shake out.
And they had to wait for the dotcom bubble to burst and overall infrastructure to catch on and make them viable.
 

Their early losses were because they subsidized their insanely low prices (in the beginning) to undercut their competitors until their competitors were gone.

It's effectively the same thing that's happening now: VC money funding AI companies that are driving the tech that'll put millions out of work. Same thing, different day.
 

While Amazon was operating at a loss, it was still selling products that a lot of people wanted. The extremely high failure rate of enterprise AI projects, the multiple studies finding that AI frequently degrades productivity, and especially the end of the "all you can eat" offerings from AI services paint a picture where the road from operating at a loss to profitability has yet to appear and the flailing on the part of the AI companies is just beginning. It's not just that the profit isn't there yet - there's a very real possibility that there's not even a plausible model for future profit, and operating costs can't be paid without a constant IV drip of VC funding. Which will eventually run out when the cash is gone and investors still aren't getting anything out of this.

And that's not even taking into account the expensive legal fights with IP behemoths who are very good at getting their way in court - like Disney.

And as training gets more expensive, training data gets more difficult to acquire, and what training data can be acquired is increasingly polluted with the output of previous-gen models, there's a very real chance that LLM AI output becomes even more obvious shovelware than it already is (which is how I interpret 'AI slop' as a term: AI output being intent-free, largely context-unaware, homogenized non-content).

My favorite summation of AI content comes from an AMA thread with Daniel Abraham and Ty Franck, authors of The Expanse. "If you ask AI to make you a sandwich, it takes 100,000 sandwiches, grinds them up, and then injects the pulp into a sandwich shaped mold. You will get an object that looks like a sandwich, and contains things that sandwiches contain, but it will taste terrible."
 

I would love to see the entire industry fail, assuming it also failed globally, or at least change so that we got only the promised benefits (cures for diseases, cold fusion, the other good stuff) without the negatives (job losses, dumbing down of society).

I also hope it goes down that way, but I honestly think we're way past that point already. Even if it fails in the US, why would it also fail in China? Japan? The UK? Russia?

OpenAI, just one of the players, is selling a million new subscriptions per week. They have an est. 20 million paying customers now (source) and almost 1 billion users. Money is still pouring into AI. It isn't slowing down.

AI is also being deployed in more and more systems and devices daily. Do a lot of the zillions of new startups and AI projects fail? Of course they do. But in the tech space now, it's becoming quite hard to find a major tech company that isn't integrating AI into its products.

In 2024, OpenAI posted $3.7 billion in revenue, with $5 billion in losses. This past March, the company said it anticipated $12.7 billion in revenue for 2025, and $29.4 billion in 2026. Due to the high costs of data centers and GPUs, it does not expect to be cash-positive until 2029 — at which point it projects a whopping $125 billion in annual revenue. -Source

 

ChatGPT lies then gaslights reporter with fake transcript

For those claiming that AI slop is an unfair description, just watch how ChatGPT faked a transcript of a podcast which had not yet been uploaded, then doubled down, gaslighting the reporter by insisting he had uploaded it and that this was an accurate transcript. It was, of course, completely made up by the AI.
I could call what you say a 'lie', but that would be inaccurate: you just don't know what you're talking about. This isn't me trying to be offensive or trying to start an argument; this is how the current LLM situation is - people using it while misunderstanding how it works, what they should use it for, and how to use it. Similar shenanigans happened 25+ years ago with Google Search; the number of people who threw a hissy fit about how it 'sucked' and was 'useless' was just as painful. The problem wasn't with Google Search, just with the people using it.

LLMs are made to cobble together text strings, and they'll do that with what is fed into them, according to the 'rules' that were set. If they don't have the optimal text strings to cobble together, they'll make do with sub-optimal text strings. The optimal text strings are what you're looking for; the sub-optimal text strings are what we call hallucinations (what you call lying) - the thing that happens when you ask for something that doesn't exist in the LLM's reference material.
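
To make that concrete, here's a deliberately toy sketch in Python. The vocabulary, weights, and fallback token are all invented for illustration - no real model works at this scale or with a lookup table - but it shows the key property: a next-token sampler has no "I don't know" branch, so it emits something plausible-looking either way.

```python
import random

# Toy next-token table: each context maps to weighted continuations.
# A real LLM learns billions of weights; this invented table only
# illustrates the mechanism described above.
NEXT_TOKEN_WEIGHTS = {
    "the transcript": [("reads", 0.6), ("says", 0.3), ("shows", 0.1)],
}

def next_token(context: str) -> str:
    # There is no "I don't know" outcome: an unseen context still falls
    # back to some distribution, and a token gets emitted regardless.
    candidates = NEXT_TOKEN_WEIGHTS.get(context, [("plausible-filler", 1.0)])
    tokens, weights = zip(*candidates)
    return random.choices(tokens, weights=weights)[0]

print(next_token("the transcript"))            # grounded continuation
print(next_token("a podcast never uploaded"))  # confident filler: a 'hallucination'
```

The point of the sketch: sub-optimal output isn't a malfunction or a 'lie', it's the same sampling step running on worse material.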

A 'lie' is an intentional false statement. LLMs can't do anything intentionally; they are just complicated math stitching together text strings in particular patterns that are prompted by the user. These 'reporters' are either not fit to use LLMs, as so many people aren't, or they knew full well what would happen. Heck, many of the very nice and very nasty results out of LLMs are the product of massive prompting - long conversations you aren't seeing and built-up profiles - often used to make a point. A bit similar to how Google Search works these days...

Are LLMs extremely cool? Hell yeah! Are they better than sliced bread? Nope. Would I trust them with anything important? Hell no!

An LLM is a TON of monkeys with typewriters: if you are smart, you can use that effectively to get one of them to write the greatest novel in human history (you just don't know which one will do it). But for that to happen, you must first know how to recognize the greatest novel in human history... Translation: don't ask an LLM anything you don't already know the answer to.

Also, AI encompasses much more than just LLMs, and what you call AI 'slop' generally refers to AI systems other than LLMs - often generative AI making images, animations, music, etc.

Think of an LLM as technical support. From your perspective, you first call someone who has very little or no knowledge; if they can't solve it, you go a level higher; if they can't solve it... etc. From a backend perspective, you have X amount of people calling with technical issues, but only Y amount of absolute experts in their field - far from enough to service all the people calling. You use an army of monkeys at all kinds of competency levels to filter out what gets to those absolute experts. This works! Yeah, there are some total noobs at the lowest level, but if you know how to use and instruct them, they can take a load of work off your hands. The same goes for LLMs: if you use them in the right way for certain tasks, you can save a lot of time=money. But if you use them for the wrong jobs, or for everything, it's going to cost you more time=money.
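
As a rough sketch of that triage shape in Python - the tier names, the toy FAQ table, and the handler functions are all hypothetical, invented only to show the escalation pattern, not any real helpdesk API:

```python
# Hypothetical triage pipeline: cheap, fallible handlers first, experts last.
# Every name below is made up for illustration.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Tier:
    name: str
    handle: Callable[[str], Optional[str]]  # returns an answer, or None to escalate

def faq_bot(question: str) -> Optional[str]:
    known = {"how do i reset my password?": "Use the 'forgot password' link."}
    return known.get(question.lower())

def llm_draft(question: str) -> Optional[str]:
    # An LLM will always produce text; the discipline is in not trusting
    # it blindly. Here we pretend it declines anything non-trivial.
    return None

def human_expert(question: str) -> Optional[str]:
    return f"Expert answer to: {question}"

TIERS = [Tier("faq", faq_bot), Tier("llm", llm_draft), Tier("expert", human_expert)]

def triage(question: str) -> str:
    # Walk the tiers in order of cost; the expensive expert only sees
    # what the cheaper layers couldn't handle.
    for tier in TIERS:
        answer = tier.handle(question)
        if answer is not None:
            return f"[{tier.name}] {answer}"
    return "[unresolved]"

print(triage("How do I reset my password?"))               # handled cheaply
print(triage("Why does our cluster deadlock under load?")) # escalates to the expert
```

Used for the right jobs, the cheap tiers soak up volume; used for everything, every bad answer they confidently hand back costs you the time to detect and undo it.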

Umbran is right about how this is being sold to people, but it's also up to people to be critical of what they're being sold. When people buy a car, they won't believe a car salesman who tries to sell them a fusion-powered car that requires no fuel. But they blindly believe tech bros, influencers, and AI salespeople who claim an LLM will do the dishes, do the laundry, and, if they sweet-talk it, have sex with them... facepalm

That said, I get, with some work, better 'read aloud' texts than most official D&D adventures by TSR/WotC and many a third-party publisher. It's not a question of whether AI/LLM output is slop, but of when the AI/LLM slop becomes better than the human-generated slop. For AI/LLMs it all depends on who's prompting, what, how, etc. - and on recognizing when things go wrong (also very important) and starting a new conversation instead of continuing the same one. I made 300+ (room) descriptions for a mega dungeon; are they all of top-notch quality? Heck no! But neither were all the room descriptions in the original WotC product... I'll probably need to do another thousand for the complete dungeon and the custom expansions. Is it work I'm proud of? No. It's just necessary stuff that needs to be there for the experience to work - just as I'm not proud of serving chips and cola when hosting in person.

If we were to get someone who evaluates literary works to look at the ENworld products, chances are they wouldn't be rated highly - just like most pnp RPG products and related novels and short stories. Most campaigns are at the pnp RPG equivalent of fanfiction. From my perspective it's pretty easy to get better results than that out of LLMs, and with some work better results than many a large publisher. Will it be better than some of the pnp RPG gems? No.

Do you recognize that the modern tech space has a lot of snake oil salesmen in it? That they will lie to you, either blatantly or subtly, to sell a product?
Yep, this is a LARGE part of the problem, and in the IT space it's not just with AI/LLMs: even big established companies (like MS) use salespeople who either don't understand exactly what they're selling or are flat-out lying to you - in either case, they don't really care. It has become a large part of my IT job to vet what's being sold to a customer (my client): whether what's being sold actually works as advertised, what's missing, what it won't work with, etc. In the last 5 years I've stamped more MS products as 'unfit for production in their current state' than in the previous 20 years.
 



Why? The technology is supposed to serve us, not the other way around.

I was taught that Utopia is where the machines do all the heavy and dangerous work, leaving humans free to engage in art and intellectual pursuits. You are arguing that we should allow the machines to pick up the art and intellectual pursuits, and that we should change our behavior to enable it.

That sounds pretty backwards, to me.
Euh... Euh... Euh...

Technology is tools, and an LLM is a tool. You don't throw a hammer on the ground, shout "Do the dishes!", and expect to be obeyed.

Now, certain people shouting that AI/LLMs can do xyz does muddy the waters. But you also wouldn't believe those same people if they said your hammer could do the dishes...

And my hammer doesn't 'serve' me; it's a tool, used when appropriate, according to certain usage and safety instructions. Just because some people can use their bare fists to hammer in a nail doesn't mean we shouldn't be using a hammer to do the same.

Imho, writing or illustration does not equal art. It can be art, but in most cases it isn't. I would say that 99.9999% of pnp RPG products aren't art, just as 99.9999% of the stuff we read, watch on TV or the internet, or listen to isn't art. It's rote in a different form. And most art isn't good; it's just a form of expression.
 
