ChatGPT lies then gaslights reporter with fake transcript

Uh... Uh... Uh...

Technology is tools, and an LLM is a tool. You don't throw a hammer on the ground, shout "Do the dishes!" and expect to be obeyed.

General public: "But the hammer seller told me so! How could I have known?" The only difference is that we're used to hammers as a society, so deep knowledge of hammering is expected, while we aren't used to AI. @Umbran pointed out that it took years to go from a society where smoking was a casual habit practiced by everyone (or at least every male) to tobacco-free countries like NZ. And I mentioned cars: it took nearly a century before road safety became a deep focus of public policy. I think the point is that it might take a long time for people to learn to use it. Which is expected. For some time, typing on a typewriter was a skill. We had large pools of secretaries for execs to type reports. Now we have collectively mastered typing, and execs just type their reports themselves. The path seems to be: nobody knows the tool ==> people able to use the tool get an advantage over others ==> the knowledge becomes widespread enough that it becomes unskilled knowledge.

We're at the very beginning of the curve.
 


I would love to see the entire industry fail, assuming it also failed globally, or at least change such that we only enjoyed the promised benefits (cures for diseases, cold fusion, the other good stuff) without the negatives (job losses, dumbing down of society).

I also hope it goes down that way, but I honestly think we're way past that point already. Even if it fails in the US, why would it also fail in China? Japan? The UK? Russia?

OpenAI, just one of the players, is selling a million new subscriptions per week. They have an est. 20 million paying customers now (source) and almost 1 billion users. Money is still pouring into AI. It isn't slowing down.

AI is also being deployed in more and more systems and devices daily. Do a lot of the zillions of new startups and AI projects fail? Of course they do. But in the tech space now, it's becoming quite hard to find a major tech company that isn't integrating AI into its products.



View attachment 418777 (Source)
There are methodological issues with the reported ChatGPT user counts: very little was done to detect logged-out users "double dipping" by using multiple devices, clearing their browser cache, etc., and there's also no accounting for how engaged any of those users remained. This data counts every person who so much as dropped one prompt in the text box just to see what the hype is about. I would also not be surprised if dozens of "users" are being generated by every illicit ChatGPT user trying to trick it into writing ransomware for them; when they fail, they clear their browser cache, change their VPN exit node, and try again.

Plus, trusting anything coming out of OpenAI is a tough sell as it is; they have little reason to be honest in their reporting since they're not publicly traded. These user counts are almost certainly also being inflated by bundling subscriptions with other things, like university enrollment. There are also reports from ChatGPT users that they're being offered deep discounts that, measured against the cost of operating these models (a cost which doesn't seem to be going down, unlike past success stories with scalable IT products), are likely unsustainable in the long run and may well be helping to cover up user churn that is far worse than it looks from the outside. There's a very real possibility that AI companies are more like WeWork than Uber, and that OpenAI's self-reported success and growth is a shell game to disguise the realization that their novelty shovelware generator isn't actually useful or reliable for creating products worth selling (or buying).

There's very little objective information that supports the idea that LLMs are being implemented in ways that are actually useful. Hallucinations are getting worse. The benchmarks showing AI success have profound issues. AI keeps getting more expensive per user to operate. Big finance is getting cold feet. AI agents can't reliably complete basic tasks, much less complex ones.

It's not like LLMs solve zero problems, but IMO the major issue with the AI/LLM discourse is how frequently the number of problems AI can reliably, efficiently solve is vastly overestimated. And that many of the problems LLMs do solve aren't problems reasonable people wanted solved - like how to more efficiently commit fraud, develop ransomware that avoids anti-malware software, clog the public Internet with slop text, etc.
 

This is all obviously true; the costs and social benefits are distinct from whether I find it useful.
I don't understand what distinct means here. How do you isolate the costs and social benefits from whether or not you find something useful? Aren't they all aspects of the 'thing,' aspects which we can divide into entities in theory, but not in practice?
 

I don't understand what distinct means here. How do you isolate the costs and social benefits from whether or not you find something useful? Aren't they all aspects of the 'thing,' aspects which we can divide into entities in theory, but not in practice?
I mean that whether or not I find it useful is relevant, but not decisive. If it is an environmentally costly and morally questionable technology (@Umbran's view), then my finding it useful may not make it worthwhile.

That said, I do think in practice it is hard to separate these concerns, and that folks with moral concerns are primed to see the results as not useful.
 

See folks, I was right, it IS about the Vineyards.

Altman wants to become a farmer.

"I think there will come a time when AI can be a much better CEO of OpenAI than me, and I will be nothing but enthusiastic the day that happens," he told Axel Springer CEO Mathias Döpfner in an interview this week.

At the top of the list is tending to his farm.

"I have a farm that I live some of the time and I really love it," he said. Over the years, he has also purchased multimillion-dollar residences in San Francisco and Napa, California, as well as a $43 million estate on the Big Island of Hawaii.

Before ChatGPT took off, Altman said he had more time on the farm, where he used to "drive tractors and pick stuff," he said.

It's the modern "let them eat cake."

What do you mean you don't already have massive estates around the world? What's wrong with you?
 


There are methodological issues with the reported ChatGPT user counts: very little was done to detect logged-out users "double dipping" by using multiple devices, clearing their browser cache, etc., and there's also no accounting for how engaged any of those users remained. This data counts every person who so much as dropped one prompt in the text box just to see what the hype is about. I would also not be surprised if dozens of "users" are being generated by every illicit ChatGPT user trying to trick it into writing ransomware for them; when they fail, they clear their browser cache, change their VPN exit node, and try again.

Plus, trusting anything coming out of OpenAI is a tough sell as it is; they have little reason to be honest in their reporting since they're not publicly traded. These user counts are almost certainly also being inflated by bundling subscriptions with other things, like university enrollment. There are also reports from ChatGPT users that they're being offered deep discounts that, measured against the cost of operating these models (a cost which doesn't seem to be going down, unlike past success stories with scalable IT products), are likely unsustainable in the long run and may well be helping to cover up user churn that is far worse than it looks from the outside. There's a very real possibility that AI companies are more like WeWork than Uber, and that OpenAI's self-reported success and growth is a shell game to disguise the realization that their novelty shovelware generator isn't actually useful or reliable for creating products worth selling (or buying).

There's very little objective information that supports the idea that LLMs are being implemented in ways that are actually useful. Hallucinations are getting worse. The benchmarks showing AI success have profound issues. AI keeps getting more expensive per user to operate. Big finance is getting cold feet. AI agents can't reliably complete basic tasks, much less complex ones.

It's not like LLMs solve zero problems, but IMO the major issue with the AI/LLM discourse is how frequently the number of problems AI can reliably, efficiently solve is vastly overestimated.
Regular disclaimer here: you can choose to believe whatever you want. Everyone has an opinion. With all due respect, though, quite a few people have shared examples of how they're using AI to perform work all the time. Multiple people have provided concrete examples of how they use it, described how they see others using it, and explained how helpful they feel it's been to them personally and professionally.

As for the number of users, there are multiple sources for that. Whether it's 1 billion, 800 million, or 400 million, those are all huge figures. Even if the count is at the low end of that range, say 400 million users right now, are you suggesting that ChatGPT is losing users?

Time will tell. We'll see where things are in a year. By then the picture should be clearer.
 



Weird how my grandpa and all the farmers I know skipped over this step when they decided they'd be farmers.

I mean, it's interesting: my grandfather was an electrical engineer, gave that up to go back and farm, and did that till he retired. I would love to be able to get land and just go farm it, but I don't have the quite literal millions it would take to do so.

Altman, of course, likely has hundreds of millions if not billions, so what does he care if he upends the world and ruins the lives of millions?

He's already got his farm.
 
