Judge decides case based on AI-hallucinated case law

And somehow I can't imagine that there are (for example) many Indian rice farmers who are going to need AI.

Or toddlers.

Saying "8 billion people" is meaningless, in the long run.

Missing the point, which was to demonstrate scale in a way that "trillions" doesn't.

So, toddlers, having no money, will not be paying anything. Someone else will need to pay their share, increasing the cost burden on the rest of the population.

Thus, it is worse than the minimum I presented. Thank you for making my point for me.
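To put rough numbers on that, here's a minimal sketch of the per-head arithmetic. The total spending figure and the share of non-payers are assumptions for illustration, not figures from this thread:

total_spend = 1e12    # assume $1 trillion of AI spending to divide up
population = 8e9      # "8 billion people"
non_payers = 2e9      # assume a quarter (toddlers and the like) pay nothing
per_head_all = total_spend / population
per_head_payers = total_spend / (population - non_payers)
print(f"${per_head_all:.0f} per head if everyone paid; ${per_head_payers:.0f} per head otherwise")
# Output: $125 per head if everyone paid; $167 per head otherwise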
 


My statement includes reinforcement learning. That's how you train a generative AI.

In the end, the limiter is still the data. It's a statistics thing that you can't get around without violating the laws of thermodynamics.
So data and compute are different, right? You said "In order to cut the error in the results in half, you need to roughly quadruple the data input", which in turn requires more compute.
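That quoted claim amounts to an inverse-square-root scaling law, error proportional to N^(-1/2). A minimal sketch of the arithmetic (the baseline numbers are made up for illustration):

def expected_error(n_samples, base_error=1.0, base_n=1e6):
    # Assumed scaling law: error falls as the inverse square root of data size,
    # so quadrupling the data halves the error.
    return base_error * (base_n / n_samples) ** 0.5

for n in (1e6, 4e6, 16e6):
    print(f"{n:,.0f} samples -> relative error {expected_error(n):.2f}")
# 1,000,000 -> 1.00; 4,000,000 -> 0.50; 16,000,000 -> 0.25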

But that's if the new data is primarily of the same quality as old data. There is a big effort now to get human-curated, high-quality datasets targeted for specific domains. Mercor, for example, hires people to generate this data in a variety of fields.

This gives you a way to improve without needing more compute. Yes, you need to have the high-quality data, and that's expensive, and maybe it won't be so easy given that the good data that was previously easy to get (bc piracy) is now much harder...etc. etc. We can have a whole conversation about that.

But speaking narrowly, datacenter size is just one limit on your success rate. There are many other factors.

If my generative AI cannot give a weather report without making up cities that don't exist, using it to back agents is not going to make the results better.
Which results? Why would I use an LLM to generate a weather report?
 

All those who knew it wouldn't be easy please raise your hand.
 


I think Umbran's point was that if US AI-model-making companies go bust, no other company will be able to afford training models further to reach a better level than what we have now, despite huge datacenters having been built and left without customers (and therefore a heavily depressed price for computing power), with models being produced by non-profit entities. So basically, a weird scenario where there would be nobody to buy the assets even for a dollar.
China will do it.
 


This shows a critical misunderstanding of how the technology works.

There are a lot of ways to express your idea, and you chose to be insulting. I'll not interact with you further as a result.


For the other readers, who might have been confused about my point, I wanted to say that infrastructure built to train models won't disappear, which lessens the (still very high, due to maintenance and energy) cost of trying to create an effective model. I wasn't discounting operational costs, of course. For example, one of the problems datacenter owners face in the US is the power grid, and the necessary investment will be a one-off (for decades). Once Google has built its planned nuclear power plants to power its datacenters, they won't go poof tomorrow even if Google goes bankrupt.

Also, a deeply depressed compute price will be reflected in the price of training runs, which are already lower than they were a few years back. DeepSeek R1's training cost was "only" $5 million, and Google's Gemini's $79 million. Mistral, as a company, probably never had the means to spend even a single billion, and it has produced a large number of models. Is it leading the market like billions-spending OpenAI, training models on a mere 24k GPUs while the leaders have much, much more? No, it's trailing slightly behind; it simply takes more time. Even if the financial incentive to build many new datacenters concurrently is deeply lessened, that will probably slow the rate of model production, but it won't make new labs unable to enter the field.

And while one may question the financial sense of building further models and bearing those costs, most of the research and high-quality models are provided by universities (and, as @Maxperson put it, China will do it) without a clear monetization goal. I'm also sure a few sovereign countries could be interested in an LLM listening to every phone call and email and reporting citizens with noncompliant thoughts, even if there is no money to be made with that. Unfortunately, I am also convinced that those same actors will not be put off by the occasional upright citizen being accused of hallucinated offenses.


Which has no bearing on the cost of running the model once it is working. And the article we're speaking of postulated that we're already there for the use case he mentioned. So no more investment is needed anyway.
 

Missing the point, which was to demonstrate scale in a way that "trillions" doesn't.

So, toddlers, having no money, will not be paying anything. Someone else will need to pay their share, increasing the cost burden on the rest of the population.

Thus, it is worse than the minimum I presented. Thank you for making my point for me.
Your math is solid, but how the money comes in varies.

Municipalities, counties, states, governments spend money on AIs. That gets charged against all taxpayers. Corporations spend truckloads on AIs, and that money eventually comes from their customers. And while a toddler might not be spending money, people and schools might be spending money because of that toddler.

This isn't about sales to individuals; the big money comes from corporations and governments.

Just the Fortune 500 companies made just shy of $20 trillion in 2025. If AI, between targeting, building, in-house efficiency, and whatever else, gives them a couple of percentage points extra, then even if the cost is a full half of what they make extra, it's bonus money for them. Then you have all the smaller companies (still large, just not Fortune 500) trying to copycat that.
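As a back-of-the-envelope check of that claim (illustrative arithmetic only; the 2% uplift is an assumed stand-in for "a couple of percentage points"):

revenue = 20e12    # Fortune 500 revenue, just shy of $20 trillion
uplift = 0.02      # assumed "couple of percentage points" from AI
extra = revenue * uplift    # extra revenue attributable to AI
net = extra - extra / 2     # even if AI costs eat a full half of it
print(f"extra: ${extra / 1e9:,.0f}B, net after AI costs: ${net / 1e9:,.0f}B")
# Output: extra: $400B, net after AI costs: $200B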

How many billions might a government pay for a farm of AI social media bots that spread disinformation in a rival country? (Seen in the real world.) What would an authoritarian country pay for identifying and keeping track of any subversive citizens? (Suggested by Larry Ellison, billionaire founder of Oracle.)

They haven't made these ridiculous investments without some sort of business plan. Second-guessing it, without access to their data, to claim the opposite presupposes that they are incompetent. And given the amounts of money being invested, they probably aren't, even if they are working with lower confidence because it's such a new technology.

I'm not saying this in defense of AI, just that those investing think they can make money even with these huge costs, and poking holes in why they think so is not likely to be simple.
 

Do we have any figures on where machine learning has in fact improved efficiency or productivity for any company or sector? Not anecdotal "my job is better now" or "I hear XCo is working on some neat stuff" but actual hard figures? Last time I looked, OpenAI was pouring billions into an empty hole with no signs of profits. Will machine learning actually ever make money? And that's counting firing people - sure, that counts as short-term savings at least.
 

Do we have any figures on where machine learning has in fact improved efficiency or productivity for any company or sector?

We don't have industry-wide hard figures because AI only just started being sold as a product. I doubt the Internet started "making money" in its initial phase either (which explains a stock market collapse, but had no significant effect on the technology).

We did get a lot of complaints from image-making professionals who thought they would be displaced by generative AI, but none I saw provided hard numbers. Them being out of work would mean that the people who used to hire them, and relied on AI instead, saved those dollars; but then again, there are no hard figures I know of, especially aggregated at the industry level.

We get some measurement, though, from research:

The National Bureau of Economic Research reports that on a measurable metric (number of issues resolved per hour by a support team), the productivity increase (which translates into money made) depends on the expertise of the staff being AI-assisted, but averages 14%: Generative AI at Work

Depending on how you see it, companies can translate that into money either by reducing the time to answer a ticket (improving customer satisfaction, leading to more sales) or by simply firing the now-surplus low-level customer support workers.
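To see what a 14% gain means in practice, here's a minimal sketch; the team size and baseline resolution rate are invented for illustration:

baseline_rate = 2.0    # issues resolved per agent-hour (assumed)
agents = 50            # hypothetical support team size
gain = 0.14            # average uplift reported by the NBER study
extra_per_hour = agents * baseline_rate * gain    # added throughput
agents_needed = agents / (1 + gain)               # same throughput, fewer staff
print(f"{extra_per_hour:.0f} extra issues/hour, or ~{agents_needed:.0f} agents instead of {agents}")
# Output: 14 extra issues/hour, or ~44 agents instead of 50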

OpenAI is spending a lot of money because they want to beat Anthropic (who's spending a lot of money) and Facebook (who's spending a lot of money). They don't care only about quality; what matters to them is being the first to reach the goal. For end users, it doesn't really matter who gets there first, and that's where those companies might overestimate their returns.
 
