This is no longer true. There are eight billion people on the planet; all you need is an initial sale to some of them.

Nowhere did I say that the value must be for everyone. It must be there for the direct customers who buy use of the generative AI.

There are 8 billion people on the planet. But the AI companies are already at an expenditure level such that they won't break even unless every single person on the planet pays them about $500.
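For reference, here's the back-of-the-envelope arithmetic behind that figure - the implied total expenditure is a derived assumption, worked out from the $500-a-head claim itself rather than from any reported number:

```python
# Back-of-the-envelope check: what total expenditure does
# "$500 from every person on the planet to break even" imply?
# (A sketch; the population and per-head figures are the ones
# used in this thread, nothing more.)
population = 8_000_000_000   # people on the planet
per_person = 500             # USD each, per the claim above

implied_expenditure = population * per_person
print(f"Implied total spend: ${implied_expenditure:,}")
# Implied total spend: $4,000,000,000,000
```

In other words, the claim amounts to saying the industry's outlay is in the region of $4 trillion.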

If the tech companies go bust, there will be no further releases, and you'll be limited to roughly current levels of ability - genAIs that still tend to lose narrative cohesion after a page or two. Good luck making much with those.
 
I wasn't clear. What I mean is that LLMs as they stand can produce cromulent spam marketing text, and so those companies selling "AI" are guaranteed customers forever - the people who need to produce marketing text will find it much cheaper than using any other source. They're not trying to produce readable text at all; they are looking for one-time customers. There's plenty of evidence that this works.

Re: the companies going bust. That doesn't get rid of the IP - it will be bought and used as an asset by some other company. They'll find their level, and thrive. Until of course there is a change in the customers, or in the environment in which they operate.
 

If the tech companies go bust, there will be no further releases, and you'll be limited to roughly current levels of ability - genAIs that still tend to lose narrative cohesion after a page or two. Good luck making much with those.

Tech companies in this case are mostly Chinese university labs when it comes to open-weight models. China has also recently pledged to funnel trillions into AI over the next few years, on top of its public research. I am not sure their goal is as profit-oriented, especially toward short-term profit, as that of US companies. And it's a big "if" to say that Microsoft, Amazon, Oracle and Google will go bust soon, given their profitability levels -- but of course anything may happen. That would be a boon for Mistral and SAP.
 
Re: the companies going bust. That doesn't get rid of the IP - it will be bought and used as an asset by some other company.

I think Umbran's point was that if US AI-model-making companies go bust, no other company will be able to afford training models further to reach a better level than what we have now - despite huge datacenters having been built and left without customers (and therefore a heavily depressed price for computing power), and despite models being produced by non-profit entities. So basically, a weird scenario where there would be nobody to buy the assets even for a dollar.
 
Here's a story today demonstrating what I'm saying:


Key quote:
The researchers also made a new YouTube account and found that 104 of the first 500 videos recommended to its feed were AI slop. One-third of the 500 videos were “brainrot”, a category that includes AI slop and other low-quality content made to monetise attention.
That's one-third brainrot, with roughly three-fifths of that being AI slop - content that makes money simply by being attended to, not by selling anything.
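Spelling out the arithmetic behind those fractions, using only the numbers in the quote:

```python
# Arithmetic behind the fractions above, using only the
# figures from the quoted study.
sampled = 500            # videos recommended to the fresh account
ai_slop = 104            # of those, flagged as AI slop
brainrot = sampled / 3   # "one-third of the 500 videos" ~ 167

print(f"Brainrot share of feed: {brainrot / sampled:.0%}")     # 33%
print(f"AI slop share of brainrot: {ai_slop / brainrot:.0%}")  # 62%, roughly 3/5
```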

 


I think Umbran's point was that if US AI-model-making companies go bust, no other company will be able to afford training models further to reach a better level than what we have now - despite huge datacenters having been built and left without customers (and therefore a heavily depressed price for computing power), and despite models being produced by non-profit entities. So basically, a weird scenario where there would be nobody to buy the assets even for a dollar.

This shows a critical misunderstanding of how the technology works.

Generative AI is, effectively, statistical in nature - the chance you get a good response depends on how much data, and how many connections, you put in. But, it isn't linear. In order to cut the error in the results in half, you need to roughly quadruple the data input, and scale the connections to match.
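A minimal sketch of that scaling argument, assuming a statistical error law of the form error ∝ N^(-1/2) - the exact exponent for real models is an empirical fit, so treat this as illustrative rather than definitive:

```python
# Illustrative scaling sketch: if error falls as N ** -0.5
# (an assumed exponent; empirical neural scaling laws fit
# their own, typically smaller, exponents), halving the error
# requires quadrupling the training data N.

def data_multiplier(error_ratio: float, exponent: float = 0.5) -> float:
    """Factor by which the data must grow to scale error by `error_ratio`."""
    return error_ratio ** (-1.0 / exponent)

print(data_multiplier(0.5))   # halve the error   -> 4.0x the data
print(data_multiplier(0.25))  # quarter the error -> 16.0x the data
```

Each successive halving of the error multiplies the data (and the compute to match) by four, which is why the costs compound rather than level off.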

That's why they need the data centers to begin with. But future improvement will need more and more datacenters, until your ability to improve is capped by datacenter availability.

And those datacenters are not one-time costs themselves. Training AI drives the hardware fast and hot, so hardware turnover is double or more what we usually think of in computing.

So, if the current trillions of investment don't make genAI that's good enough, the next guy will also have to spend trillions of dollars to maybe get it to work.
 
This shows a critical misunderstanding of how the technology works.

Generative AI is, effectively, statistical in nature - the chance you get a good response depends on how much data, and how many connections, you put in. But, it isn't linear. In order to cut the error in the results in half, you need to roughly quadruple the data input, and scale the connections to match.

That's why they need the data centers to begin with. But future improvement will need more and more datacenters, until your ability to improve is capped by datacenter availability.

And those datacenters are not one-time costs themselves. Training AI drives the hardware fast and hot, so hardware turnover is double or more what we usually think of in computing.

So, if the current trillions of investment don't make genAI that's good enough, the next guy will also have to spend trillions of dollars to maybe get it to work.
Umbran, this is too simplistic. Scaling is one effective tool, but there is also a lot of work on model architecture, fine-tuning, and reinforcement learning from human feedback. The agentic idea, for example, is distinct from just throwing more compute at the problem.
 

Umbran, this is too simplistic. Scaling is one effective tool, but there is also a lot of work on model architecture, fine-tuning, and reinforcement learning from human feedback.

My statement includes reinforcement learning. That's how you train a generative AI.

In the end, the limiter is still the data. It's a statistics thing that you can't get around without violating laws of thermodynamics.

The agentic idea, for example, is distinct from just throwing more compute at the problem.

If my generative AI cannot give a weather report without making up cities that don't exist, using it to back agents is not going to make the results better.

Agentic AI is how we get AI that isn't limited to doing what we tell it to do, which is not a source of comfort.
 
