I don't think the current LLM approach is going to "get really good", because of three factors:
1) They're out of training material, even having stolen everything that's not nailed down and a lot of stuff that was. There are attempts to get around this and stop the models eating their own feces, but so far they seem pretty desperate and not very effective. It's basically throwing money at the problem, which, as we all know, often fails completely in tech.
2) It's genuinely very expensive to operate LLMs, it's not getting cheaper, and it's not paying for itself. No-one has a business model yet that properly covers its costs, or can even articulate one that isn't just a fairly expensive usage-based charge or subscription. Right now it's essentially a loss-leader balanced on the hopes, and in some cases outright fantasies, of very senior tech execs and VCs, two groups not particularly known for reliably "getting it right" over the last 20-odd years. To pay back even some of the investment, they're going to have to charge some REALLY big money over the next decade.
3) LLMs aren't actually that well understood, even by the people who created them, which makes the tech much harder to refine than it would be if it were well understood. To be clear, this isn't in a scary, spooky way; it's in a "we don't know how to make it do exactly what we want" way.
That's not to say it won't gain acceptance, but I suspect that will come by one of two paths:
A) Sheer time. Gen Alpha and the generations after them are being raised on this and will likely see it as intrinsic, assuming the financial bubble it's in doesn't burst before then (which wouldn't eliminate it, but might hugely attenuate it). But that's realistically 15-20 years away, or more.
B) A different approach replacing all this prompt-based black-box LLM stuff, probably one built from the start in a way that doesn't have the same flaws (various paths are being researched, but it's a long way off).
We shall see, but I'm increasingly skeptical rather than concerned, especially as current LLMs seem to have stalled for some time now and are even backsliding a bit in some ways. What progress has been made too often amounts to "throw 10x as many processors and 10x as much power at the thing to get a marginally better result", which also makes it 10x to 100x as expensive in most cases and at least 10x as environmentally destructive (and the bills for the latter are only going to come due over the next decades).