Ruin Explorer
> Sure, some of the models fail in predictable ways. But what's shocking and interesting is that they also fail (and succeed) in ways that we don't understand.

What would be some examples of succeeding in "ways we don't understand"?
I ask because I have yet to see one. Some stuff works surprisingly decently, better than one might instinctively expect, but I'm not aware of any "not understood" successes. And a lot of "not understood" failures throughout history have been extremely well understood by basically everyone except the people creating the thing that failed!
> As someone who follows the field and was relatively unperturbed by all of this a year ago, I am truly shocked. Not just because of the leap in the forward-facing tech, but more importantly, because of the consumer-facing nature of it, which means that adoption and dissemination will be spreading that much faster.

I also follow the field (I dunno if more or less closely - probably less) and I have to say, I've not been shocked.
It's not that there's been some stunning novel advance - it's that tech that's fundamentally been around for a few years is now being tested on the public, essentially. The sheer volume of it isn't down to some switch being flipped - it's down to greedy businesses racing to roll out unoriginal tech before their competitors roll out nigh-identical tech.
> In other words, the more use-cases it has, the more use-cases it will generate.

Maybe? I'm not seeing that many genuine use-cases for any of this yet - i.e. ones which aren't just "it does a crap job, but at least we don't have to pay someone". AI code generation is a possible genuine use case, but I'm somewhat (tangentially) familiar with it through my job, and at least in my sphere it doesn't actually work very well in terms of creating real time savings. Perhaps it does in other fields.
AI art gets less impressive every day as it becomes more and more obvious how limited it is, and how easy it is to spot - not just the weird hands and teeth, but things like its obsession with facial lines, which makes everyone in its attempted photographs look like they're wearing a ton of contouring makeup. I'm not sure there are many real use cases that aren't extremely suspect (deception, semi-illegal/illegal niche porn, etc.). It's not bad for creating a scene to work from if your imagination is failing you, I guess.
Language models demonstrate a very impressive ability to sound human in text, and that's great, but they're constantly wrong, inaccurate, and misleading, they don't attribute sources, and they've been programmed to be variously irresponsible, evasive, stubborn, and fundamentally keen to create misunderstanding, which is, frankly, horrifying and borders on actively unethical design.
> This feels different.

False dawn, I'm telling you man. In five years or whatever we'll revisit this and see how much of it really created change and how much was a cool but largely useless deal.
People will lose jobs and stuff, but AI right now is just the outsourcing/offshoring of the 2020s. In the 2010s and a little before, outsourcing and offshoring were all the rage. You could save a huge amount of money by outsourcing various tasks, or even entire departments, or, if you weren't willing to outsource, by offshoring them somewhere cheap and with low standards. Millions of people, probably tens of millions or more, lost jobs because of this.
But did those jobs stay lost? Nah. From about 2014 onwards there was a ton of very quiet re-shoring and in-housing. Why? Because whilst the other options were cheaper on the balance sheet, they didn't produce the results companies wanted. I could go into extreme detail, but I strongly suspect the same will happen here to a large extent. AI will be useful in a lot of ways - particularly for identifying things like the spread patterns of malware - but I question how much further it will go until a new generation of it, with fundamentally different models, comes along.