So, since this thread bubbled up to the top again, I thought I might note something I ran across which is germane to this discussion. To wit: The language we use matters.
Back in the 1950s, when computer scientists and engineers started thinking about (and working on) the possibility of building computers that mimic human thought, they landed on calling it "artificial intelligence" because, well, they were scientists, it described what they were trying to make, and it sounded cool and futuristic, which is great if you are trying to get money to support research.
And "artificial intelligence" stuck. And Isaac Asimov wrote robot novels, and it was good. And everyone stays focused on how it can be as good as or better than a human at cognitive tasks. It became all about how AI could replace humans, for good or ill.
But, if we ask normal folks what they want to DO with AI, the answer is "I want it to help me do X."
So, how would it have gone if we hadn't called it "artificial intelligence", and instead called it "assistive technology"? Same basic tech underneath.
Assistive technology is still certainly valuable. Our corporate masters would still be interested in assisting their workers with tasks, making them more efficient, giving them tools. "Assistive tech" doesn't suggest general intelligence, though. It suggests helping people with focused tasks, which the tech is better at doing anyway.
Most importantly, "assistive tech" is worthwhile, but not worth going crazy over. Like, your corporate master isn't going to have FOMO over not having assistive tech NOW. So, no economic bubble, no threats to energy and water resources getting chewed up by data centers. Gradual development of functionality, which might still end in "general intelligence", but adopted at something like a reasonable pace...