> So, if it became cheaper per computing operation, or needed fewer operations,
There is no need for the "if" in your statement here. Below is a sample chart -- I'm sure you can find others. Even within the same model family, costs are dropping rapidly, and when you combine that with computational advances, the overall decrease is very large. And inference cost is almost all power cost, so the chart below essentially shows how quickly LLMs become 10x more efficient than they were before.
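To make that concrete: if cost per operation falls by some factor each year, the time to a 10x drop is a one-line formula. The 4x/year rate below is purely hypothetical -- read the real rate off a chart like the one above:

```python
import math

def years_to_10x_cheaper(yearly_factor: float) -> float:
    # If cost falls by `yearly_factor` each year, solve factor**t == 10 for t.
    return math.log(10) / math.log(yearly_factor)

# Hypothetical 4x/year decline: a 10x drop arrives in well under two years.
print(round(years_to_10x_cheaper(4.0), 2))  # 1.66
```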
> Increases in efficiency almost never result in reduction of power use -- they typically lead to an increase in use, because now use is cheaper! LED lightbulbs (and fluorescents before them) are more efficient than incandescents -- their rollout meant that people (including businesses) lit more places more brightly, and left the lights on longer, because they felt they could afford it.
So, let's do back-of-the-envelope math:
- A business needs lights 8 hours a day
- They start with incandescent bulbs using X energy an hour, so use 8X a day
- They switch to LEDs, using X/8 energy an hour, but start using them 24 hours a day, so use 3X a day
Even with your example, the new technology saves energy -- 3X a day instead of 8X! This example does not support your statement.
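The same arithmetic as a quick sanity check (X is just one bulb-hour of incandescent energy). The general rule: a rebound only wipes out the savings when usage grows by more than the efficiency factor -- here usage tripled, but efficiency improved eightfold:

```python
X = 1.0  # hourly energy of one incandescent bulb (arbitrary unit)

incandescent_daily = 8 * X        # 8 hours/day at X per hour
led_daily = 24 * (X / 8)          # 24 hours/day at X/8 per hour

print(incandescent_daily, led_daily)  # 8.0 3.0

# Rebound (3x more hours) is smaller than the efficiency gain (8x),
# so total energy use still falls.
net_increase = led_daily > incandescent_daily
print(net_increase)  # False
```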
> So, if it became cheaper per computing operation, or needed fewer operations, you can be quite sure that the response was, "Well, then use even more operations!" The basic limit will be a monetary one -- if they have a budget of $100 million, they will spend all of it, and buy the maximum number of machoflops they can get for it.
So, although this is a site dedicated to unreal worlds, I think everyone who works in any business will recognize that this is a fantasy. When management are told "a basic resource now costs less", they do not think "spend the whole budget anyway". They think either "yay, we make more money" or "we can buy more to do more". I am assuming the latter is what you are driving at.
And this is the core of your disconnect from the way business AI actually works. It's a very hyped area, and I can see why you might think that every company is throwing money at AI regardless of normal business planning. Maybe that was true at the very start, when costs were 100x-1000x what they are now. It's not true now, and it absolutely won't be true in 5 years, when the hype cycle has died down completely.
So yes, I use more AI than I did previously (within the same budget), because the same budget now buys more. But I use more AI because it makes (business) sense to do so.
Here's an example from recent work I've been doing: our company is piloting a product that transcribes and summarizes a patient's visit with a doctor. The patient has to agree to it, and the doctor has to review and edit the final result -- and there are various other safety measures, including manual checks and using other LLMs to judge the summary's quality.
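For the curious, the "other LLMs judge the summary" check is conceptually simple. Here's a rough sketch, not our actual code: `call_llm` is a hypothetical stand-in for whatever model API you use, stubbed out below so the example runs:

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call your model provider here.
    return "4"

def judge_summary(transcript: str, summary: str, threshold: int = 4) -> bool:
    """Ask a second model to score the summary 1-5 against the transcript;
    anything below `threshold` gets routed to manual review."""
    prompt = (
        "Score 1-5 how faithfully this summary reflects the visit transcript. "
        "Reply with the number only.\n\n"
        f"TRANSCRIPT:\n{transcript}\n\nSUMMARY:\n{summary}"
    )
    score = int(call_llm(prompt).strip())
    return score >= threshold

print(judge_summary("(visit transcript)", "(draft summary)"))  # True with the stub
```

In practice you would also handle unparseable replies and log the scores, but the gate itself is just "below the bar, a human looks at it".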
18 months ago, we could not have done this: the cost per visit would have been too high. But now, with the same (energy) budget, we can. This technology is a solid win. It means patients and doctors spend more time talking and less time with the doctor typing into a screen. The summaries, once validated and edited by the doctor, are as good as the ones produced the old way -- the way that ate up doctor-patient time (and added after-hours work for the doctor).
But the important point is that this was NOT a "hey, AI is cheaper, go use more of it" activity, as you characterize it. It came out of the bog-standard project-planning process: define what will be done, weigh the advantages against the costs, and go ahead if it's worth it (typically after a pilot).
The biggest thing AI companies are trying to do now is prove ROI, because we are exiting peak hype and entering the "I will do this only if it makes me money" part of the cycle. Do I think we will end up with more AI use? Sure, because there are enough use cases where it makes sense. But your statement implies that businesses are throwing money at AI without thinking, and that is not my experience. If AIs do not prove worth their energy cost, they will not be used.
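That ROI gate is just arithmetic. A sketch, with all numbers purely hypothetical -- plug in your own per-visit cost and time savings:

```python
def pilot_pays_off(cost_per_visit: float,
                   minutes_saved_per_visit: float,
                   doctor_cost_per_minute: float) -> bool:
    # Go ahead only if the value of saved doctor time exceeds the per-visit cost.
    value = minutes_saved_per_visit * doctor_cost_per_minute
    return value > cost_per_visit

# Hypothetical: at $5/visit against ~$3 of saved doctor time, it's a no-go.
print(pilot_pays_off(5.00, 2.0, 1.50))   # False
# Hypothetical: at $0.25/visit, the same saving now clears the bar.
print(pilot_pays_off(0.25, 2.0, 1.50))   # True
```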
Now, "worth the energy cost" includes value judgments: bitcoin mining uses a huge amount of energy and produces nothing of intrinsic value, so I am very comfortable saying stuff like that is heinous. But this is not an argument against the tool (LLMs or other models); it's an argument about the value of the result.
Which is one reason why I do AI now in a not-for-profit company rather than in the big company I used to work for.