Divinity video game from Larian - may use AI trained on its own assets to help development

I think it's a great idea, and it takes away one of the main complaints about AI: that it's trained on assets taken from others without permission. If they're able to leverage AI to make their game more quickly, without losing quality, then I think that's a win.
It is still a huge "L", because it will still make the process more environmentally wasteful and it will still encourage further use of an inherently immoral and environmentally destructive technology.
 


Generative AI is a tool that will be part of the future. Your position, while valid, isn't sustainable. It will be used more and more, and not just in software.

This isn't pro-AI; it's acceptance that it's not going away. Look at how corporations (and governments) are accepting and pushing it -- it will have the financial push to become part of the world.

Early automobiles could break people's arms when you crank-started them. I am sure many railed against the loud, smoke-spewing menaces. Still, they were here to stay.
Generative AI is not comparable to automobiles, to the degree that making the comparison is entirely laughable.

There is literally no major societal benefit to generative AI, neither currently nor potentially. It will never make life better for anyone but oligarchs.

The idea that it is "the future", as if inevitable, is a lie you have bought into from the would-be techno-oligarchs who are selling a toxic product that they know is toxic.
 

It is still a huge "L", because it will still make the process more environmentally wasteful and it will still encourage further use of an inherently immoral and environmentally destructive technology.
Curious, can you unpack what you mean by "inherently immoral" for generative AI? It's the "inherently" that's getting to me, because I have big problems with unethically sourced training material, but if they are only training it on human-created assets they own, I don't have an ethical issue. So I'm trying to understand how it is inherent that it is immoral. It can't be the environmental issues; you list those as a separate item.

For example, they've trained it on decades of weather patterns, and it can do predictions quicker and with much less compute than the big weather prediction simulations. That means that they (a) are better for the environment than the simulations (less energy, cooling, water, etc.) and (b) can get warnings of hazardous weather out quicker, which can save lives, especially in shipping. Can you explain how that use of generative AI is inherently immoral, since it satisfies my ethics about sourcing and avoids your environmentally destructive aspect, leaving just the "inherently immoral" term?
 

There is literally no major societal benefit to generative AI, neither currently nor potentially. It will never make life better for anyone but oligarchs.
Citation, please. If you're going to make such a claim as the foundation of your argument, can you support it? The points are that it does not have the potential to have a major societal benefit, and that it will never make anyone's life better (except oligarchs).

The idea that it is "the future", as if inevitable, is a lie you have bought into from the would-be techno-oligarchs who are selling a toxic product that they know is toxic.
Okay, please explain how you see it not being the future. Considering corporate and governmental adoption of it, explain to me how it is a "lie" (your words) that it will stay around.
 

AI Task | Per Unit | 30 Minutes | 60 Minutes
ChatGPT Text Generation (1 prompt/min) | ~0.3 Wh | ~9 Wh | ~18 Wh
Image Generation (e.g., DALL·E) | ~2.9 Wh/image | ~58 Wh | ~116 Wh
Audio Generation (e.g., MusicLM) | ~5 Wh/minute | ~150 Wh | ~300 Wh
Video Generation (estimated) | 3–10 Wh/minute† | ~180–600 Wh | ~360–1200 Wh
YouTube Video Streaming (HD) | ~12 Wh/5 min | ~144 Wh | ~288 Wh

† Video generation estimates remain variable depending on platform, resolution, and model complexity. Current figures are drawn from public estimates and indirect measurements such as Synthesia Case Study and studies on frame-level inference scaling.

Note: Figures are for data center/server-side energy, and do not include device, network, or edge delivery (unless noted).
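For context, the 30- and 60-minute columns above are just the per-unit figure multiplied by an assumed usage rate and duration. A minimal Python sketch of that arithmetic (the usage rates here are assumptions; only the ChatGPT row states its rate explicitly):

```python
def session_energy_wh(wh_per_unit: float, units_per_minute: float, minutes: float) -> float:
    """Session energy: per-unit cost x assumed usage rate x duration."""
    return wh_per_unit * units_per_minute * minutes

# ChatGPT text generation: ~0.3 Wh/prompt at 1 prompt/min (rate given in the table)
print(session_energy_wh(0.3, 1.0, 30))  # 9.0  -> matches the ~9 Wh column
print(session_energy_wh(0.3, 1.0, 60))  # 18.0 -> matches the ~18 Wh column

# Audio generation: ~5 Wh per generated minute, assuming continuous generation
print(session_energy_wh(5.0, 1.0, 30))  # 150.0 -> matches the ~150 Wh column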
  • Training: This is the heavy-lifting stage where a model like GPT-4 is created. It requires massive amounts of data, compute, and time. Training a single large-scale model can consume millions of kilowatt-hours and take weeks or months of 24/7 GPU usage across thousands of servers. Training is capital- and energy-intensive, but it is an infrequent and centralised operation. In some enterprise scenarios, however, models are fine-tuned, refreshed, or retrained regularly — adding incremental training costs that shouldn’t be ignored.
  • Inference: This is what happens when you use the model — e.g., asking a question, generating text, or solving a problem. Inference is lightweight by comparison, typically consuming just a fraction of a watt-hour per query. It is decentralised and real-time, and the energy cost is proportional to how many interactions take place.
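To make the training-vs-inference split concrete, here is a rough sketch of how a one-off training cost amortises across queries. Every input here is an illustrative assumption, not a measured figure:

```python
# Amortised training energy vs per-query inference energy.
# All inputs are illustrative assumptions, not measured figures.
TRAINING_ENERGY_KWH = 10_000_000        # assumed one-off training run: "millions of kWh"
LIFETIME_QUERIES = 1_000_000_000_000    # assumed 1 trillion queries over the model's life
INFERENCE_WH_PER_QUERY = 0.3            # per-query figure from the table above

amortised_training_wh = TRAINING_ENERGY_KWH * 1000 / LIFETIME_QUERIES
print(f"Amortised training: {amortised_training_wh:.3f} Wh/query")  # 0.010
print(f"Inference:          {INFERENCE_WH_PER_QUERY:.3f} Wh/query") # 0.300
```

Under these assumptions, the one-off training cost is dwarfed by the cumulative inference cost, which is why scale of usage matters more than the training run itself.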
The following figures represent estimated energy use per individual user over typical engagement durations:
Platform/Service | Per Interaction | 30 Minutes | 60 Minutes | Key Notes
TikTok (1 short video ≈ 15 s) | ~10.4 Wh per video | ~1250–2500 Wh (1.25–2.5 kWh) | ~2600–5000 Wh (2.6–5 kWh) | High-resolution short video, autoplay, high-engagement loop
YouTube HD Streaming (5 min) | ~12 Wh per 5-min video | ~180–360 Wh | ~360–720 Wh | Varies by resolution and device
Facebook/Instagram Browsing | ~3.3–5.5 Wh per scroll/post | ~60–100 Wh | ~120–200 Wh | Includes video, image loading, and backend AI feeds
ChatGPT (1 prompt) | ~0.3 Wh | ~9 Wh | ~18 Wh | Turn-based, ephemeral inference only
Note: YouTube and TikTok energy figures are based on 2019–2021 estimates; actuals may now be lower due to ongoing efficiency improvements in streaming and content delivery.
While energy use per unit of content might be lower (especially with caching and CDN delivery), the scale and continuous nature of usage is what drives up the carbon footprint of these platforms.
Even a five-minute YouTube video, viewed 1 billion times, works out to roughly 12 billion watt-hours (12 GWh) consumed globally at the ~12 Wh-per-view figure above.
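That figure is plain arithmetic on the per-view estimate from the table; a quick sketch:

```python
WH_PER_VIEW = 12        # ~12 Wh per 5-minute HD view (table figure above)
VIEWS = 1_000_000_000   # 1 billion views

total_wh = WH_PER_VIEW * VIEWS
print(f"{total_wh:.2e} Wh (~{total_wh / 1e9:.0f} GWh)")  # 1.20e+10 Wh (~12 GWh)
```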

Yes, AI uses energy — and yes, it should be designed, deployed, and scaled responsibly. But let’s not fall into the trap of isolating AI as the villain while ignoring:

  • The persistent energy drain of social platforms
  • The scale multiplier of viral content
  • The invisible cost of idle digital infrastructure
 


Early automobiles could break people's arms when you crank-started them. I am sure many railed against the loud, smoke-spewing menaces. Still, they were here to stay.
They should not have been, though, or at least not to remotely the extent they were. It was a dead-end path that helped destroy the planet, and it was pushed solely for money, not because it was actually a good idea.

Public transport was the primary path that should have been taken, and individual motor vehicles absolutely minimized.

Further, a lot of the reason that public transport had issues in the US was specifically because automotive companies intentionally undermined and destroyed it, in quite a systematic way, through a combination of lobbying and intentionally purchasing and destroying public transport systems. We're seeing a similar pattern with AI already, where systems that work well and don't benefit from AI are being needlessly replaced with junk AI, simply so the AI people involved can try and steal more money from dim-witted, Diplodocus-like investors.

This isn't pro-AI; it's acceptance that it's not going away. Look at how corporations (and governments) are accepting and pushing it -- it will have the financial push to become part of the world.
No, it's pro-AI. You might not be conscious of that, or willing to accept it, but you're not "the voice of reason" here; your argument is simply "well, capitalism can push worthless drivel and we just have to accept it".

Generative AI is a tool that will be part of the future. Your position, while valid, isn't sustainable. It will be used more and more, and not just in software.
Generative AI in the form it exists today, let alone the forms envisioned for tomorrow, is not sustainable, not financially and not environmentally. It should be opposed and slowed as much as possible. It's very notable that China is not "all in" on AI to the same degree, and the AI it is using is vastly less power- and processing-intensive, and thus vastly less environmentally destructive.

Sustainability is the last argument you can make in favour of AI. It's very likely we'll see full environmental collapse within 20-30 years, and that will absolutely take civilization, including AI, with it. That would have been possible to avert were it not for GenAI's insatiable demand for 10x as much power pretty much year on year. We're already seeing normal people's power bills being squeezed so that GenAI farms can be massively subsidized by the public - essentially, we're seeing private taxes enacted by power companies.

That won't be sustainable either, because governments that don't act will be voted out or overthrown, and the companies forcing the public to pay for datacenter power usage won't be able to continue. Again, sustainability is a bizarre and perverse argument to make here.
 

The points are that it does not have the potential to have a major societal benefit, and that it will never make anyone's life better (except oligarchs).
You can't prove a negative.

You're asking for the impossible.

It's on people like you, who are promoting GenAI and claiming it's inevitable, to show that there is a societal benefit, that it does benefit people. So far we've seen no benefit to people in general, only tremendous and increasing harms. Elon Musk is busily using GenAI to generate CSAM in public, and when confronted about it, did he stop? No. He merely limited who can generate CSAM. Is that the societal benefit you're envisioning?

If your argument were "yeah, GenAI is no good, but you can't do anything about it", I could accept that as a somewhat valid, if defeatist, argument (at least in the sense that I could understand how one would believe it). But your argument is that there is some benefit, and that this benefit outweighs a force that is, quite frankly, going to end civilization via climate destruction and by making everything anyone says or does into a lie, or suspected of being one. It's hacking at the tree that holds up society.
 

Curious, can you unpack what you mean by "inherently immoral" for generative AI? It's the "inherently" that's getting to me, because I have big problems with unethically sourced training material, but if they are only training it on human-created assets they own, I don't have an ethical issue. So I'm trying to understand how it is inherent that it is immoral. It can't be the environmental issues; you list those as a separate item.

For example, they've trained it on decades of weather patterns, and it can do predictions quicker and with much less compute than the big weather prediction simulations. That means that they (a) are better for the environment than the simulations (less energy, cooling, water, etc.) and (b) can get warnings of hazardous weather out quicker, which can save lives, especially in shipping. Can you explain how that use of generative AI is inherently immoral, since it satisfies my ethics about sourcing and avoids your environmentally destructive aspect, leaving just the "inherently immoral" term?
This is a very shoddy argument.

You pick one extremely narrow use case and claim, without supporting evidence, that GenAI is "better" at it, and thus try to say "well, there's one exception I managed to dig up, so your general argument must be bad", which is laughable. Also, you need to detail how much power is actually used on climate simulations, because I'm guessing it's probably a lot less than one hour of one day of Claude or ChatGPT or whatever.

Now, let's be real - there are probably narrow use cases where GenAI is superb and not vile. But they're narrow, and limited. They're not the sweeping "Put GenAI in everything" that we're seeing.

Also, a logical failure on your part: environmental issues can go both in "inherently immoral" and outside it. There's no reason it has to be either/or. Even if you don't care about the theft of ideas and so on, the fact that the environment is being destroyed to force people to have GenAI write overlong emails for them, so they can send them to someone who is in turn forced to use GenAI to parse and summarize the hordes of overlong AI-drivel emails they're getting, is an incredible example of total waste, with nothing to redeem it.
 

Yes, AI uses energy — and yes, it should be designed, deployed, and scaled responsibly. But let’s not fall into the trap of isolating AI as the villain while ignoring:

  • The persistent energy drain of social platforms
  • The scale multiplier of viral content
  • The invisible cost of idle digital infrastructure
Except a lot of those social platforms are also using AI nowadays. Meta, a.k.a. Facebook, is a minefield of that crap now. Obviously everyone knows about Grok on Twitter and all the vile garbage it's doing (which its current owner actively encourages). YouTube recently introduced automatic AI upscaling for its videos, and viewers cannot turn it off.
 
