Divinity video game from Larian - may use AI trained on its own assets and to help development



There's value in calling out that absolutist assertions are factually incorrect.

Not really. It was technically correct, but value is not generally found in technical correctness.

Let's face it, absolutist arguments are almost always unserious in rhetorical terms. Finding the one single item that dispels them does not make your overall argument serious, or move the discussion forward in a practical sense. Thus, no real value is rendered unto us.
 

Yes, an argument made time and time again as technology provides yet another step of efficiency can be described as a tired argument.

How about a couple more basic arguments:

1) Efficiency, in and of itself, is not of value. Only with a plan to put the savings to good use does it become valuable. If you don't plan what to use it for, you will waste it anyway.

2) Generative AI, in general, has not been shown to raise overall efficiency. The folks who produce genAI say it will increase efficiency, and will show you things like time-to-task-completion metrics to support that assertion. But they studiously avoid showing you what happens in the rest of the workflows the AI work is associated with.

Commonly, the generative AI content goes into the workflow, and then in one way or another requires editing and revision that wipes away the savings in that task-completion time. This is often most obvious in AI-generated code or technical writing - the coding task is completed quickly, but the code is fragile or difficult to maintain, and it breaks later, costing more in fixing regression defects than it originally saved.

This leads to how only about 5% of enterprise-level generative AI projects end up delivering the expected value to the companies that use them.
 

Curious, can you unpack what you mean by "inherently immoral" for generative AI?
It cannot be created without large-scale theft and waste, and it disrupts without compensating those disrupted.

And additionally, it creates a worse product than what it replaces, and that truly is inherent, because it can never be creative. It can only ever be derivative, with substantial quality entropy. It is built into the fundamental function of the tech.

If I make an animated Lord of the Rings in the style of Studio Ghibli, it will at least have some creative spark added to it as a result of my own perspective and biases and creativity. When someone prompted an AI model to do it, it was worthless trash that pissed on the memory of Tolkien and the living heart of Ghibli while making a product that was ultimately much less than the sum of its stolen parts.

As for the rest of the arguments, others have addressed them more than adequately.
 

Cold comfort for those whose families starved thanks to innovation.
And warm comfort to those that survived because of advances in technology. Those who had enough food, or warm enough clothes, or a roof over their heads.

If your argument is simply "people have died when technology advanced and what they did was no longer as relevant", that's basically an argument against all disruptive advances in technology.

If you want to take that stance let me know, because I have nothing that will convince you otherwise and I shan't try.
 

I have nothing that will convince you

Correct. It is what it is, and the development will continue marching on. This is not a debate; it's not even a discussion. Hopefully people enjoy going back to the trades and digging ditches.

Before the year is out, I predict entire job types at my work will be replaced with AI. Project Managers and the like, probably some middle managers.

Those people will not be 'finding new work' that is on par with what they do or make, and many of them are not equipped to be labourers.

'Too bad, that's the price of progress!'
 

How about a couple more basic arguments:

1) Efficiency, in and of itself, is not of value. Only with a plan to put the savings to good use does it become valuable. If you don't plan what to use it for, you will waste it anyway.
Sure, I can agree to this. But let me flip this around. I've used plenty of open-source tools and modules in IT. The creator of the tool has a plan for why they are making it. That does not mean any particular use case is a good fit. I've seen heavyweight JavaScript modules embedded into HTML pages for some fairly simple tasks that could have been done with a bit of JavaScript, or likely with a lighter module that would load faster and be easier to maintain.
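To illustrate the sort of thing I mean (a hypothetical example, not one of the actual pages): a simple display task that sometimes gets a heavyweight date library can often be handled with the built-in Intl API in a few lines:

```javascript
// Hypothetical example: formatting a date for display, the kind of task
// that sometimes pulls in a heavyweight module when a built-in API would do.
const formatDate = (date) =>
  new Intl.DateTimeFormat("en-US", {
    year: "numeric",
    month: "long",
    day: "numeric",
  }).format(date);

console.log(formatDate(new Date(2024, 0, 15))); // "January 15, 2024"
```

No extra download, nothing to keep patched, and it loads instantly.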

Gen AI is a tool. The creators have goals they are pursuing in creating it. Those who use it also need a plan for why they are using it, as you said. A table saw does not give me any efficiency if I'm making clay figurines. That doesn't mean a table saw is worthless; it means that it has things it does well, but what I do isn't one of them.

2) Generative AI, in general, has not been shown to raise overall efficiency.
Can you define "overall"?

To give a real-life example, a friend who's a lawyer uses it extensively. He needs to check every reference, but he's still completing a significant chunk of his non-courtroom work in about a third of the time. When he became proficient in using it there was a real concern about billing hours, because they were getting cut down a lot. Luckily he was able to take on more clients and is simply more productive every week. Note that this isn't actually increasing his billable hours; it's decreasing the cost to all of his clients.

I have multiple friends who use it professionally by choice, and I can give plenty of anecdotal examples. There's no lack of specific places where I can show it raises efficiency.

So what do you mean by "overall", and why do you consider that a new class of tool must fit that definition in order to be beneficial?

The folks who produce genAI say it will increase efficiency, and will show you things like time-to-task-completion metrics to support that assertion. But they studiously avoid showing you what happens in the rest of the workflows the AI work is associated with.
Well, no. I've had discussions about this with my lawyer friend. Just because you don't see this information doesn't mean people are "studiously avoiding" the issue.

And I wouldn't be surprised if those who are studiously avoiding it are those who, like with the table saw from before, didn't have a plan for how to use it or what specifically they thought it would improve, and so are loath to give those details.

Commonly, the generative AI content goes into the workflow, and then in one way or another requires editing and revision that wipes away the savings in that task-completion time. This is often most obvious in AI-generated code or technical writing - the coding task is completed quickly, but the code is fragile or difficult to maintain, and it breaks later, costing more in fixing regression defects than it originally saved.
Okay, I need to get to another example, because it's very easy to use the tool poorly and get the results you are talking about. A different friend uses it in coding. I think Claude, but he's switched several times as whichever one is best at coding has changed. I'd need to get permission to post publicly some of the things he's shared with me, but I can give the general gist.

The big difference is between using it in an amateur fashion and getting results like you say, and using it in a professional way, understanding and leveraging the strengths and weaknesses of the tool.

He uses it like pair programming, a well-known and widely used real-world technique.

Nothing is a single prompt. Everything is a lengthy back and forth, with code snippets tested. He points it at documentation and wikis, he discusses priorities and goals. He has it evaluate various modules and both what they would add and if they are light enough for the benefit they would bring. He uses it in a way that he needs to be skilled in the topic to do.

From the beginning he asks for unit tests, and also based on the back-and-forth asks about what corner cases aren't getting tested to add those as well.
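As a sketch of what that looks like (parseAmount is a made-up helper, not anything from his actual project): the back-and-forth surfaces corner cases like empty or malformed input, and tests get written for them alongside the happy path:

```javascript
// Hypothetical helper: parse a user-entered currency-style amount.
// Corner cases surfaced in review: empty strings and non-numeric input.
function parseAmount(s) {
  if (typeof s !== "string" || s.trim() === "") return null; // corner case: empty input
  const n = Number(s.replace(/,/g, "")); // strip thousands separators
  return Number.isFinite(n) ? n : null; // corner case: non-numeric input
}

// Happy path plus the corner cases the back-and-forth identified:
console.assert(parseAmount("1,234.5") === 1234.5);
console.assert(parseAmount("") === null);
console.assert(parseAmount("abc") === null);
```

The point isn't this particular function; it's that the tests for the edge cases exist from day one instead of being discovered in production.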

In conversations he prototypes things, explores avenues for doing them (including backing out of ones that aren't the best), talks about the technologies used, makes sure maintainability stays a priority, and otherwise brings skilled, professional-level planning and knowledge into it.

At the end he has it generate a single in-depth prompt that would do everything the back-and-forth revealed. He posts that into a clean window of the LLM (no tokens from that conversation), as well as into a different LLM (using free cycles), to see what they generate. There have been times he's been surprised because one came up with a different approach, and he's gone and evaluated it.

He can prototype things in hours instead of days or weeks, and determine whether a path is worthwhile for the larger project as a whole. He can not only succeed faster in the steps he uses it for, he can also identify dead ends and prune them faster.
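The loop he describes might be sketched like this - askModel is a hypothetical stand-in for whatever chat interface he uses, stubbed here so the sketch runs:

```javascript
// Sketch of the iterative "pair programming" loop described above.
// askModel is a hypothetical stand-in for an LLM chat call, stubbed
// with canned replies purely so the outline is executable.
function askModel(history, prompt) {
  history.push({ role: "user", content: prompt });
  const reply = `response to: ${prompt}`; // stubbed model reply
  history.push({ role: "assistant", content: reply });
  return reply;
}

const conversation = [];
// 1. Ground the model in real docs, goals, and constraints.
askModel(conversation, "Here are the project docs, wiki links, and goals: ...");
// 2. Iterate: prototype, test snippets, evaluate modules, back out of dead ends.
askModel(conversation, "Prototype the parser; include unit tests and corner cases.");
askModel(conversation, "That module is too heavy for the benefit; compare lighter alternatives.");
// 3. Distill the whole back-and-forth into one standalone prompt...
const finalPrompt = askModel(conversation, "Summarize everything into a single in-depth prompt.");
// 4. ...and replay it in a clean context (and a second model) to compare approaches.
const cleanRun = askModel([], finalPrompt);
console.log(cleanRun);
```

The structure is the point: the skill and judgment live in steps 1-3, and step 4 is a cheap way to cross-check the result.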

This leads to how only about 5% of enterprise-level generative AI projects end up delivering the expected value to the companies that use them.
Oh, absolutely agree with you here. It's criminal (well, unfortunately not really) how many corporations and individuals think it's a panacea. It's real work to use it correctly, and they don't train their staff how to do that, nor do they pick it for the tasks it's stronger in, but instead try to apply it to everything.

Though to add a touch, "expected value" is often set by C-level people listening to marketing from those selling it. I've implemented IT solutions that have nothing to do with AI where the "expected value" will be so much lower than the C-level expects, because they aren't the man in the trenches who actually does the work and they demanded it from on high instead of consulting with all of us experts they were already paying. "We'll put it all in the cloud!" was one of those. "Expected value" isn't a strong metric to use; I might even go so far as to say that the majority of all projects from corporations large enough to have enterprise-level projects don't end up delivering the expected value. Still more than 5%, mind you - I'm not saying the tool isn't being used poorly. Just level-setting expectations.
 

Gen AI is a tool. The creators have goals they are pursuing in creating it. Those who use it also need a plan for why they are using it, as you said. A table saw does not give me any efficiency if I'm making clay figurines. That doesn't mean a table saw is worthless; it means that it has things it does well, but what I do isn't one of them.

Let me give a concrete example: Fluorescent light bulbs.

Fluorescent bulbs are an efficiency improvement over the prior incandescent bulbs. More lumens of light out for fewer kilowatt-hours of electrical energy in. They were marketed and bought with the idea that public buildings of all sorts could reduce their fossil-fuel energy costs by using more efficient lighting.

But, it turns out, as fluorescent bulbs rolled out, electrical use on lighting INCREASED. Broadly speaking, we lit more areas, more brightly, and then left the lights burning when nobody was present - because it was cheap! The increase in use blew away any carbon-emission savings you'd expect from the efficiency.

This is broadly true in many places - increases in efficiency don't result in savings, but instead drive increased use, much of which is superfluous.
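With made-up but plausible numbers, the arithmetic of that rebound looks like this:

```javascript
// Back-of-envelope illustration of the rebound effect, with assumed numbers.
// An incandescent bulb draws ~60 W; a fluorescent giving similar light ~15 W.
const incandescentWatts = 60;
const fluorescentWatts = 15;

// Before the switch: 10 bulbs burning 8 hours a day.
const beforeWh = 10 * incandescentWatts * 8; // 4800 Wh/day

// After: lighting feels "cheap", so 30 fixtures burn 16 hours a day.
const afterWh = 30 * fluorescentWatts * 16; // 7200 Wh/day

// A 4x per-bulb efficiency gain, yet total consumption rises.
console.log(afterWh > beforeWh); // true
```

A fourfold efficiency gain per bulb still loses to a sixfold expansion in use.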

Can you define "overall"?

I did - the entire workflow instead of the atomic task.

Like, look at a software project, from instantiation to completion of the entire project. One team writes its own code; the other freely uses AI code generation. While you can note the AI code-generation tasks going quickly, you also see bug counts rise, and rework due to fragile or ill-conceived code arise later in the project.

Generative AI is used in a context, and it is not enough to look at the use itself, but the impact of that use on the rest of the context, to accurately judge efficiency.
 

It cannot be created without large-scale theft and waste, and it disrupts without compensating those disrupted.
Thank you for the definition. I have a huge issue with the ethical sourcing of the data used for training models, and none of the general LLM or generative art models are up to snuff for me. That, however, does not mean those sources are inherent to the tool. There is nothing inherent requiring that all sources be stolen.

In this article, Larian says that if they use generative AI for art, it will be trained only on data they own. There's no "large-scale theft" going on. It does not disrupt the artists without compensation.

There is a robotics researcher who trained an AI on videos of people they hired to come in and fold laundry - a task with massive variation in identifying types of clothes, sizes, etc. - because they were working on a general-purpose humanoid-helper concept. I think it would be hard to show "large-scale theft and waste, and it disrupts without compensating those disrupted" there, but you need to be able to if you want to claim that it's inherent.

You mention waste - do you know that spending an hour watching YouTube or playing a video game has a much larger environmental impact than spending the same hour chatting with an LLM?

And additionally, it creates a worse product than what it replaces, and that truly is inherent, because it can never be creative. It can only ever be derivative, with substantial quality entropy. It is built into the fundamental function of the tech.
The same thing can be said of search engines, but they've been a valuable tool for decades.

You're trying to judge it against an expert creating something. That's far from the only use of the tool of generative AI. I talked in a recent post about a friend who uses it to prototype code. He's a skilled professional, uses it as if he were pair programming, and is able to discard bad paths and identify good ones to explore much quicker than without it. He's the person creating things; it's the tool that's helping him do it.

If I make an animated Lord of the Rings in the style of Studio Ghibli, it will at least have some creative spark added to it as a result of my own perspective and biases and creativity. When someone prompted an AI model to do it, it was worthless trash that pissed on the memory of Tolkien and the living heart of Ghibli while making a product that was ultimately much less than the sum of its stolen parts.
Sure. Again, that's not the tool, that's the use that someone put the tool to. It isn't inherent.

Again, I agree with you that the current crop of LLMs and generative art models have horrible issues with unethical sourcing of training material. And while I talk of friends who use it professionally, I don't, because of that. But that's a result of the choices of the companies that trained the models, not anything inherent to generative AI as a tool.
 

Correct. It is what it is, and the development will continue marching on. This is not a debate; it's not even a discussion. Hopefully people enjoy going back to the trades and digging ditches.

Before the year is out, I predict entire job types at my work will be replaced with AI. Project Managers and the like, probably some middle managers.

Those people will not be 'finding new work' that is on par with what they do or make, and many of them are not equipped to be labourers.

'Too bad, that's the price of progress!'
I applaud you for your ironclad adherence to the idea that there should never be a disruptive technological advance of any type, because the only metric you use is those immediately displaced and anything non-zero is unacceptable.

I hope you put your money where your mouth is and aren't wearing any garments woven on a loom, or living in a house made with mass-produced materials. I hope you raise or barter for all your own food, and do so without the disruptions to agriculture and animal husbandry that allowed one family to do the work that previously took the work of many.

This isn't a strawman; history is a cascade of new ways to do things that freed up people. From hunter-gatherers, when we spent every waking hour trying to find enough calories to survive, with an average lifespan much shorter than now, to agriculture, animal power, bronze, iron, steel.

Do I feel for those displaced? Without a doubt. But I don't ignore all of those that were helped by disruptive technologies either.
 


