D&D (2024) Using AI for Your Home Game

During the course of getting my MS in Data Science, I attended a talk given by some C-level exec from TGI Friday's. The level of... detail of data this man wanted to collect on his customers, and the accompanying plans to acquire said data, led me later that evening to ask some of my fellow students, in all earnestness, if our field was literally evil.
The social conventions of the future are very hard to predict. Today we are used to being anonymous in certain respects, but I am not at all sure the people of the future will mind sharing their data. I swear at Facebook for showing me posts irrelevant to me because it seems to think I live in the USA, while at the same time it keeps showing me ads for things I just bought online, which is pointless because I already bought them. The battle over this is in fact going on right now: the more big data gets abused, the more paranoid future generations will become.
That’s not a job people have in the modern world. There’s an academic discussion to be had about whether the invention of the printing press, the typewriter, and the word processor were all net goods for society in their time or not, but use of those technologies today isn’t directly harming anyone’s livelihoods. The same cannot be said of AI.
The printing press did take away jobs from scribes for a time.
 



The printing press did take away jobs from scribes for a time.
Certainly, but since it is no longer that time, the question of whether the printing press taking their jobs was a net positive is a purely academic discussion. No currently-living person's ability to continue living hangs in the balance. The same cannot be said of the question of AI being a net good. Maybe, in a few hundred years, the people of the future will look back at this moment the same way we might look back at the invention of the printing press. But we don't live a few hundred years from now, so that is, again, a purely academic concern. Right now, in the present day, AI does threaten currently-living people's livelihoods, and that should be of greater concern to us, who also live in the present day, than academic whatabouts concerning past instances of technological upheaval or hypothetical futures after the dust of the current upheaval has settled. Sure, other dust has settled before, and this dust will probably settle eventually too. But right now the dust is getting pretty darn unsettled, and I'm concerned about shielding people's eyes from it, pronto.
 

Mentioning "net good" is the key here. In the past, scribes' jobs were destroyed by the printing press (bad, because it forced people to change jobs, which they didn't necessarily want), but it fostered an era of scientific progress that benefitted everyone (good), while also enabling wars of religion that killed millions of people (bad), and so on. Then you add up all the bad and the good and you decide that yes, it was a net good, even though it was detrimental to scribes and extremely detrimental to the people killed in religious wars. It would have been very bad to suppress the printing press out of consideration for the damage caused to scribes.

So right now, we have jobs that may be eliminated by AI (bad, same as the scribes), and we try to evaluate the other factors to determine whether there will be an overall good or an overall bad. The fact that we currently live to witness one factor in the "net good" or "net bad" equation doesn't change the equation. We don't want our descendants, a century down the line, to miss out on the benefits of AI, if it turns out to be a net good, because we overweighted our current comfort.

Right now, we're protecting whales and many fish species from overfishing. That destroys jobs for fishermen and causes them economic duress even when they don't lose their jobs outright (because of regulations saying you can't catch some species below a certain size). If we considered only this bad, and not the overall good that the survival of those species will bring in the future (if nothing else, further fishing for the next generation), we would eat the whales quickly while they last. Generally, we don't, because we try precisely to discern whether something will be a net good or a net bad, rather than making a short-sighted, self-centered evaluation.

It's difficult to weigh the benefits (free art for the billions of people who couldn't economically commission someone to draw their characters, scenes from their campaign, or a painting to adorn their house: a very small comfort added to a lot of people), the economic impact (doing things in 20% less time [number invented] at work means a shorter workweek, higher productivity, or fewer of the pointless tasks David Graeber famously named in reference to the excrement of the male ox: a large benefit for many people), the derivative benefits (encouraging AI in general may enable breakthroughs in other fields, such as AI diagnostic units at home: not certain, but potentially a big benefit to a lot of people), the detriment (some artists who don't embrace AI to gain productivity will have to switch careers, a larger detriment to a small number of people, alongside a benefit to the subset of artists who, by integrating AI into their workflow, can produce more art in less time), and the extreme detriment (some artists live in countries where losing your job means becoming homeless and dying of cold and hunger, an extreme detriment to a handful of people). So calculating the net good or bad is complicated. I can see different people weighing the factors differently, possibly ending up with opposed conclusions on whether AI will be a net good or a net bad. But I don't think it's logical to take into account only the current detriment to one category of people and discard all other elements, declaring it a net negative by saying "we're currently experiencing this negative, so we should disregard all other factors."
 

But we don’t live a few hundred years from now, so that is, again, a purely academic concern.
Which is why we shouldn't worry about, say, climate change?

You can't just ignore potential future harms and benefits when determining current policy. Virtually every policy is going to have some costs and benefits.
 

Which is why we shouldn't worry about, say, climate change?
Lol no, climate change is also affecting people’s lives in the current moment. Moreover, avoiding possible future harm is a different thing than dismissing current concerns because they might hypothetically not be so concerning in the future. And if all that wasn’t enough, AI is a major contributor to the acceleration of climate change.
You can't just ignore potential future harms and benefits when determining current policy. Virtually every policy is going to have some costs and benefits.
Yes, and the cost of real humans’ livelihoods today is infinitely more important than the potential “benefit” of maybe making production of creative work more efficient without costing livelihoods in a hypothetical future. If climate change hasn’t caused total societal collapse by then.
 


By virtue of being transformative, maybe any net good positive entails some bad. Agriculture didn't end well for hunter-gatherers.
Very much this. Technology can worsen quality of life and still win out. Agriculturalists lived shorter lives, were smaller, and suffered from more diseases than hunter-gatherers. But agriculturalists had one big advantage: they could feed many more people in a given area. And numbers win wars, even against the comparative supermen of the hunter-gatherers. The same argument can be made about feudal peasants versus early industrial workers. "Progress" need not be an increase in quality of life, especially not in the short run.

A tangent, I know. Sorry. Now back to our regular program.
 

Yeah (but the tangent was interesting), so I propose another tangent... I guess that AI use for game prep will increase over time as people rely more on AI for their job-related tasks. So far we have mostly seen AI used for generating pictures, sometimes for prep help, and once for making monster stats. And not a lot of people answered, so I'd say it's still pretty niche. But students are starting to use ChatGPT on a daily basis, Copilot will increasingly be part of the office work routine, and people in general will grow more used to AI and will certainly include it more in their prep routine.
 

I use AI now to help with plot outlines, treasure generation, monster and encounter generation, NPC and location names, and all sorts of other things, in addition to portraits, location art, and even travel maps and battle maps. It helps me flesh out my adventures to give them greater depth, which saves me a lot of time.

Now, I wouldn't use it for commercial adventures, but for my home games it is very useful. I do find it somewhat laughable when I find AI battle maps for sale on Etsy or elsewhere. It takes some time to learn to make more useful things with AI; you have to train yourself in its better use. The stuff for sale is pretty poor AI, IMO, and you shouldn't be using AI solely for commercial products. If I ever decide to try to publish an adventure, AI art gets me close to the right concept, but in general it is never remotely "spot on", and at that point I'll bring in an artist and commission the work.
 
