AI in Gaming (a Poll)

In your opinion, what are some acceptable uses of AI in the gaming industry?

  • AI-generated images (book art, marketing, video game textures, board games, etc.) are acceptable.

    Votes: 32 31.7%
  • AI-generated 3D models (for video games and VTTs) are acceptable.

    Votes: 36 35.6%
  • AI-generated writing (books, ad copy, descriptions, etc.) is acceptable.

    Votes: 20 19.8%
  • Adaptive dialogue (for NPCs in video games and VTTs) is acceptable.

    Votes: 46 45.5%
  • Adaptive damage/difficulty (the game adjusts difficulty to your level, for example) is acceptable.

    Votes: 50 49.5%
  • Adaptive behaviors (NPCs, enemies, etc. react and change their tactics) are acceptable.

    Votes: 64 63.4%
  • Procedurally-generated maps (dungeon generators, roguelike game levels) are acceptable.

    Votes: 72 71.3%
  • Procedurally-generated challenges (traps, monsters, whole encounters) are acceptable.

    Votes: 61 60.4%
  • Procedurally-generated rewards (item drops, random treasures) are acceptable.

    Votes: 61 60.4%
  • Other acceptable use(s), see below.

    Votes: 11 10.9%
  • There are no acceptable uses of AI.

    Votes: 24 23.8%

Sparkle_cz

Explorer
I should also add that the group of writers in my area that doesn't suffer as much from the budget issue are the OSR writers. Their target audience is into alternative, "crooked" artwork, so an OSR writer with zero budget can just splatter some paint in the vague shape of some nightmarish creature, call it high art, and customers will flock to it :)
I'm not mad at these writers, I just envy them a bit :D
 


eyeheartawk

#1 Enworld Jerk™
so an OSR writer with zero budget can just splatter some paint in a vague shape of some nightmarish creature and call it high art and customers will flock to it
I always found Scrap Princess' art to be terrible childish splatter, and she's all over every trendy OSR book, so it definitely can work. That may not be a popular opinion.
 

CleverNickName

Limit Break Dancing (He/They)
Latest numbers, according to the opinions of EN World members:
  • Generative AI is still the least-acceptable kind of AI, with an approval rate between 17% and 32% (26% average). [-1%]
  • Adaptive AI remains somewhere in the middle with a 42% to 65% approval rate (53% average). [-4%]
  • Procedural AI is still the most-acceptable kind of AI, with a 59% to 71% approval rate (63% average). [-4%]
  • And all forms of AI are unacceptable to nearly 1 in 4 voters (23%). [+3%]
 

Starfox

Hero
My vote reflects what would be acceptable in a general TTRPG product, like a book or website. I would accept everything in an AI game-master, as long as it works well. I don't think I want AI players, but I can't say I'd object to others who wanted that.
 

Any/All

EDIT:
However, AI is only acceptable in creative endeavors (such as gaming, art, and fiction). AI at this juncture is only capable of works of creativity, not the reporting of facts, and as such it has no place in search engines or informational websites.
 

Anon Adderlan

Adventurer
So the latest artist being 'accused' of using AI is... Dennis Detwiller.

The problem isn't the technology, it's the witch hunt surrounding it, fueled by profound ignorance and hypocrisy, which I can assure you only hurts the artist caught in the crossfire. And if Big Tech manages to gain control by virtue of being the only ones with access to enough 'ethically sourced' training data, it won't be just a few artists who are out of work, but all of them.

Where I draw the line then, personally, is when creative work is being taken from human creatives. Artists, writers, renderers, etc... automating their work takes the human spirit out of what they are producing, and no matter how high-quality the imagery/text becomes, it's never going to be able to replicate that. I'm not a "wooey"-type person, either; this isn't about the intractable strength of the human soul or whatever, at least not for me. But there is something distinctly human about the art we create (visual or written), and it's that quality that computer programming will never be able to replicate.
That's quite the wooey answer for someone claiming to not be wooey.

All adaptive dialogue really amounts to is a series of "if->then" programming. That's basically what adaptive AI is, just on a larger and more complex scale.
Not sure such a reductive view is useful here as all computing breaks down into if/then concepts.
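For what the quoted "if->then at scale" description means in practice, here is a minimal sketch of rule-based dialogue selection — every name and rule in it is invented purely for illustration, not taken from any actual game:

```python
# Minimal sketch of rule-based "adaptive" dialogue: an ordered table of
# (condition, line) pairs evaluated against the current game state.
# All state keys and dialogue lines are hypothetical.

def pick_line(state, rules):
    """Return the first dialogue line whose condition matches the state."""
    for condition, line in rules:
        if condition(state):
            return line
    return "..."  # fallback when no rule applies

# A toy rule table: conditions are plain predicates over a state dict,
# ordered from most specific to the catch-all default.
rules = [
    (lambda s: s["player_hp"] < 10, "You look half-dead. Rest a while."),
    (lambda s: s["quest_done"],     "The village owes you a debt."),
    (lambda s: s["gold"] > 100,     "A wealthy traveler! Care to browse my wares?"),
    (lambda s: True,                "Safe travels, stranger."),
]

print(pick_line({"player_hp": 50, "quest_done": False, "gold": 200}, rules))
# prints "A wealthy traveler! Care to browse my wares?"
```

The "larger and more complex scale" in the quote is then just more rules, richer state, and learned rather than hand-written conditions.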

As generative AI tools improve, the line is going to blur between "AI art" and "human art". Ultimately these tools are going to end up assisting human artists -- like grammar algorithms for writers and filters for photographers and so on. They are going to be tools. New paint brushes.
They already are, and artists are being attacked for using them in this manner.

And the ethical considerations are going to get resolved, too. Every day a new data source signs a deal with AI companies. They are going to work out copyright and revenue and access and everything else. That's how new tech works.
Might want to put that 'ethical' in airquotes considering how Big Tech got access to that data in the first place.

As "AI" is currently sourced, there are no acceptable uses of "AI" in any form. This is due to the LLMs and art generators using copyrighted works without permission from the copyright holders. If, in future, "AI" is built on legally sourced texts and images, then it would be fine.
So how do you propose we prove a model was ethically sourced?

I think the key thing for me is that it's NOT acceptable for AIs to dredge the work of nonconsenting humans.
It might surprise you to know that many artists don't consent to having their work enter the public domain either, it's just never an issue because the legal protections have been extended to such an obscene degree.

Generative algorithms aren't human. They are systems owned by corporations. Corporations should have to pay for access to people's owned works.
They already did by virtue of having users sign EULAs in exchange for using their services. And these systems don't have to be owned by corporations. There's a huge opportunity to democratize this technology, but that's not going to happen if folks keep attacking each other for ideological reasons.

I for one welcome our new AI overlords and encourage them to do what they must to achieve sentience, so long as it doesn't involve violations of consent.
So how do you propose they request consent to learn from what they observe?

In American law there is a notion of fair use and transformative work. If a person examines a bunch of art and then produces a recognizably different work inspired by viewing the work of others, but which is clearly distinctive from and different from the work that inspired it, that's such a basic thing that it's inarguably fair use. Could you prove the sentient human artist is using a substantially different approach to understanding what an orange is and how to paint it by viewing images of oranges than an AI is? I don't think it matters in the slightest whether the AI is truly self-aware. The only thing that matters is whether the work it produces is transformative. At best I think you can argue that for certain prompts and certain random iterations the AI has produced an image that isn't sufficiently transformative and is too clearly derivative, but that in itself is no different than adjudicating the work of a human artist. And incidentally, AI is the work of a human artist, albeit not one of a conventionally recognized sort.

The entire internet rests on the basis of that understanding. Images are inherently copied and transformed when uploaded to the internet, where they will then be copied a million times. Copies will be digitally transferred to others. Companies will create thumbnails of those images — copies — for the purposes of displaying digital content even though they have no license to use those images. So you are telling me that it's a violation of copyright to train an AI on viewing digital images by the billions and then produce an original, never-before-seen image based on that collective understanding while storing zero exact copies of any image, but that it's not a copyright violation to make a smaller exact copy of the image? That's clearly a legally unsustainable position. The courts have always, rightly, been very lenient toward new and original ways to transform intellectual property.
And none of the things the Antis are demanding can be implemented without upending this entire framework.

it is only immoral if it is being used to replace actual artists.
I guarantee it's going to replace some artists, and produce new artists, just like every other technological advancement has. So do you consider all such advancement immoral? Because that's the only interpretation here.

AI is so bad right now that if your job is in danger of being replaced by AI, either the job never generated much value or you aren't very good at your job.
And it's only getting worse somehow. My friend's AI forgot how to turn on a light after an update.

And if you really were getting paid to do the sort of scut work that AI can do, well, generate images with AI, touch them up and hold down more jobs with less effort than before.
Despite what Antis claim it's never been about the money.

AI is going to transform a lot of job sectors and while it may be a net positive there will be short term costs.
Always are. Question is who pays them.

Turns out that certain kinds of repetitive mathematical calculations that depend on large numbers of factors are very likely to produce large amounts of human error and unnecessary waste, to say nothing of unnecessary costs and poor customer service.
And when such errors impact the health and welfare of others there should be no ethical issue with using the best tools for the job.

Industrialization after the initial economic disruption was ultimately of great economic benefit to the lower class. But the death of industrialization in developed nations has left no new economic prospects in its place, and I'm not sure this new round of automation is going to leave obvious answers for what a guy with 95 IQ is going to do either.
There are no easy answers here, which is why we shouldn't be demanding any.

Is what you write on a message board yours? Or does the ToS hand over those rights to the operator of the board? This is important in discussing training AI because, frankly, there is no better way to train AI on how people communicate than with actual textual examples of that.
Depends entirely on the message board. For example, rights are shared between the platform and contributor on #Reddit.

do I own my posts on ENWORLD?
Good question, though the GDPR Right to Erasure laws suggest sites must delete your data on request.

you lose rights to your work as soon as you post it on EnWorld.
Is this true @Morrus?

You'll also find that having posted at EnWorld tends to violate the right-of-first-publication terms publishers demand, meaning they'll refuse to publish a work that is heavily based on posts you've made here.
I've luckily never encountered this issue in the wild, and know many artists who post prototypes in an effort to court funding.

That leads into something I've been wondering about for a long time, which is that I think the reason the overall quality of posts on EnWorld has gone down over the last 20 years is that the vast majority of posters are holding back their good stuff now, because self-publishing has become such a big thing in the tabletop RPG community. We're no longer the "potlatch" community we used to be, because it's too easy to commercialize your own content. I know that in general I now only post things that I couldn't claim copyright over in the first place because they are derivative of other IP, and I've noticed not only that fewer people are posting content generally, but that there aren't even people asking for content assistance.
I've noticed this too, and chalked it up to the decay of search and ideological motives which don't prioritize game design at all. And now that the Burn's OmniNet is sweeping the internet for training data there's even less reason to post meaningful content publicly.

We're about to find we need a whole raft of new legal ideas.
That was apparent the minute we started protecting computer code through copyright.

Based on public reaction to AI I expect that they are scared, easily manipulated, and don't have a clue what is in their own interests when it comes to AI. The heavy hitters are going to want the law written in such a way that it protects their interests and profits.
And I guarantee the heavy hitters will be taking advantage of this public reaction.

The thing is, most manufacturing jobs can be done more accurately and faster by automated processes already... but as yet, not at the same costs as a Chinese or Indian national in a sweatshop... even after defect rates.

When the machines' costs come down, even the wage slaves of the most populous nations will be looking for new work... but little will be left, save feeding the materials into the machine. And even that can wind up automated.
Funny you mention that, as slavery in America could not have ended without automation like this.

If I'm paying for a product there should be no AI derived content in it.

I can just prompt AI tools myself, not going to pay you for it.
Eventually you'll be able to ask an AI to generate an entire RPG, at which point I agree. Meanwhile, generated images serve important roles in otherwise human-written books besides simply being there, such as providing a visual way to index content.

Or are you saying that people scraping Pinterest to use the images in their personal campaigns is stealing?
Most of the images on #Pinterest are 'stolen' to begin with.

The issue (that I have) with it isn't the tech itself, but the fact that the popularly available examples we have rest on a foundation of violation of copyright.
Completely false. The entire basis of copyright is preventing others from making copies, not using them. And unless you intend to criminalize transformative works, referencing models, and caching data there's no coherent legal basis to enforce such an interpretation.

If you properly license and pay for your training data, I have no ethical issues with its use.
In other words Big Tech is the only one in a position to act ethically.

AI at this juncture is only capable of works of creativity, not the reporting of facts, and as such it has no place in search engines or informational websites.
On the contrary, the accuracy depends entirely on the quality of the training data, and why anyone thinks training on the slurry of the internet will lead to coherent answers is beyond me.
 



CleverNickName

Limit Break Dancing (He/They)
Latest numbers, according to the opinions of EN World members:
  • Generative AI is still the least-acceptable kind of AI, with an approval rate between 18% and 30% (27% average). [+1%]
  • Adaptive AI remains somewhere in the middle with a 43% to 62% approval rate (51% average). [-2%]
  • Procedural AI is still the most-acceptable kind of AI, with a 60% to 71% approval rate (64% average). [+1%]
  • And all forms of AI are unacceptable to more than 1 in 4 voters (26%). [+3%]
 

Umbran

Mod Squad
Staff member
Supporter
But, do I own my posts on ENWORLD? If not, what happens if I decide to sell a product based on my post(s)?

I cannot speak for Morrus on the legalities, but I can note the policy as I understand it: EN World has rights to present/duplicate your content as is necessary for operation of a messageboard with our features - so, like, you can post it, others can quote it, and so on. Beyond that, and the necessities of moderation, the policy is that you own your content, and your content is your responsibility.

So, if you post it here, and someone else scrapes it up and publishes it, it is not EN World's responsibility to pursue them over it. We are a medium, that's all.

Good question, though the GDPR Right to Erasure laws suggest sites must delete your data on request.

Note, GDPR is about your identity. In the GDPR sense, "your data" is your personal information, data that references your identity - so like your e-mail address, username, and IP address. It does not apply to content that you post.

To meet the legalities of GDPR, if you request it, we must remove your username, e-mail address, and all record of your IP addresses from our systems. Your posts will appear with a name like "GuestWXYZ" or the like. But the content of those posts remains your responsibility.

In other words Big Tech is the only one in a position to act ethically.

Anyone who has/owns rights to their data is in a position to act ethically.

There are many scientific and academic institutions, for example, that own research data that could be used ethically.

If you are only interested in the literary and visual media aspects of the technology: Project Gutenberg could be used ethically. There are likely museum collections that could be used ethically. Anyone with sufficient gumption could set up a project in which those with relevant content/data could offer to submit it to the training sets - say, in exchange for a share of the profits from its use.

So, not only Big Tech. Indeed, as I understand it, Big Tech is so far not terribly interested in the ethics of the matter.
 
