WotC Updates D&D's AI Policy After YouTuber's False Accusations

YouTuber falsely accused D&D artist of using AI based on "something feeling off".

[Image: Dungeons & Dragons 2024 Player's Handbook fighter full-page splash]

This awesome art by Nestor Ossandón is not AI

Following a widely watched (but now deleted) video in which a YouTuber falsely accused an artist who worked for WotC of using AI based on "something feeling off", Wizards of the Coast has updated its AI policy.

For 50 years, D&D has been built on the innovation, ingenuity, and hard work of talented people who sculpt a beautiful, creative game. That isn't changing. Our internal guidelines remain the same with regards to artificial intelligence tools: We require artists, writers, and creatives contributing to the D&D TTRPG to refrain from using AI generative tools to create final D&D products. We work with some of the most talented artists and creatives in the world, and we believe those people are what makes D&D great.


The YouTuber in question is Taron Pounds, username 'Indestructoboy', who made his now-deleted video because, in his words, 'something felt incredibly off'. He's an ENnie-winning game designer, and has since posted an apology on Twitter:

I contributed to "rage bait" content this year after the OGL situation. That's on me. If I was frustrated by a situation, I felt compelled to say something to the camera. That's just not okay. I bought in hard on the "anti-WotC" train and should have just put my energy elsewhere.



Rage-bait videos are a problematic part not just of the D&D community, but of YouTube in general--as a massive Doctor Who fan, my YouTube feed is full of similar stuff about that show. The D&D stuff I see is overwhelmingly negative, about how D&D is dying (it isn't, by the way). Unfortunately, that's what YouTube incentivises, and that's what gets the thousands of clicks: video thumbnails with big text, a controversial statement or question, probably a big shocked face, and a giant question mark or arrow, or maybe a jagged cartoony graph trending downwards. It's important to realise that just because that's what gets the clicks, it doesn't make it true. It is, however, a massive part of what drives the community narrative at the moment.

A shout-out should go to Christian Hoffer, who took the time to actually email the artist, who confirmed--with evidence--that the art was completely human-generated. The YouTuber did not even take that basic step. You can read Hoffer's report on Twitter here (and you should follow him if you're still on that site). The artist in question is Nestor Ossandón, who responded to Hoffer as follows.

First of all, I do not use artificial intelligence (NOT AI) for my work and no one but you and my director have asked me. And that image is completely painted. It is one of my favorite recent jobs that I have been able to do. And if you see other old works, you can see that my tendency is very similar when it comes to painting. I always play with warm and cold ones on my face. Thanks to the work together with the art director. They give me the freedom and appropriate time to develop it. This character is completely painted from scratch with a gray and superimposed color technique. Then I paint the cold tones to give atmosphere and light. It took me more than two weeks and my director was very happy with this work.


To be clear, Nestor Ossandón did not use AI to create the above art.

The artist provided proof (not that they should have to) which Hoffer posted on Twitter.



There's not much real journalism going on in the tiny corner of the world that is the TTRPG industry; it's still a niche topic, although it's more popular than it's ever been. I don't consider myself a journalist--I report on stuff, but I don't investigate stuff, and my contribution is not much more than simple reportage and aggregation (not that I undervalue that--I've been doing it for 24 years now, and folks still read it, and I recognise my own value!) Christian Hoffer (ComicBook.com), Lin Codega (laid off from io9, but hopefully they will find a new outlet soon), Christopher Helton (retired) and other folks like that are great examples of journalism in this little industry. YouTube... there's a lot of great, informative, fun stuff on there, and there are folks I follow and enjoy, but you should be careful!

(Edit—I had some examples of video thumbnails here but I don’t want to give the impression they are related to this AI art episode.)
 


Kurotowa

Legend
A better description, but I would not say mine is inaccurate.
Yeah. I mean, technically it's rendering the images down to pure math and then using that to reassemble new pictures. But both the intent and the result are often more like a collage. The important part is there's no intent or understanding behind the creation. It doesn't know what a hand is, it just knows what pixels most often appear in images labeled "hand", and that's why it usually gets them wrong.
 


jgsugden

Legend
Because that's not how AI works.

It doesn't learn. It uses math to take parts of its data set and use those parts to complete a request.

It's not learning techniques from Bob Ross and applying them to what it wants; it's cutting up all of Bob Ross's paintings and clipping them together into a ransom note-style collage without any intent to make a collage.
You're making a distinction that academics would, but most people do not. This discussion is not going to turn, in the end, on the technicalities... and AI is going to evolve quickly to do more and more that is further and further away from merely blending cut and paste.
Laws don't work either. This world we live in is no longer run by the government we put in to govern us. The laws of my state and country are not the only ones that impact me on a daily basis. EU privacy laws do. Chinese tariff laws do. Laws from numerous nations impact me on a weekly if not daily basis.
Please take a moment to firmly plant your tongue within the recess of your cheek. Ready? OK. Let's put the first half of that to the test. Drive 40 MPH above the speed limit from now on, and just take anything anytime you see someone with something you don't think they deserve. Let's see if any of these local laws actually mean anything.

As for the second half - this isn't new. The Boston Tea Party was about people across the sea impacting the lives of the people living in the states. We have always been impacted by international laws, and always will be.
Laws will simply fail because we won't be able to get common laws in enough of the key countries to achieve it.
That must be why we don't have any copyright laws. I mean, we know certain countries do nothing to enforce copyright (as the US sees it), so obviously we don't have any copyright laws because they'd be pointless, right? Or do we, despite limitations, still rely upon these laws extensively?

Obviously, there are limitations when you have to get everyone to play ball - but when you need everyone to play ball, there are ways to force the topic. Nuclear proliferation is an example where plenty of countries want to say they have nukes - but there has been global cooperation to slow and limit the situation and to prevent the use of these weapons. When was the last nuclear war? If nothing had been done to systemically limit these situations through international accord and law, we'd likely not be here today to argue about this situation.
We do [need to stop the scorpion from stinging our toad butts]. And either we can try to accomplish this through numerous governments world-wide, or through our own behavior. Neither is likely to work. But which one can you control more?
If you need to move 10,000 lbs of water - is it easier to move when it is frozen and can be handled systemically, or when reduced to steam and allowed to move into whatever direction it wants to go? Systemic controls are far more effective at large scale implementations than relying upon individuals to self regulate. And there are countless examples where laws and systemic limitations are effective - not perfect, but effective - in creating a better world.
And perhaps more importantly, which one can bring near immediate health benefits to you? Boycotting click baiters :)
Obviously, the systemic solution is going to be more effective faster because - as noted - relying upon people to do the right thing is entirely ineffective on a large scale. People suck.

You know the old story about the magic button that gives you a fortune, but a person you've never met would die? What happens in the end? The people push the button and then learn that ... when the next person after them pushes the button, they themselves will be the ones who die! Shocking twist! What does the story tell us? One, people suck. (All stories tell us people suck.) Two, if people knew there were consequences that would be enforced on them should they do the wrong thing, they wouldn't do the wrong thing ... but in the story, that is not realized until too late to save the people. If they knew the laws of the magic button put them at risk, they'd have avoided the risk.

Anarchy fantasies aside, there are countless works written on the need for laws and how and where they can be effective. There is some novelty here, but the fundamental premise underlying it is the same: So long as there is free will, we can't trust people to not suck, so we need to make it suck more if they're %@holes than if they play ball.
 

Vaalingrade

Legend
You're making a distinction that academics would, but most people do not. This discussion is not going to turn, in the end, on the technicalities... and AI is going to evolve quickly to do more and more that is further and further away from merely blending cut and paste.
It's going to turn when all the grifters trying to make bank on it realize they can't copyright the stuff they make and thus can't grift money out of it.
 

Morrus

Well, that was fun
Staff member
I'm allowed to learn to paint by watching Bob Ross. Why not let an AI learn too? Why can't it look at anything publicly available and learn just like people do? There are obvious reasons ... but at the core, that argument will be there and in the end will win.
I'm allowed to learn to write by watching jgsugden. Why not let an AI learn too? Why can't it look at anything publicly available and learn just like people do? There are obvious reasons ... but at the core, that argument will be there and in the end will win.
 

I'm allowed to learn to write by watching jgsugden. Why not let an AI learn too? Why can't it look at anything publicly available and learn just like people do? There are obvious reasons ... but at the core, that argument will be there and in the end will win.
I'm allowed to learn to dance by watching Morrus. Why not let an AI learn too? Why can't it look at anything publicly available and learn just like people do? There are obvious reasons ... but at the core, that argument will be there and in the end will win.
 

This discussion is not going to turn, in the end, on the technicalities... and AI is going to evolve quickly to do more and more that is further and further away from merely blending cut and paste.
We've diverged a little bit here. I know this thread is about AI, but the part of the discussion I was engaged in wasn't about AI regulation. It's about social media algorithms, click bait titles and the monetization of negativity via those means.
That must be why we don't have any copyright laws. I mean, we know certain countries do nothing to enforce copyright (as the US sees it), so obviously we don't have any copyright laws because they'd be pointless, right? Or do we, despite limitations, still rely upon these laws extensively?
And these laws are becoming... less useful. When they were created, the internet did not exist. The ability for a server to be hosted anywhere in the world and provide "illegal" content to users in other parts of the world wasn't a thing. Books were printed in a few places and had to be shipped en masse to be commercially viable. Port inspectors were a thing.
The best technological example of this applying to the internet is the Great Firewall of China. Each nation could choose to do the same, but the technology exists to bypass each of those as well. Solutions exist, but those too can be breached, and patched, and breached, and patched...
Obviously, the systemic solution is going to be more effective faster because - as noted - relying upon people to do the right thing is entirely ineffective on a large scale. People suck.
Not on the individual level. Each one of us can immediately help ourselves avoid the negativity of clickbait et al by refusing to engage. It's immediate, and effective. Regulation of the platform algorithms has been poor and piecemeal. It will never be faster (though it might be more effective en masse) than what we can do for ourselves.
 

jgsugden

Legend
I'm allowed to learn to write by watching jgsugden. Why not let an AI learn too? Why can't it look at anything publicly available and learn just like people do? There are obvious reasons ... but at the core, that argument will be there and in the end will win.
Well now you just sound fooli .... WAIT A SECOND!!!!! :p
 

jgsugden

Legend
[In response to people suck]...Not on the individual level. Each one of us can immediately help ourselves avoid the negativity of clickbait et al by refusing to engage. It's immediate, and effective. Regulation of the platform algorithms has been poor and piecemeal. It will never be faster (though it might be more effective en masse) than what we can do for ourselves.
Yes, you can avoid clickbait by not clicking on it and you, individually, will benefit from isolating yourself from it.

That has little to do with the larger AI issues under discussion here, but that is true.
 

Yes, you can avoid clickbait by not clicking on it and you, individually, will benefit from isolating yourself from it.

That has little to do with the larger AI issues under discussion here, but that is true.
Again, my comments were not in regard to the AI part of this discussion, but rather the clickbait creators and platform algorithms.
 

Clint_L

Hero
No, that's not what generative AI does.

AI detects patterns in random visual noise that resembles what the user requested, then repeats the process.
It's a lot more complicated than that, and the reality is that no one knows exactly how generative AI does what it does, but there is significant research showing that it has already exceeded its intended parameters in many surprising ways.

Also, I think what you described is pertinent to a great deal of animal cognition, including human animals, which is also rooted in pattern recognition and prediction. One thing that the rise of generative AI has made me consider is the amount of human writing that is essentially unoriginal. Most of it.
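
For readers wondering what "detects patterns in random visual noise... then repeats the process" looks like in practice, here is a deliberately toy sketch of the iterative denoising loop used by diffusion-style image generators: start from pure noise and repeatedly subtract the noise a model predicts. The predict_noise function below is a made-up placeholder for a trained network (a real one would be conditioned on the text prompt and the step); none of this reflects any particular product's actual pipeline.

```python
# Toy sketch of iterative denoising, the core loop behind diffusion-style
# image generators. "predict_noise" stands in for a trained neural network;
# it is purely illustrative and not any real library's API.
import numpy as np

rng = np.random.default_rng(0)

def predict_noise(image, step):
    # A real model would estimate the noise component of `image`,
    # conditioned on a text prompt and the current step. Here we just
    # return a fraction of the image so the loop is runnable.
    return 0.1 * image

def generate(steps=50, size=(64, 64)):
    x = rng.standard_normal(size)                      # start from pure random noise
    for t in range(steps, 0, -1):
        x = x - predict_noise(x, t)                    # remove a little predicted noise
        if t > 1:
            x = x + 0.01 * rng.standard_normal(size)   # small stochastic nudge
    return x

img = generate()
print(img.shape, round(float(img.std()), 3))
```

The point is only to show the shape of the loop--statistical refinement toward learned patterns rather than literal cut-and-paste; whether that counts as "learning" or "collage" is exactly what the thread above is arguing about.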
 
