WotC: 'We made a mistake when we said an image not AI'


It seems like AI art is going to be a recurring news theme this year. While this is Magic: the Gathering news rather than D&D or TTRPG news, WotC and AI art has been a hot topic a few times recently.

When MtG community members observed that a promotional image looked like it was made with AI, WotC denied that was the case, saying in a now-deleted tweet "We understand confusion by fans given the style being different than card art, but we stand by our previous statement. This art was created by humans and not AI."

However, they have just reversed their position and admitted that the art was, indeed, made with the help of AI tools.

Well, we made a mistake earlier when we said that a marketing image we posted was not created using AI. Read on for more.

As you, our diligent community pointed out, it looks like some AI components that are now popping up in industry standard tools like Photoshop crept into our marketing creative, even if a human did the work to create the overall image.

While the art came from a vendor, it’s on us to make sure that we are living up to our promise to support the amazing human ingenuity that makes Magic great.

We already made clear that we require artists, writers, and creatives contributing to the Magic TCG to refrain from using AI generative tools to create final Magic products.

Now we’re evaluating how we work with vendors on creative beyond our products – like these marketing images – to make sure that we are living up to those values.


This comes shortly after a different controversy, in which a YouTuber accused them (falsely, in this case) of using AI on a D&D promotional image, after which WotC reiterated that "We require artists, writers, and creatives contributing to the D&D TTRPG to refrain from using AI generative tools to create final D&D products."

The AI art tool Midjourney is currently being sued in California by three Magic: The Gathering artists who discovered that their work, along with that of nearly 6,000 other artists', had been scraped without permission. That case is ongoing.

Various tools and online platforms are now incorporating AI into their processes. AI options are appearing on stock art sites like Shutterstock, and creative design platforms like Canva now offer AI features. Moreover, tools within applications like Photoshop are starting to draw on AI, with the software intelligently filling in the spaces left where objects are removed, and so on. As time goes on, AI is going to creep into more and more of the creative processes used by artists, writers, and video-makers.

 


The issue here was that WotC did neither: they've required product art to be non-AI for a while, but they didn't extend the same requirement to promotional art. Nor, clearly, did they ask for proof of work.
I appreciate being corrected if I am wrong on something, and this is certainly an area where I don't know much (though I do work with artists for my own publishing). But I wasn't just making stuff up; I was saying what my understanding was, based on the article. That could be totally wrong, but it was based on my own experience hiring artists. I am just a small publisher with little technical skill on the art side, so if there is a way to vet for AI, that is good. I just didn't realize there was a way to see the use of these tools after the fact.
 


Umbran

Mod Squad
Staff member
Supporter
I just didn't realize there was a way to see the use of these tools after the fact

There isn't a definitive way to do so. Generative AI doesn't understand what it is creating - so it sometimes doesn't get details correct - eyes don't look like they are both focusing on the same point, a figure has the wrong number of fingers, objects in a scene aren't connected to each other in a rational way, and so on.

But note that our ability to interpret these signs as proof is flawed. The community has already seen one very public case of someone calling "AI art!" and being quite wrong about it.

And, obviously, we can get cases of, "Um, no. Canonically this character has six fingers on one hand, and this other character canonically suffers from strabismus."
 

I appreciate being corrected if I am wrong on something, and this is certainly an area where I don't know much (though I do work with artists for my own publishing). But I wasn't just making stuff up; I was saying what my understanding was, based on the article. That could be totally wrong, but it was based on my own experience hiring artists. I am just a small publisher with little technical skill on the art side, so if there is a way to vet for AI, that is good. I just didn't realize there was a way to see the use of these tools after the fact.
So when an artist uses digital tools to make a piece of art, they don't just instantly create a single-layer file. They'll have a history of what they did, which they can send you, the person who commissioned the art. That's the proof of work referred to.

When AI creates "art", it does instantly create a single-layer file.

That's the most straightforward way to check for AI art.

Now, if there's really just a "tool" in Adobe or whatever which does this, then even if the result is layered in, its use will still show up in the history (or the history itself may be suspect, i.e. it starts with the piece nearly finished), so you should still be able to see when that tool was used. That's a little more involved, but if someone claims a piece is AI or partially AI, you could then check. I am actually rather skeptical that this was generated mainly by hand with just a "tool" assisting - I suspect the reverse is true - but either way, if you have the files showing how the piece was created, you can check at a later date.
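The layer-and-history check described above boils down to a rough triage heuristic. Here's a minimal sketch in Python - the function name, inputs, and verdict strings are all invented for illustration, not any industry standard:

```python
def triage_submission(layer_count: int, history_steps: int,
                      history_starts_near_final: bool) -> str:
    """Rough triage of a delivered art file, following the reasoning above.

    Inputs are things a commissioner could read off the working file:
    how many layers it has, how many recorded edit steps it contains, and
    whether the earliest history state is already a nearly finished image.
    """
    # A generated image typically arrives as one flattened layer
    # with no edit history at all.
    if layer_count <= 1 and history_steps == 0:
        return "flattened with no history: request working files"
    # A history that *begins* with a near-complete image suggests an
    # AI base image that was then painted over.
    if history_starts_near_final:
        return "history starts near-final: possible AI base image"
    return "consistent with a hand-made workflow"


print(triage_submission(1, 0, False))    # flattened with no history: request working files
print(triage_submission(40, 900, False)) # consistent with a hand-made workflow
```

In practice the inputs would come from inspecting the delivered working file (e.g. a PSD's layer records), and the verdicts are prompts for a conversation with the artist, not proof of anything on their own.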

Re: hiring artists, you can contractually require them not to use any generative AI tools. If they then do, you have legal recourse against them. My feeling is that if this becomes commonplace, we'll pretty quickly see things shake out into artists who don't use it and artists (and "artists") who do. The latter are likely to be cheaper, of course.
 

There isn't a definitive way to do so. Generative AI doesn't understand what it is creating - so it sometimes doesn't get details correct - eyes don't look like they are both focusing on the same point, a figure has the wrong number of fingers, objects in a scene aren't connected to each other in a rational way, and so on.
Yup. This is why you need the file(s) showing how the piece was created, not just the final image. I know some artists will be cagey or difficult about that (god knows we've had issues about "just send us the actual file not this terrible low-res bitmap you've decided we should have, please!" before!), but it's one of those "this is why we can't have nice things" deals, and artists who are unwilling to do it will (and probably should) become somewhat suspect.

I think in most cases it'll be users who find something suspect and then cause a piece to be looked into, where it gets discovered as AI art. Once professional consequences start kicking in, there will be fewer attempts at passing off, and more just lowering prices on partial-AI stuff.
 

Umbran

Mod Squad
Staff member
Supporter
So when an artist uses digital tools to make a piece of art, they don't just instantly create a single-layer file. They'll have a history of what they did, which they can send you. That's the proof of work referred to.

When AI creates "art", it does instantly create a single-layer file.

As soon as art editors start checking work like that across industries as general policy, we will see AI created to also produce that history.

Generative AI is not currently designed to create it, but there's no digital asset it technically cannot put together.

Ultimately, we may need something like a company that makes digital art tools that creates trust that none of the tools use generative AI, and uses something like PGP to sign the art so produced to certify it AI-free.
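A minimal sketch of what that certification could look like, using only Python's standard library. Note the hedge: HMAC with a shared secret stands in here for a real public-key signature (PGP-style signing would let anyone verify without holding the vendor's key), and every name below is invented for illustration:

```python
import hashlib
import hmac

# Secret held by the (hypothetical) trusted tool vendor.
VENDOR_KEY = b"tool-vendor-signing-key"

def certify(image_bytes: bytes) -> str:
    """Vendor side: emit a tag asserting this exact file came out of its tools."""
    return hmac.new(VENDOR_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, tag: str) -> bool:
    """Art-buyer side: check the tag still matches the file bytes."""
    expected = hmac.new(VENDOR_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

art = b"...pixel data of the finished piece..."
tag = certify(art)
print(verify(art, tag))                    # True: file is untouched
print(verify(art + b"edited later", tag))  # False: any change breaks the tag
```

The useful property is the last line: any edit to the bytes after signing, AI-assisted or otherwise, breaks verification, so the certificate only vouches for the exact file the trusted tool produced.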
 

As soon as art editors start checking work like that across industries as general policy, we will see AI created to also produce that history.
You can't do that though. To have the history, it has to have assembled the piece in stages, and it'll be obviously false if it goes back and tries to simulate every brushstroke - and paint programs do record every brushstroke. Also, as you correctly say, AI art has no understanding of what it's doing, so it couldn't create the fake layers needed. It would be obvious nonsense.

So that might happen one day, but it would require a different approach to generation in the first place, and we're a long way out from that - and I suspect even then the result would be highly distinctive. Further, using such a thing would be deception, and whilst criminal action is unlikely, civil action is highly likely. Even if the technology existed, it would get found out, and people would get bankrupted being sued over this.
Generative AI is not currently designed to create it, but there's no digital asset it technically cannot put together.
They'd need to start again from first principles to make AI generate art like an artist does. It's fundamentally not how AI art works. It doesn't have brushstrokes, it doesn't have layers, it doesn't have you selecting X tool and performing Y function, it just BLARTS out a fully-formed calculation and says "This am picture".

Could they build a thing that operated like an artist? Sure. But nothing like that is on the market right now. The bigger threat right now is part of a scene being done with AI - just the background or whatever - which is what appears to have happened here (well, I suspect it was almost all of the scene, but I'd need the files to prove that). And that's where your point becomes very relevant:
Ultimately, we may need something like a company that makes digital art tools that creates trust that none of the tools use generative AI, and uses something like PGP to sign the art so produced to certify it AI-free.
This is indeed the ideal - and I suspect we will see it - but probably not from Adobe or their ilk, because honestly they're heavily invested in trying to legitimize AI art, not to create a situation where it can be identified easily.
 

Scribe

Legend
Also @Scribe, you're significantly overstating the progress. It wasn't "a year ago" (though sure, COVID etc. may make it feel that way) that AI art looked like you describe, it was multiple years ago, and the rate of improvement has declined drastically, and will continue to decline, especially as we're likely to increasingly see AI art face more regulatory challenges, potentially being sent "back to the drawing board", and so on. The next steps forwards for AI art tools are likely to be usability-related, moving away from carefully-worded and often tricksy prompts to more straightforward selections.
You are almost certainly correct. Perception of Time is a bit off for me now. :)
 

this treads close to "what is art"

which part of any of the following is sub par?

[three attached AI-generated images]
As far as I can tell, nobody has used the word “abomination” in their reply yet, but I’ll go that far. These are intriguing, detailed images wherein the more closely you look, the more details you realize are totally wrong, in unsettling ways. It’s like looking into some particularly diabolical hell, where every face is a monster and every sign is a curse.

I probably sound hyperbolic, but I truly believe it. Those images are hideous, and they look more hideous the longer I look at them. They’re technically bad, but they also remind me of the insidious disruption they represent, which should scare everyone.

A few years ago, internet slang for pictures like this was “cursed images”, regardless of creator. Those are images that are deliberately wrongly/badly drawn to make the viewer uncomfortable. Generative AI (probably, I assume) doesn’t intend to make its viewers uncomfortable, but the outcome is the same: profoundly unsettling images. Why is there so much detail when all the details are wrong?? The only thing generative AI seems to draw well is beautiful faces, likely because the horndogs who invented this technology considered pretty faces to be the type of images in the highest demand.
 

So when an artist uses digital tools to make a piece of art, they don't just instantly create a single-layer file. They'll have a history of what they did, which they can send you, the person who commissioned the art. That's the proof of work referred to.

When AI creates "art", it does instantly create a single-layer file.
I usually get images with layers so I can make formatting changes in Photoshop. But I haven’t published a book since DriveThru issued its AI rules. One question I have is whether the tools in Photoshop that utilize AI are instantly recognizable when you look at the layers. I am not familiar with the tool they were talking about in the article.
 

Umbran

Mod Squad
Staff member
Supporter
You can't do that though. To have the history, it has to have assembled the piece in stages, and it'll be obviously false if it goes back and tries to simulate every brushstroke - and paint programs do record every brushstroke.

We used to think that AI couldn't produce such complicated images at all. Declaration that such feats are impossible is... maybe not a good bet to make.

Better to say that we can't do that... yet. But, again, short of cryptographic signing, in principle there's no digital asset that generative AI systems cannot create. There are only ones they haven't yet been trained to create.

Also, as you correctly say, AI art has no understanding of what it's doing, so it couldn't create the fake layers needed. It would be obvious nonsense.

If you make such layers part of the training data, it is entirely possible to do.

So that might happen one day, but it would require a different approach to generation in the first place, and we're a long way out from that...

"A long way" doesn't mean what it used to. For the next couple of years, maybe this will suffice, but if there's sufficient money at stake, the technology will catch up sooner than we'd want it to.

They'd need to start again from first principles to make AI generate art like an artist does.

No, again, they'd need to start with data on how an artist works. Current generative AI is trained on only the end products, because those are easy to get off the internet. Feed it files with those histories, though, and it becomes a different ball game.

Watch for image creation tools to start having terms that allow access to your data for "diagnostic purposes"...

It's fundamentally not how AI art works. It doesn't have brushstrokes, it doesn't have layers, it doesn't have you selecting X tool and performing Y function, it just BLARTS out a fully-formed calculation and says "This am picture".

That is not fundamental to the technology. That is merely what we have trained it to produce. Note that producing art and producing text is not even fundamental to the technology. For example, back in the day, I did research in training neural networks to simulate high energy particle collisions for tuning data analysis tools at accelerators.

Since the AI doesn't understand what it is doing, it also doesn't actually care what it is doing - what is fundamental to the technology is intake of digital data and output of things that are similar to that data. And that's about it, fundamentally speaking.
 
