AI/LLMs AI art bans are going to ruin small 3rd party creators

The work “created” is derivative, if I understand correctly.
In that case, the creators are all the artists whose works were drawn upon, plus the person who designed the prompt for the AI/LLM.



To bring that idea back home:

2e
This is a derivative work based on the original ADVANCED DUNGEONS & DRAGONS Players Handbook and Dungeon Masters Guide by Gary Gygax and Unearthed Arcana and other materials by Gary Gygax and others.


3e
BASED ON THE ORIGINAL DUNGEONS & DRAGONS® RULES CREATED BY E. GARY GYGAX AND DAVE ARNESON


5e (2024)
Based on the Player's Handbook (2014) designed by Jeremy Crawford (lead), Bruce R. Cordell, Tom LaPille, Peter Lee, Mike Mearls, Robert J. Schwalb, Rodney Thompson, James Wyatt

Building on the original game created by
Gary Gygax and Dave Arneson and then developed by many others over the past 50 years
Agreed, well mostly. I don’t know for sure that I’d credit the creator of the original work as the creator of the derivative, even though there is clearly influence there. But I’m open to either interpretation.

What about the programmers and engineers that created the LLM and algorithm it uses to process the prompt?
 


Presumably at the very least you are functioning as director and final editor of that endeavor.
I do definitely believe that art direction is creative output, and I do not mean to diminish art direction as an important role. If that endeavor was to send requests for art to a black box, and the black box said "here is a bunch of CC-licensed artwork, please follow their authors' terms of use as follows: ...", or "for sending $500 along with your request, you may now use this library of stock art meeting your specifications", or "thank you for the commission, I'll get right on that art for you", the art direction endeavor would have been the same, yet this debate would not exist. The subject of this debate isn't the art director. The subject is the black box, the societal cost of using it, and the people convincing themselves that using the soulless, environment-killing, labor-stealing box (which is not a black box at all) carries the same weight as using the other boxes.
 

Fundamental question here. Who did create that novel? Unless AI is sentient, with intention and will, then surely the creation must ultimately be attributable to a person, or at least to multiple people?
I would say no one wrote it. However, the "who" here might be the problem, or the wrong question. I will let someone with more knowledge of LLMs, source-scraping technology, rights issues, and whatnot answer this, if they want.
That sounds very similar to letting the AI know what you want and developing the necessary skills to do that? Is there something fundamentally different there?
Human experience, understanding, intelligence, and the like? As above, I'll let someone better-versed respond or explain as they see fit. I'm at work right now.
 

Agreed, well mostly. I don’t know for sure that I’d credit the creator of the original work as the creator of the derivative, even though there is clearly influence there. But I’m open to either interpretation.
The creator of the original work has a claim, and rights.
What about the programmers and engineers that created the LLM and algorithm it uses to process the prompt?
Not my area of expertise.
My first thought is that I am under no obligation to cite a tool.
Does your word processor have rights to a novel that you wrote? (Old school writing)

Edit: the employees who developed the AI/LLM are presumably being paid for their work. What their rights are with respect to the tool(s) that they build is between them and their employer.

AFAIK, IANAL, other relevant disclaimers.
 


I do definitely believe that art direction is creative output, and I do not mean to diminish art direction as an important role. If that endeavor was to send requests for art to a black box, and the black box said "here is a bunch of CC-licensed artwork, please follow their authors' terms of use as follows: ...", or "for sending $500 along with your request, you may now use this library of stock art meeting your specifications", or "thank you for the commission, I'll get right on that art for you", the art direction endeavor would have been the same, yet this debate would not exist.
Glad we can agree there.
The subject of this debate isn't the art director. The subject is the black box, the societal cost of using it, and the people convincing themselves that using the soulless, environment-killing, labor-stealing box (which is not a black box at all) carries the same weight as using the other boxes.
Well that’s not been the primary debate I’ve been involved in for this thread.

But I’m also not convinced the arguments about major negative impacts from the technology are fundamentally any different from those against any other form of automation. Potentially more far-reaching, but not fundamentally different in principle.

I’m not even sure the initial scraping of the internet to train, even on copyrighted material, was unethical, but if not, it was right at the edge. There’s a notion of fair use for transformative and/or educational works.

Those are discussions I’m willing to have, but that’s not what I’ve been discussing here.
 

The creator of the original work has a claim, and rights.
Legally they have rights. But that’s not the same as saying they were the creator of the derivative.

Not my area of expertise.
My first thought is that I am under no obligation to cite a tool.
Does your word processor have rights to a novel that you wrote? (Old school writing)
That’s the “AI is a tool” camp. If AI is a tool like a word processor, then the person using it is the creator, the artist, etc., or at least I don’t see a logical way around that.
Edit: the employees who developed the AI/LLM are presumably being paid for their work. What their rights are with respect to the tool(s) that they build is between them and their employer.
I’m not really asking about legal rights.
AFAIK, IANAL, other relevant disclaimers.
 

Edit: the employees who developed the AI/LLM are presumably being paid for their work. What their rights are with respect to the tool(s) that they build is between them and their employer.

Listen to the Ezra Klein interview of Naomi Klein (no relation) from the last few days: some really interesting parts* about how tech employees (in particular) have started exerting control over the companies they work for, and the tech bros...Marc Andreessen is quoted...are terrified by that.

*Really most of the parts were interesting.
 

So I ask, in AI ‘art’ or whatever you want to call it, who or what is the creator of the unique image that gets produced?
The creators of an AI image are every human being who contributed training data or prompts which, together, generated that image, the same way the creators of a collage I produce by splicing together two pre-existing images are myself and the artists who created the two pre-existing images.

And please note the nuance of my stance. If an artist or workshop of artists train their own gen AI exclusively on their own images, and they then use that AI to produce variations of their own images, then they are the creators of everything their AI produces. Every step of the process which produced that output is something which can demonstrably be identified with them and them alone.

In a round-about way, this harkens back to a question @Bill Zebub posed up-thread. To paraphrase: What's the difference between a human producing an image and an AI producing an identical image?

My answer to that question (as it applies to the gen AI models widely available to the public today) is attribution. I believe the ability of the artist to acknowledge their influences and inspirations is a defining characteristic of human art, in the same way the ability of a researcher to cite their sources is a defining characteristic of academic research papers. Human art is an expression of the human experience, so I want to know that the lived human experiences of the artist or artists have informed their artwork.

An artist who creates a painting using a paintbrush can explain every influence which affected that work during their lifetime. The artist can tell me where they lived, what artists' work they've previously seen, what painting techniques they've studied. If they provide enough detail about their life and their environment, I can understand the series of contemporary events that led to the creation of their painting, and I can understand how their lived human experience gave rise to that piece of art. (The same is true about a prompt engineer and their prompt.)

In contrast, a human who creates an image by feeding a prompt into one of the gen AI models currently available to the general public can't name every artist whose work contributed to that image. The image generation process is (in practice, if not in theory) a "black box" whose internal workings are not presented for scrutiny. The prompt engineer has no way to cite their sources, as it were. There's no way (with the specific AI models under discussion here) to accurately provide attribution to all the individuals whose lived human experiences combined to produce the AI image.
 

Until it lands in court, whether or not it holds stays speculation.
Sure, I suppose it is always possible they will overturn an older ruling. But DaVinci Editrice S.r.l. v. Ziko Games isn't from last century, it's from 2016, and it says the same things as the older ones. It seems unlikely to me they will overturn it for a TTRPG context. But sure. Anything is possible.
 
