D&D General DMs Guild and DriveThruRPG ban AI-written works, require labels for AI-generated art

That said, I have not seen a particularly compelling case that an AI doing visual art is doing something meaningfully different from what artists who learn by looking at other art are doing. The fact that the actual processes are different doesn't seem self-evidently significant from where I sit. Both are capable of actively copying a style, and both are capable of mixing influences and creating divergent material.
A human needs to be deliberate in copying. Otherwise they end up doing something very personal. Even imitators end up adding their personal touch. Humans aren't just copiers; humans carry baggage, experiences, feelings, emotions, and desires. No two people will draw the same, while a machine will produce the same result when given the same seed and parameters.
 



That is not the definition of theft. At worst, it is a copyright violation. (That doesn't make it right, but I feel like we should be clear when labeling things.)
🤷‍♀️ Seems like a distinction without a difference to me. Theft is the stronger, more direct term, which is why I use it.
Also, if it WAS trained only on works in the public domain, it wouldn't even be a copyright violation.
Candy and nuts.
 

🤷‍♀️ Seems like a distinction without a difference to me. Theft is the stronger, more direct term, which is why I use it.
That's where we disagree pretty strongly. Which is cool. We are allowed to disagree.

One way I thought of that could (and probably will) make generative AI more legal and even less ethical is if all the image-storage services (iPhoto, DeviantArt, whatever) add clauses to their TOS saying that the owners of the pictures allow the company to use their images for training AI. I bet that is coming.
 


That's where we disagree pretty strongly. Which is cool. We are allowed to disagree.

One way I thought of that could (and probably will) make generative AI more legal and even less ethical is if all the image-storage services (iPhoto, DeviantArt, whatever) add clauses to their TOS saying that the owners of the pictures allow the company to use their images for training AI. I bet that is coming.
Given what a big revenue generator that would be for sites like Flickr, whose biggest periods of growth are behind them, I suspect they're all waiting for someone to go first and attract all the anger before jumping in themselves.
 

One way I thought of that could (and probably will) make generative AI more legal and even less ethical is if all the image-storage services (iPhoto, DeviantArt, whatever) add clauses to their TOS saying that the owners of the pictures allow the company to use their images for training AI. I bet that is coming.
Unfortunately, that does seem likely in the near future.
 


Even OpenAI doesn't fully understand why ChatGPT outputs the text it does. That's why they're having such a hard time preventing it from creating harmful text.
On preventing harmful texts
1. A language model is just a probability distribution over text. It's "large" because it happens to have billions of weights.
2. Because it's a probability distribution with so many weights and relationships between them, it's very difficult to change specific weights to prevent a particular undesirable output. Even if you could, you would likely lose many of the weights and relationships needed to produce realistic results.
3. So instead you might try to pick training data that doesn't contain the problematic elements, but those things may be more pervasive in our language (just under the surface) than we realize, so this doesn't work so well either.
4. So then maybe you train another AI to recognize the results you don't want, which from my understanding is where we are now. But of course a generative AI's predictions/pattern recognition isn't foolproof, so some undesirable text still slips through the cracks.
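To make points 1 and 4 concrete, here's a toy sketch (nothing like OpenAI's actual code; the corpus, blocklist, and function names are all made up for illustration): a bigram model is literally an explicit probability distribution over the next word, sampling from it is deterministic given a seed, and a crude after-the-fact filter shows why output screening is the easy-but-leaky option.

```python
import random
from collections import Counter, defaultdict

# A tiny corpus stands in for the training data.
corpus = "the dragon attacks the party and the party flees the dungeon".split()

# The "weights": next-word frequency counts for each word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_distribution(prev):
    """P(next word | previous word) as an explicit probability distribution."""
    total = sum(counts[prev].values())
    return {word: n / total for word, n in counts[prev].items()}

def generate(start, length, rng):
    """Sample words from the distribution; the same seed gives the same text."""
    out = [start]
    for _ in range(length):
        dist = next_token_distribution(out[-1])
        if not dist:
            break
        words, probs = zip(*dist.items())
        out.append(rng.choices(words, weights=probs)[0])
    return " ".join(out)

# Point 4 above: screen the *output* rather than edit the weights.
# A naive blocklist only catches exact matches, so variants slip through.
BLOCKLIST = {"attacks"}  # hypothetical "undesirable" word

def filtered_generate(start, length, seed):
    text = generate(start, length, random.Random(seed))
    return "[blocked]" if BLOCKLIST & set(text.split()) else text
```

Note that the distribution itself is never edited; the filter sits outside the model, which is why it can both over-block and under-block.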

On understanding ChatGPT
"OpenAI doesn't fully understand why ChatGPT outputs what it does" is technically true - but only because the amount of data is too large and the training process too computationally complex to step through step by step as you would a traditional program. At a high level they know exactly how it works. They will also eventually settle on the reasons it's able to do anything unexpected that we see it doing - like what's happening when it's given a categorization prompt based on categories that are unlikely to be present in its training data.
 

The only one I can see staying out of it is Apple, which has the money to do so, and "your stuff will be kept private from AI" would fit a lot of their branding over the last 10 years.
Until there's a revenue model that pays enough to offset the potential loss of customers from allowing AI to train on their images, I think most sites will ultimately opt out.
 
