D&D General DMs Guild and DriveThruRPG ban AI-written works, require labels for AI-generated art

I understand the ethical issues involved in using AI art that was trained on scraped images without permission, but I also understand the value of being able to create the images you want for your project. Blocking all AI art has the effect of increasing the barrier to entry for people who can't afford to pay hundreds or thousands of dollars to commission professional custom art.
In my view, artists borrow from each other all the time. "Good artists borrow, great artists steal." Much of art is reinvention, inspiration, and homage. AI art is no different. I also feel AI art passes the legal test of being "transformative".

I understand the need of artists to economically maintain a viable livelihood - namely, to survive and flourish in real life. At the same time, this is a concern for every member of the human species whose livelihood technology eventually outmodes.

Artists remain important. AI by itself is less useful and less appealing when it only references itself solipsistically. AI needs humans to make its products meaningful.

It is probably a good idea that no AI artwork can be copyrighted. But when an artist curates images, and especially when carefully guiding and personally modifying the image, at a certain point the AI is a medium and the artist should be able to copyright the result. This seems a legally gray area.
 


That's the troubling thing about generative AI: it turns out that a lot of tasks we thought required an understanding of "true human psychology" or some other uniquely human trait can be executed very competently by a machine.

There are two things I am confident of.

First, that humans have absolutely no understanding of what intelligence is or what is hard. We have assumed all through history, until really the past few years, that what was intelligent was what required rare skill in a human and usually lots and lots of training and study to develop - often training and study beyond the average human. So for example, we assumed playing chess well was hard and a mark of intelligence, or that being able to paint an object was hard and a mark of intelligence. Early in AI development history we even had people teaching computers how to play chess on the assumption that if the computer could play chess, the complexity of the software that played chess would hit some threshold where general intelligence would just naturally emerge.

Turns out that the things we assume require intelligence are things that we are simply morons at. Multiplying two big numbers in your mind is no proof of intelligence, because it's computationally very inexpensive; it's just that humans are generally morons at remembering numbers, and when we see someone who is barely more functional than the normal moron level humans reach, we go, "Wow, that's amazing." Statistics? Humans are so bad at statistics that it took them 3000 years after inventing math to even conceive of them, and they are still so unable to understand them that they misuse them all the time.

The things that humans are almost universally good at - the things almost every human can do well - we don't think of as intelligence, and because the skill is universal we don't prize it. But it turns out that those things we are almost universally good at are often much more impressive than the stuff we are bad at and have to find unusual members of the population to train into performing.

What this means from an AI researcher's perspective is that the "easy jobs" we want automated away because we don't find them rewarding are the least likely to be automated away, while the "hard jobs" that so many people assumed would be beyond AI's ability are very often the easiest to automate away, because it won't be hard to create an AI better than someone who is really just slightly better than the average human moron. We actually already did that decades ago, when the job of "computer" was replaced by mechanical and electronic computers to such an extent that when we use the word "computer" we don't even think of a highly skilled and reasonably well-paid professional - we think of a machine. But more of that is coming.

And the second thing I'm absolutely confident of is that humans think they are super special for no good reason, and that whenever you hear a human say "a computer will never be able to do that because it will never truly understand something the way a human can", there is a good chance we're already past that point. For me, this happened almost thirty years ago when I was watching Deep Blue's rematch against Kasparov, and there were multiple points in the play - most famously 36.axb5! axb5 37.Be4! - where Deep Blue passed the Turing test, because everyone including Kasparov assumed that so deep an understanding of what chess is about was beyond the capacity of a "soulless machine". Maybe the even bigger moment for me, though, was game 5, in the endgame, where Deep Blue secured a draw off a line of play that again caused commentators to suggest Deep Blue was receiving human input - a line whose point the live commentators did not even see until Kasparov himself conceded the draw, because they couldn't see more than two or three moves in advance and were just assuming Deep Blue was confused. Literally right up until the moment Kasparov conceded, you had people saying computers would never beat a human because they lacked some uniquely human trait.

The biggest danger of AI is not that it is going to act like a human. The biggest danger represented by AI is that within just a few years it will be wittier, funnier, more knowledgeable, and more engaging than anyone you know. The real danger isn't Terminator, where AI decides to fight us for mates and material possessions and power like it's another ape. The real danger of AI is WALL-E: that we create AI that cares for us and pampers us and does everything for us.
 

That's completely false.

Chess is math and AI 'art' is just advanced plagiarism of stolen intellectual property.

Remember when the same con artists pitching AI promised that cryptocurrency and NFTs were totally legit and going to revolutionize finance? Instead they're mainly used by criminals, and they proved why their real-life counterparts are heavily regulated.

The claimed AI 'revolution' is just the same old techbros rolling out their latest attempt to rip people off.

AI has yet to provide even 10% of the claimed benefits, instead it just creates new and exciting ways to scam people.

We DO know how AIs 'think.' They don't. They don't think, they don't comprehend; they just blindly regurgitate whatever they're programmed to produce.
 


Turns out you don't actually need to think in order to outdo or replace humans in a lot of tasks.
There is no 'outdoing' or 'replacing,' AI is entirely dependent on humans to create the work it plagiarizes and AI creators have outright stated that they need to keep stealing intellectual property to train their AIs on.

You're cheering on theft of intellectual property on an industrial scale.

AI isn't sustainable, it destroys the livelihoods of the very creatives it needs to keep going.

The fact AI advocates have to lie to defend AI says it all.
 

 


DMs Guild and DriveThruRPG didn't need to ban AI specifically, since they already ban plagiarized and stolen works, but it's good to see them striking back against this kind of theft.

AI is based entirely on copying or outright theft of work done by other people. It is completely unsustainable, has yet to deliver even a fraction of the promised benefits, and its loudest advocates in the tech industry are known con artists and fake experts - basically a copy-paste of the same people who were advocating for cryptocurrencies and NFTs. It's ruining the livelihoods of the people creating the work it steals and the people it replaces, and all it amounts to is advanced plagiarism.

We know how AI works, this isn't a complicated issue.
 
