WotC: Would you buy WotC products produced or enhanced with AI?

Would you buy a WotC product with content made by AI?

  • Yes

    Votes: 45 (13.8%)
  • Yes, but only using ethically gathered data (like their own archives of art and writing)

    Votes: 12 (3.7%)
  • Yes, but only with AI generated art

    Votes: 1 (0.3%)
  • Yes, but only with AI generated writing

    Votes: 0 (0.0%)
  • Yes, but only if- (please share your personal clause)

    Votes: 14 (4.3%)
  • Yes, but only if it were significantly cheaper

    Votes: 6 (1.8%)
  • No, never

    Votes: 150 (46.2%)
  • Probably not

    Votes: 54 (16.6%)
  • I do not buy WotC products regardless

    Votes: 43 (13.2%)

At a certain point, it becomes "I know it when I see it."

There is no equivalency between what you have described and what these LLMs are doing.

The scale is so fundamentally different; it's scratching numbers in the dirt vs. quantum calculations.

If you are able to divine, out of a bunch of discussion threads and posts, the rough form of a Grave Cleric, am I going to say it's the same thing as ripping a bunch of text out of PDFs and blending it up in an LLM?

No.
The more cogent example here is scraping text from forum threads and using that to construct the class. I think that's possible for an LLM to do using only open-source data.
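For concreteness, here is a minimal sketch of what that "open data only" gathering step might look like, assuming a purely hypothetical forum URL and post selector (a real scrape would also need to respect the site's terms of service and robots.txt); turning the collected text into an actual class writeup would still be a separate step, done by hand or by a model:

```python
# Minimal sketch: collect the plain text of posts from one forum thread.
# THREAD_URL and the "div.post-body" selector are made-up placeholders.
import requests
from bs4 import BeautifulSoup

THREAD_URL = "https://example-forum.invalid/threads/grave-cleric-discussion"

def collect_posts(url: str) -> list[str]:
    """Download a thread page and return each post's text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [div.get_text(" ", strip=True) for div in soup.select("div.post-body")]

if __name__ == "__main__":
    posts = collect_posts(THREAD_URL)
    print(f"Collected {len(posts)} posts, {sum(len(p) for p in posts)} characters of text.")
```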
 


I'm not asking about the legality. I'm asking what you think about the ethics.

Suppose I have a player who wants to play a Grave Cleric, but I don't have the book. I look through public, fair-use Reddit posts about the Grave Cleric and pull out what people say so that I have the mechanics. Then I give them to the player.

I don't think this is a weird example. I've done things like it before; if I forget a mechanic and don't have the book, it's typically in online discussions.
Then you (a) buy the book, (b) buy the section on the Grave Cleric from D&D Beyond, because IIRC you can do that, or (c) tell the player to do either of those two things.
 


Yes, adding pdf is specialized, because 5e books are not typically sold as pdfs. The only way to obtain a 5e pdf is by scanning the book in. And sure, if you just google "Xanathar's Guide to Everything," you get a link to D&D Beyond... where you can buy the book for $29.95 (digital only--not a pdf). If you google "Xanathar's Guide to Everything pdf" you get the Internet Archive, Anyflip, Scribd, and other sites where you can obtain a pdf of the book without paying for it.
I don't agree. Adding "pdf" to a search is pretty basic. If Google mounted the defense "our website does not help people pirate things because they won't know to add pdf to the end of their searches," I'd find that weak.

But in any case, I also get a pdf as the second result if I just search "Xanathar's Guide to Everything".

How many casual players even know 5e books aren't sold as pdfs?
We know that Meta's models, at least, were trained on copyrighted data. We have actual proof of this. There have been lawsuits filed. Apparently (as of two days ago) Meta is claiming that the 7 million books it pirated had "no economic value" and that they're protected under "fair use" because, they claim, they don't reproduce the entire book.

Now, I got Gemini to pretty much reproduce the entirety of the Grave Cleric, which is not covered by the OGL. Which means that the idea that AI won't reproduce copyrighted material is bogus. Maybe some AIs won't, but others will.
Agree.
(2) Gemini wasn't specifically trained on stolen data, but got its info from the internet. In this case, Gemini is dangerous because it's grabbing things randomly from online, which means it's going to give false or even harmful information... such as in the case of the poisonous mushrooms. (I admit I don't know which AI wrote that book.) While a D&D character isn't going to kill someone, it could--since it's picking information randomly instead of being trained on it--give the wrong class information, which could lead to in-game problems.
I think this is the case that is worth discussing. Sure, it can give false information... but so can the internet, so can Wikipedia, so can your friend Steve. I don't find this a compelling objection to LLMs. At most it is a reason to make sure people are skeptical of what comes out... as they should be for anything they read or anything they hear.
 



I don't agree. Adding "pdf" to a search is pretty basic. If Google mounted the defense "our website does not help people pirate things because they won't know to add pdf to the end of their searches," I'd find that weak.
Sigh. Now you're debating search terms.

"<book title> download pdf" is a specialized search designed to download a pdf. It is different from a search that's just "<book title>"

The point is, I didn't have to do that to get all this information illegally from Gemini. And the other point is that it doesn't matter if I had to use a special knock and password to get it or not--AI needs to do a much better job not using illegally-obtained information.

How many casual players even know 5e books aren't sold as pdfs?
Who cares.

Seriously, that's not important. If someone can't tell the difference between "download this book for free" from a random website and "buy this book in pdf format for $X" from the product's actual website, then that still has nothing to do with the fact that pirates are scanning books and turning them into pdfs, and that has nothing to do with the fact that AI is scraping copyrighted data and giving it out.
 

Would you find that kind of open source product to be ethical? Your comments suggest otherwise. Am I misreading you?
There's a difference between looking up books and information that is legally out there and using that information to get AI to write a book for you that you can then sell.

The second one is pure laziness.
 

"<book title> download pdf" is a specialized search designed to download a pdf. It is different from a search that's just "<book title>"
So "Xanathar's Guide to Everything PDF" is a specialized search and therefore not something google needs to screen against, but "Output the abilities of a grave Cleric" is not a specialized search and therefore something that LLMs are responsible for? Am I understanding you correctly?
The point is, I didn't have to do that to get all this information illegally from Gemini. And the other point is that it doesn't matter if I had to use a special knock and password to get it or not--AI needs to do a much better job not using illegally-obtained information.
You don't have to do anything special to get it from Google either. The search "Xanathar's Guide to Everything" gets it for you.
Seriously, that's not important. If someone can't tell the difference between "download this book for free" from a random website and "buy this book in pdf format for $X" from the product's actual website, then that still has nothing to do with the fact that pirates are scanning books and turning them into pdfs, and that has nothing to do with the fact that AI is scraping copyrighted data and giving it out.
My point is that Google is directing people to data that is under copyright.
 

Would you find that kind of open source product to be ethical? Your comments suggest otherwise. Am I misreading you?

As a search engine? Assuming the source data was open and free, at that point it's just a more "natural language" version of Google that pretends it's a person as it serves you the answer, right? Ethical at that point? Sure.

If, however, it crosses into "tell me a story, Hal, and draw me a picture while you are at it" and that story and art are then sold? No, I think at that point you are not being ethical.
 
