WotC Would you buy WotC products produced or enhanced with AI?

Would you buy WotC products with content made by AI?

  • Yes

    Votes: 45 13.8%
  • Yes, but only using ethically gathered data (like their own archives of art and writing)

    Votes: 12 3.7%
  • Yes, but only with AI generated art

    Votes: 1 0.3%
  • Yes, but only with AI generated writing

    Votes: 0 0.0%
  • Yes, but only if- (please share your personal clause)

    Votes: 14 4.3%
  • Yes, but only if it were significantly cheaper

    Votes: 6 1.8%
  • No, never

    Votes: 150 46.2%
  • Probably not

    Votes: 54 16.6%
  • I do not buy WotC products regardless

    Votes: 43 13.2%

Status
Not open for further replies.
I agree that's the basic structure of the argument. The weak point, to me, seems to be "being an accomplice is ok", especially in the case that the accomplice is 1) aware of the crime, 2) aware their service helps people commit the crime, 3) could make their service less helpful for those purposes but chooses not to, and 4) instead actively profits off the crime.

Yep, it's a stretch to exonerate accomplices of all responsibility, and I think very few established ethical systems promote it. It would make "spreading fuel on the forest" acceptable, because it was another person who struck the match... It wouldn't be a very popular ethical system for long.

With regard to equating law and ethics, I disagree with your point that there is no reason to conflate the two. Several ethical systems define law as the source of ethics (Legalist Confucianism, Han Fei Zi's Legalism, Hobbes). Basically, there is no right or wrong unless the will of the Sovereign (who can be collectively agreed upon) establishes it. In such a system, by definition, the law can't be wrong. It can be changed, because the collective agreement shifted to another position, but it can't be inherently wrong, as it's the source of morality.
 


My point wasn't surprise.

My point was that the fact that these results are so highly ranked suggests google could do something about them. We all know the sites that pirate; we see them all the time. Would it be that difficult for google to implement a rule like: "if we find credible evidence of piracy on a site, we deindex that domain for 6 months"?
I already explained it. Plus it would deindex legitimate sites by accident.

This is a poor analogy because the store here owns the material that would be shoplifted. A better one would be a storefront that sold schematics of other stores' security protocols.
And google doesn't own the material that would be pirated, so why should they go out of their way to alter their search engine? Again, that's the job of the material's owner. If WotC really wanted to, I'm sure they could get their team of lawyers together to go talk to google about it.

WotC has apparently decided that it's not worth the time, effort, or money to go after wikidot. Perhaps it's because their attempts to go after similar sites (such as 5etools) haven't borne much fruit.

If you're not interested in this discussion feel free to leave. I think it's been interesting.
The discussion is "would you buy WotC products produced or enhanced with AI?" Talking about google and whatnot merely gets away from the point that some people think it's OK to steal/pirate/scrape material they don't own and use it to produce items for sale, and some people would rather that actual artists and writers are paid.
 

I already explained it. Plus it would deindex legitimate sites by accident.
I don't think your explanation is adequate.

Content moderation happens all the time. It always has to deal with the fact that there is too much content to review manually, and with the risk of accidentally enforcing rules against legitimate content.

There is no reason google cannot do this. And if it is too hard for any search engine to do, perhaps that says something about the ethics of search.
And google doesn't own the material that would be pirated, so why should they go out of their way to alter their search engine? Again, that's the job of the material's owner. If WotC really wanted to, I'm sure they could get their team of lawyers together to go talk to google about it.
Because their search engine is aiding and abetting piracy.
WotC has apparently decided that it's not worth the time, effort, or money to go after wikidot. Perhaps it's because their attempts to go after similar sites (such as 5etools) haven't borne much fruit.
Well yeah, this goes back to the "we as a society have decided piracy is ok" point.
The discussion is "would you buy WotC products produced or enhanced with AI?" Talking about google and whatnot merely gets away from the point that some people think it's OK to steal/pirate/scrape material they don't own and use it to produce items for sale, and some people would rather that actual artists and writers are paid.
I don't think there is such a distinction. Both search engines and LLMs scrape the Internet for data, process and repackage it, and then present it to the user in response to their queries. Both benefit from using copyrighted works to do so. And both face the same issues in avoiding copyrighted material.
 

If WotC ever made anything again that appealed to me I would get it, AI or not. As I don't see that happening anytime soon, I don't have much stake in it.

As for the "AI" component, I don't care. AI is here to stay and I am happy for it. I use it often in my own games and frankly don't care if a company uses it for commercial products or not.

Yes, AI "learns" from real artists, but so what... that is what REAL artists do as well... learn from those who came before them. Some even copy them directly and if they're "caught" face legal issues. AI and companies that use it should face the same. If an artist can prove AI copied them to the degree a real artist could claim another real artist forged their work, then great. Otherwise, it is no different than dealing with a real-life artist stealing your work.

Yes, AI will make it so some people lose their jobs. Again, so what? This happens with every major technological advancement throughout history. A good real artist will still find those who want authentic real art and not AI-created and will pay for real artwork.
Neural networks lack subjectivity and experiences. They can grow and be trained, but what they get cannot be considered "learning". It isn't even remotely the same as a human artist learning from previous art. You probably don't have the experience as an artist or art apprentice to compare, but an artist learning is way more than just memorizing other people's paintings/drawings, and most of the time you aren't committing stuff to memory (in fact, many talented artists suffer from aphantasia and cannot actually memorize pictures, and some talented portrait artists suffer from face-blindness and cannot even distinguish people's faces). Most of an artist's learning involves practicing: learning how color works, how light and shading build volume, how to turn your thoughts into structures that can be projected into two dimensions, how the different instruments respond to your touch, how something as simple as handling a pencil/brush/stylus in a different way changes the result, how the final result can reflect your emotions or what you want to express.

And seriously, AI pictures are extremely difficult to fine-tune. Neural networks are just black boxes that cannot be properly controlled. I've also tried to use LLMs as knowledge bases, and they have a strong tendency to become inaccurate the moment you need more specific data. Not to mention they are literal plagiarism machines. Their use for academic cheating is widespread.
 

I don't think there is such a distinction. Both search engines and LLMs scrape the Internet for data, process and repackage it, and then present it to the user in response to their queries. Both benefit from using copyrighted works to do so. And both face the same issues in avoiding copyrighted material.

The way they operate is slightly different. Sure, Google needs to access the data to know which keyword to associate with a website, but it merely points toward a resource, while an LLM uses the content to increase its knowledge. Yet, some parallels can indeed be drawn. Initially, I was unconvinced by your take on the question, but I am finding more merit to it now.

I won't speak of ethics, since there will be no more consensus on ethics than on politics. It's strange that some people seem to be incensed because other people think doing something is OK when they don't. I mean, I understand that they don't like it, of course, but why are they trying to push their moral views on others?

So let's speak of law instead, a much more consensual topic to observe.

Interestingly, when search engines appeared, they were deemed unpalatable as well. Since they accessed the content of the websites to index them, and they didn't get the explicit agreement of the authors, their action could be seen as a violation of copyright. While the CJEU ruled broadly in favour of the "new" technology, a specific exception was added to copyright law to explicitly allow technical copies of a document in order to allow search engines to function. It happened in 2001 in the EU.

What can we learn from that? Another new technology was criticized at the time by authors, who complained that the search engines' economic model was to sell ads, while their audience came to them to be pointed to the authors' content without paying for it. Without content to point to, they said, the search engines wouldn't be useful and would make no money by displaying ads.

What was the outcome? While the ethical question is indeed the same (profiting from content without paying the author), the technological means were different. But a consensus was established to allow it (I can't speak for zones other than the EU, but I am thinking search engines are allowed all over the world). 25 years after that, I am not seeing many people expressing displeasure at the economic damage caused by search engines.

Back to ethics: did many people's ethical values change during that time, or did the usefulness and ubiquity of search engines prove they were more valuable, collectively, than protecting individual authors? Or did a generational change make search engines, which were "always there" for people born after 1985ish, more acceptable? Or did the fact that they are clearly legally allowed mellow the rage against them? Is such a change going to happen with AI?
 

I don't think your explanation is adequate.

Content moderation happens all the time. It always has to deal with the fact that there is too much content to review manually, and with the risk of accidentally enforcing rules against legitimate content.

There is no reason google cannot do this. And if it is too hard for any search engine to do, perhaps that says something about the ethics of search.
I don't even see why you care, considering you're pro-plagiarism since you support using generative AI.

Because their search engine is aiding and abetting piracy.
And LLMs are outright committing it. So again, why do you care, other than to try to misdirect from the actual problem? Which is individuals and companies using AI to avoid having to pay for real writers and artists.

Well yeah, this goes back to the "we as a society have decided piracy is ok" point.

I don't think there is such a distinction. Both search engines and LLMs scrape the Internet for data, process and repackage it, and then present it to the user in response to their queries. Both benefit from using copywritten works to do so. And both face the same issues in avoiding copywritten material.
And this is entirely wrong.

I'd wager most of what's on the internet is legitimate. Without specialized searches (e.g., actively trying to find books to pirate), you're going to get mostly legitimate sites. When it comes to non-D&D games, I almost never get anything other than legitimate sites about it.

Heck, let me check, basing it on some games that I own: SWADE Science Fiction Companion. I've been wanting this. First three results: PEG, Drivethru, and the original kickstarter.

Fabula Ultima: Need Games site, Drivethru, reddit post.

Cypher System. First three results: two different hits for the game's home page and a reddit post.

GURPS: Wikipedia, SJ Games, reddit post.

Troika!: Wikipedia, Melsonian home page, some website that has nothing to do with RPGs.

Root: Magpie games twice, reddit post.

Monster of the Week (I really need to actually work on my next adventure for this): Evil Hat, reddit post, wikipedia.

So yeah, most of the time, when you're looking for RPG material, you're not going to automatically get naughty sites. Do they exist? Can one pirate these books? Probably. Would you have to do a specialized search to find whatever site has them? Yep.

However...

LLMs are built almost entirely on pirated material. There may be some that were trained on and have access to only a curated list of material the creator/user actually owns and/or non-copyrighted material. Meta, ChatGPT, Gemini, and so on are not those LLMs.
 


Pretty sure no human would have made these.

[Images: three examples of AI art fails]


People judged those things as "ai fails".


At its simplest, it is:
  1. Humans give something a score.
  2. Give the AI a bunch of examples.
  3. Spend a long time calculating how to get the highest score with that data. More compute allows for more data and nuance.
  4. It can now rapidly and repeatedly do the thing that gives the highest score.

So if you want "imaginative" AI, then all we need to do is score a bunch of "imaginative" things. Though I expect a lot of disagreement on that, and there's not enough compute for individual tastes yet.

Note you can use negative scores too, so ranking stuff by how much "slop" it is works as well (e.g., a smooth car ride is 10 points, a fender bender is -500 points, and a totaled car crash is -10,000 points).
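The four steps above can be sketched in a toy form. Everything here is hypothetical and wildly simplified: the "human score" is a hand-written function (vowels earn points, 'z' takes a heavy negative score, playing the role of "slop"), and a random hill climb stands in for real gradient-based training.

```python
import random

random.seed(0)  # reproducible toy run

# Step 1: a stand-in for human judgement. Vowels earn a point each;
# 'z' is heavily penalized, illustrating negative "slop" scores.
def score(text):
    return sum(c in "aeiou" for c in text) - 5 * text.count("z")

# Steps 2-3: given a starting example, spend compute searching for
# whatever maximizes the score. Each round mutates one letter and
# keeps the change if the score doesn't drop.
def train(start, steps=2000):
    current = list(start)
    for _ in range(steps):
        i = random.randrange(len(current))
        candidate = current.copy()
        candidate[i] = random.choice("abcdefghijklmnopqrstuvwxyz")
        if score("".join(candidate)) >= score("".join(current)):
            current = candidate
    return "".join(current)

# Step 4: the "trained model" now reliably emits a high-scoring output:
# the all-'z' start ends up as all vowels.
result = train("zzzzzz")
print(result, score(result))
```

Flip the sign of `score` and the same loop converges toward maximal "slop" instead, which is the point about negative scores: the machinery doesn't care what the score means, only how to climb it.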

Wrong. There were whole movements, such as surrealism, cubism, and dadaism, exploring all kinds of things. Knowing history and art history is important.

[Images: paintings by Pablo Picasso, Salvador Dali, Max Ernst, Odd Nerdrum, Rene Magritte, Francis Bacon, and Hieronymus Bosch]

Most of these are from the 20th Century (Pablo Picasso, Salvador Dali, Max Ernst, Odd Nerdrum, Rene Magritte, Francis Bacon).

The last one, by Hieronymus Bosch, is from the 15th Century. AI has brought nothing of interest or value. All it can do is regurgitate the brilliance and genius of living, breathing, flesh-and-blood human beings. AI is modern Silicon Snake Oil.
 

I don't even see why you care, considering you're pro-plagiarism since you support using generative AI.
It's a new ethical issue. I'm thinking it through.

Thus far it seems to me that if I had real piracy concerns with AI, I'd also have to have them for google to be ethically consistent. I'm interested in whether I'm missing anything.
And LLMs are outright committing it. So again, why do you care, other than to try to misdirect from the actual problem. Which is individuals and companies using AI to avoid having to pay for real writers and artists.
I think the discussion has a broader scope than that. We can think it is unethical to use AI for creative material in commercial products while thinking it has loads of very useful applications elsewhere.

And this is entirely wrong
I'd wager most of what's on the internet is legitimate. Without specialized searches (e.g., actively trying to find books to pirate), you're going to get mostly legitimate sites. When it comes to non-D&D games, I almost never get anything other than legitimate sites about it.

---

LLMs are built almost entirely on pirated material. There may be some that were trained on and have access to only a curated list of material the creator/user actually owns and/or non-copyrighted material. Meta, ChatGPT, Gemini, and so on are not those LLMs.
I'm sorry Faolyn, but this statement seems to be outright false. The largest source of (weighted) training data for GPT-3, for example, was Common Crawl, which is not going to differ substantially from google's index. Maybe google is primarily indexing pirated stuff. But you rejected that, and in that case LLMs are not built almost entirely on pirated material.

I guess this gets back to why I care. I see some of the statements in this thread. And critics of AI just get stuff wrong all the time in their rush to attack it. The last few posts talk about burning books and say "AI has brought nothing of interest or of value".

I find this level of vitriol sad, honestly, because I know from my work that they do add a lot of value. That certainly doesn't mean that it is all good, or there are no legitimate ethical questions, or that we don't have to think seriously about how to integrate these tools into society without too many negative effects. Creators with concerns about the job market effects are right!

But this is being used to justify a lot of incorrect criticisms and a reflexive hostility that strikes me as fundamentally opposed to progress.
 

Interestingly, when search engines appeared, they were deemed unpalatable as well. Since they accessed the content of the websites to index them, and they didn't get the explicit agreement of the authors, their action could be seen as a violation of copyright. While the CJEU ruled broadly in favour of the "new" technology, a specific exception was added to copyright law to explicitly allow technical copies of a document in order to allow search engines to function. It happened in 2001 in the EU.
Yes this is exactly what I had in mind. I imagine we'll take a similar course with generative AI. As someone said early on, these concerns will seem quaint in 5 or 10 years.
 
