WotC: Would you buy WotC products produced or enhanced with AI?

Would you buy a WotC product with content made by AI?

  • Yes

    Votes: 45 (13.8%)
  • Yes, but only using ethically gathered data (like their own archives of art and writing)

    Votes: 12 (3.7%)
  • Yes, but only with AI-generated art

    Votes: 1 (0.3%)
  • Yes, but only with AI-generated writing

    Votes: 0 (0.0%)
  • Yes, but only if- (please share your personal clause)

    Votes: 14 (4.3%)
  • Yes, but only if it were significantly cheaper

    Votes: 6 (1.8%)
  • No, never

    Votes: 150 (46.2%)
  • Probably not

    Votes: 54 (16.6%)
  • I do not buy WotC products regardless

    Votes: 43 (13.2%)

Status
Not open for further replies.



One example. I have a friend who has spent hundreds of hours on a campaign setting. He has original maps for every city there, extensive notes for encounters, questlines, lore. But it's all handwritten or hand drawn in his notebooks, not digitized, and not accessible to anyone else.

Typing these up and formatting them properly would be a massive value add. He's considered doing it but finds the amount of work daunting, given the other things going on with his life right now.

AI could make self-publishing this kind of thing way easier.
Now, in my opinion, using AI to compile notes is acceptable. That's not generative AI, and it's not stealing anything. Although I would say it would be better to actually compile the notes yourself (since you have to type all that info in anyway), or hire someone to do it--but I accept that there are things that would get in the way of either option.

Using AI to add details to those notes, and then making them pretty enough to sell, is stealing, though. So nope.

Because quite frankly, I don't know how detailed those notes are. Maybe your friend is the sort to go into excruciating detail--one of the GMs at my table has actually made a language, or at least some vocabulary and certain phrases (I don't know if they've developed grammar). I've gone fairly deep into completely unnecessary details for my settings. So I believe your friend's notes could be very extensive. But a lot of GMs don't have extensive notes. (And that's ignoring that a lot of GM advice is to only prep what you need, or even how to get away without prepping.)

So slippery slope time: how many of these AI-assisted self-publishers will be feeding the AI reams' worth of written information, and how many will have a few ideas and let AI do the rest? Because a lot of people have a few ideas; relatively few have a book's worth of material.

And that's still ignoring the visual art needed for such a book. If you can't draw and don't have the money to hire an artist, you could at least shell out a few bucks for clip art or spend some time looking for royalty-free non-AI art online.
 

I wonder if it is like humans learning to do art. Young children are actually better artists than high school students with regard to composition, color theory, and holistic presence. The high school students' focus on technical detail seems to disconnect them from the whole. But after a while, the artists internalize the techniques, the sense of holism and composition returns, and the artists have the best of both worlds.
It's also because, by high school, the potential artist has had a lot of people telling them "you're doing this wrong" or "it needs to look like this," and so they get discouraged.

In an art class I took as a kid, we were made to paint a particular landscape. There was a rock by a lake. For whatever reason, I painted the rock in the lake, and the teacher was absolutely furious at me.
 

My job requires me to know all the ways they can go wrong, so alas, inescapable. But hey, job security!
The fact that LLMs can make mistakes does not mean they are not useful. It means they need to be used with care.

So many of the criticisms in this thread boil down to the assumption that people are going to blindly apply LLMs, or blindly trust the output of LLMs, or are going to assume that LLMs are intelligent or infallible. And yeah, if you assume everyone is going to put their car in neutral and push it or try to control the steering using their feet then the car will seem like a dumb invention.

There is not much engagement with people using these tools wisely for things they are good at.

So slippery slope time: how many of these AI-assisted self-publishers will be feeding the AI reams' worth of written information, and how many will have a few ideas and let AI do the rest? Because a lot of people have a few ideas; relatively few have a book's worth of material.

And that's still ignoring the visual art needed for such a book. If you can't draw and don't have the money to hire an artist, you could at least shell out a few bucks for clip art or spend some time looking for royalty-free non-AI art online.
Some of this is addressed by what I said above--yes, lowering the barrier to content creation means there is more low-quality content. For every top-notch YouTube channel there is a lot of garbage out there. But YouTube makes it way easier for people who do quality work to broadcast to an audience.
 


This is really a tangent to the discussion...

With respect, we aren't looking at the "practical applications". What we talk about here are applications to visual art and prose text generation. These are consumer and RPG production applications. That's pretty limited practicality.

There's an entire scope of use of AI tools in the sciences and engineering that we never touch on--where there are no ethical issues, because the data is openly available scientific research data. When biomedical research or computer chip design people train generative AIs on research data in order to help solve problems, that's a highly practical ball of wax we never address.
Yes, I am talking specifically about generative AI, NOT narrow AI or medical diagnostic technologies. Apologies, I didn't intend to widen the scope of the conversation.
 


Let me restate my point, because I don't think it came across. I am not claiming humans and LLMs work identically. I am not claiming human creativity is rearranging data. I am not claiming there is anything similar about the way LLMs and humans function.

I'm making a point about creativity--that it is possible to make a highly creative work relying only on things that are already extant. I gave you examples of this.

Well, I use LLMs in my job daily, and they're a significant boon. Anyone who works in software or scientific computing will see similar benefits. It will speed the pace of scientific advancement. Transformers already contributed to the Chem Nobel, as I mentioned, and LLMs are being implemented broadly in the sciences, especially 'omic' fields. Personally, I think there is quite a bit to show for it.

This kind of attitude is what I have in mind when I say that legitimate concerns about job market effects are causing people to be overly cynical about the technology as a whole. It has to be useless, it has to not be creative, because the alternative is so uncomfortable.
I get what you are trying to say, I do.

I veered off of topic by mistake (and was reminded by Umbran), so I will try to get back on topic.

To that end, I don't think that comparing (as Umbran noted) scientific fields (which are peer-reviewed and typically open-source) to the publishing industry will do anyone any favors; it's probably best to keep them as separate discussions.

As for "this kind of attitude," well, techbros have done this to themselves by trying to cram the word "AI" into everything they can for profit. You can't really have a discussion about generative AI without somebody straying off course (it was me this time!) into other fields, because they have used the term so broadly that pinpointing what it is actually being applied to becomes harder by the day.
 

Yes, I am talking specifically about generative AI

My point is that "generative AI", as a technology, is not limited to English text and fantasy art with too many fingers! It can be applied to any form of data!

I, personally, as a physics graduate student, did research on using early forms of the technology (before the term "generative AI" even existed) on high energy physics data, to help configure particle detectors and their software.

Our website is only dedicated to one tiny corner of the possible use of the technology. Condemning the tech in general based on our corner is... logically flawed.
 
