> I bet if we feed the AI Enworld, it would become the best DM ever! All of our collective knowledge poured into it would level it up to twenty instantly. It would be like giving a fighter a +6 sword!

No, by and large, all ENWorld would do is give the AI the ability to dispense advice on how to game. The AI lacks comprehension, though, and can't actually put any of it into practice the way a human might. Even Actual Plays wouldn't really help, since an AI trained on those would probably just try to tell you a story (rather than run a game for you).
> To be honest, I'm not super interested in metaphysical arguments based on opinions about what is feasible. I'm more interested in actual experiments.

It's perfectly reasonable to insist on experimental results... but then you can't just wave off GPT's failures and glitches during those experiments. You have to demonstrate that they can be fixed in a programmatic way (i.e., one that doesn't rely on a human devising ad hoc solutions to nudge the bot back on track).
> No, by and large, all ENWorld would do is give the AI the ability to dispense advice on how to game. ...

An ENWorld-trained AI would start off sharp as a tack, and then dive so far off track you'll never find the plot again.
> It's perfectly reasonable to insist on experimental results... but then you can't just wave off GPT's failures and glitches during those experiments. ...

No, I don't have to demonstrate that at all, because I am not interested in whether or not ChatGPT is a true AI, or whether it can perfectly emulate a human being as a DM. I am interested in what it can potentially be used to do. In fact, I have specified in this thread (and in real life to my students) that it should be looked at as an assistant to human beings, not a replacement.