WotC Hasbro CEO optimistic about AI in D&D and MTG’s future

Yaarel

He Mage
I don't think every story needs to be created with a message in mind, certainly not a moral one. Not every story is about right and wrong.
Technically, every story has a message. At least every modern one does.

According to the story structure of plot points, there is a resolution after the climax. Toward the end of the story, the heroes are actually defeated during the climax. It is the "message" of the story that saves the heroes.

Even in a tragedy, the "message" of the story explains why the heroes weren't saved. For example, in the tragedy of Romeo and Juliet, the message of the story is something like, "family feuds are bad." After the deaths of Romeo and Juliet, the feuding families reconcile so this death of innocents never happens again. Conversely, if the families had reconciled earlier, this message of the story would have saved Romeo and Juliet with a happy ending.
 


Charlaquin

Goblin Queen (She/Her/Hers)
Being about something is not the same as having a message. Messages are about persuasion, trying to get the audience to believe or to do something. Many stories have messages, but plenty of others don't, and some of those are quality stories just as some stories with messages are quality. Having a message is not required for a good story.
Then you are using a particular definition of the word message, which I would say is pretty different from the common usage. I’ll re-evaluate your previous comments in light of this definition.

I don't think every story needs to be created with a message in mind, certainly not a moral one. Not every story is about right and wrong.
I agree, not every story is trying to persuade its audience of something.

Which is a great goal, one I imagine everyone wants. I certainly want that too. I just don't think everyone needs to be actively pushing for it, and (more importantly) that people who don't are bad folks. Our polarizing society really seems to force a "them or us" narrative on just about everything.

I care far more about how the content I receive and pay for will work in my game than how its presentation affects the world at large. That is, after all, what the product was made for. Same with any creative media. The most important thing to me for a story is that it's a good story, and it is my opinion that creative works that focus first on creative quality and then look to message are more successful, more entertaining (they are intended as entertainment), and ultimately will be more likely to spread whatever message they're trying to convey more successfully.
Here, I don’t think my overall argument changes. Good stories are about something - sometimes they’re about persuading the audience, sometimes they’re about other things. Regardless, the quality of the story can’t be meaningfully separated from what the story is about. A story with a message, as you define it, would be bland if it focused on the message to the exclusion of its narrative, and it would be bland if it focused on the narrative to the exclusion of its message. Both need to be working in tandem to create a quality work. Likewise, if the story is about something other than persuading the audience, the narrative and the… let’s go with “theme” since we’re using the word “message” in a particular way… The theme and the narrative need to be working in tandem, and if either is focused on to the exclusion of the other, the result is likely to be bland.
 

Cadence

Legend
Supporter
From my understanding, they mostly know what other sources have said is correct. So, theoretically, if you just feed in good data, you'll just get good data (assuming your programming will let the LLM say "I don't know").

The problem is that the feeds often contain data of radically different validity, and the way to sort it out is--questionable.

Chat GPT doesn't seem to be a big fan of saying it doesn't know something, especially when questioned about it. Having it do so more - but only when it really doesn't "know" - would be nice.
 

Charlaquin

Goblin Queen (She/Her/Hers)
From my understanding, they mostly know what other sources have said is correct. So, theoretically, if you just feed in good data, you'll just get good data (assuming your programming will let the LLM say "I don't know").

The problem is that the feeds often contain data of radically different validity, and the way to sort it out is--questionable.
And, the more the LLMs fill the internet with wrong information, the more wrong information is out there for them to get tripped up on, making them more likely to spread said wrong information, creating a feedback loop that results in ever-degrading slop.
 



Thomas Shey

Legend
Chat GPT doesn't seem to be a big fan of saying it doesn't know something, especially when questioned about it. Having it do so more - but only when it really doesn't "know" - would be nice.

That's the second half of the problem; you need the LLM either to be good at separating the wheat from the chaff (not impossible, but not particularly something current versions do) or to be limited in the data it will use and say it doesn't know (which also isn't done).

In other words, it's a problem with the criteria the LLMs have, but there are bad incentives against properly fixing them.
 

Thomas Shey

Legend
If only what makes money wasn’t the main determining factor in what media does and doesn’t get made. Late-stage capitalism strikes again!

Eh, in terms of what gets distribution, that well pre-dates late-stage capitalism. If anything, we're better off now, because digital reproduction is so cheap that some distribution has anywhere from minimal to zero functional cost. Just don't expect it to get particularly broad appreciation if it doesn't fit that zeitgeist. But making money is not an intrinsic limiter now (it may be a practical one if you can't afford to spend the time, but there's an awful lot of art, music, and writing out there on the Net that isn't dependent on making money).
 

Scribe

Legend
AI benefits the masses, it is the few that want to get money that are against it.

This has to be one of the funniest things I've ever read.

I'm not protecting myself by thinking AI is an impending disaster.

I'm not an artist, the AI generated art is not replacing my work.

I could quite literally be fired on Monday, and considering the life choices I've made, I, and the wife and son (and dog, he's old though...) I fully support financially, would be fine.

I do not benefit from AI being stopped or heavily regulated, so please, I can only laugh so hard.
 

Yaarel

He Mage
Chat GPT doesn't seem to be a big fan of saying it doesn't know something, especially when questioned about it. Having it do so more - but only when it really doesn't "know" - would be nice.
Heh, in my uses of ChatGPT for factual stuff, after questioning it, it often ends up apologizing for misinformation.
 
