WotC Hasbro CEO optimistic about AI in D&D and MTG’s future



Scribe

Legend
There's a lot of doomsayers out there, fearmongering sells. Let's just say that I'm skeptical that AI is going to be more of a threat to humanity than other humans already are.

What can do more physical damage, a machine, or a human?

What can process data faster, in larger amounts, a computer program, or a human?

The question isn't whether humans have the capacity to do harm. We clearly do, in infinite ways, and some even think their harm is kindness!

No, the issue is that in a connected world, no tool will have the potential to do harm faster than an eventual AI.
 

Rystefn

Explorer
A thread about D&D ... and AI.


I expect computers to pass the Turing Test in 2025.

I have had a feeling about this for almost twenty years now. I was influenced by Kurzweil's book The Age of Spiritual Machines, which I read in the early 00s.

To be fair, my prediction is plus-or-minus a year or two. Kurzweil himself wobbled about this date plus-or-minus five years.

But I suspect the Turing Test will be passed next year, toward the end of the year.


In the case of a Turing Test, the AI can genuinely function as a DM.
lol... chat bots have been passing the Turing Test for decades. We still don't have anything like actual AI, and we're not even heading in that direction in any real or meaningful sense. The success of the crappy trend-copying algorithms we have now and pretend are making art (success at making tech execs money, not success at doing anything useful or worthwhile) is already stifling real research into actual AI. Anyone who's interested in the concept of AI for any purpose other than stealing art should really be the one shouting the loudest about the not-actual-AI that's getting all the press these days.
 

Oofta

Legend
Supporter
What can do more physical damage, a machine, or a human?

What can process data faster, in larger amounts, a computer program, or a human?

The question isn't whether humans have the capacity to do harm. We clearly do, in infinite ways, and some even think their harm is kindness!

No, the issue is that in a connected world, no tool will have the potential to do harm faster than an eventual AI.
Sadly, people have always been good at causing harm to others, oftentimes inadvertently. I just don't see AI being any more dangerous than a number of other technologies or social trends.

I certainly don't see significant broad-based harm to humanity from a toy company's utilization of it.
 

Scribe

Legend
Sadly, people have always been good at causing harm to others, oftentimes inadvertently. I just don't see AI being any more dangerous than a number of other technologies or social trends.

I certainly don't see significant broad-based harm to humanity from a toy company's utilization of it.
I mean fair enough. Think bigger I suppose.
 

overgeeked

B/X Known World
I’m not sure trusting “AI” to build a techno-utopia is a great idea.

When asked to describe a painting of a woman with her back turned it rambled on about the subject’s face…which wasn’t visible. When asked to describe the mechanics of Sword World it went on and on about d20s…SW uses only d6s.

This tech is nowhere near ready to simply take over running or planning society.
 

Vaalingrade

Legend
The idea is to dig down to near enough to the heat of the lava. Not situate in a gaseous vent.

Corrosion and gases are still an issue.

The surprise is, nations don't seem to be trying to get deep geothermal energy. We are literally floating on a pool of virtually infinite energy.
Geothermal energy is nowhere near as easy as you think.

Digging down that far is nowhere near as easy as you think.

Any time you have a shower thought that makes you go "it's so simple! Why aren't we doing this already?", I can assure you there is a reason. Probably because it isn't simple.

Also, what we're currently calling AI doesn't think. The scientific usage that's actually showing promise right now is running simulations extremely fast, not coming up with ideas. And that's not the AI we're actually discussing, which is generative AI. This is the problem with stealing a word to dazzle shareholders who still think of the telegraph as a modern marvel.
 

FitzTheRuke

Legend
I agree we will see abusive uses of AI in the short term.

But in the long term, AI can create models to predict the likely outcomes of short-sighted business tactics. We can sense intuitively that runaway greed is a bad idea, and we can see real examples where a mainly middle-class society enjoys more happiness. But an AI model would be able to spell out in detail the consequences that make runaway greed stupid.

Like I said, I am an optimist about AI. But also a realist. We need to encourage the compassionate uses while minimizing the noncompassionate ones.
You know, the optimist in me really wants to believe that you are right here, and that AI could one day do the math that proves (as I have always believed) that short-term gain always leads to long-term loss, and that everyone loses under our current greed-first mentality (including those at the top, they just gain more in the short-term, which is what they care about now).

The big hurdles, under a scenario where AI "proves" this to be the case, will be: 1) Having them believe the AI, even when shown irrefutable proof; 2) Having them actually care - it often looks like rich people will happily lose money to make sure that others don't "join the club"; and 3) Having them act on that information, even if they believe it and care enough to reap the benefits of a different system.

Again, I don't personally worry about AI, I worry about humans.
 


FitzTheRuke

Legend
By feeding it new biased information until it produces a result that they want to hear?

Right! I meant actually act on it to follow its advice, but you hit the problem - the AI will have to keep spitting out the benefits of treating people with respect while being specifically asked not to. Even I, who believe it to be mathematically true that everybody wins when everybody wins, can't say for sure that an AI will come to that conclusion if humans don't want it to.
 
