WotC Hasbro CEO optimistic about AI in D&D and MTG’s future

Charlaquin

Goblin Queen (She/Her/Hers)
A thread about D&D ... and AI.


I expect computers to pass the Turing Test in year 2025.

I have had a feeling about this for almost twenty years now. I was influenced by Ray Kurzweil's book, The Age of Spiritual Machines, which I read in the early 00s.

To be fair, my prediction is plus-or-minus a year or two. Kurzweil himself wobbled about this date plus-or-minus five years.

But I suspect the Turing Test will be passed next year, toward the end of the year.


If an AI can pass the Turing Test, it can genuinely function as a DM.
I’m considerably more worried about them becoming able to pass reverse Turing tests (like CAPTCHA). With how automated so many of our important processes are, once the computers can convince each other that they’re human, it won’t really matter if we can tell them apart from other humans or not.
 


Charlaquin

Goblin Queen (She/Her/Hers)
Is "People are considered resources, a dollar amount on a spreadsheet" really any different from any other industry? I've worked for companies large and small in IT, writing the software the company could not function without. The products that make the company competitive, that keep them in business. Without us, there would be nothing. Yet in virtually every case, IT was considered an expense to be kept to a minimum. All of us software developers, database administrators, quality assurance? We were just numbers on a spreadsheet, and the lower the numbers the better.

It can be frustrating, but it's just the reality everywhere. Managers frequently add minimal real value; salespeople make commissions selling projects with unrealistic timelines, then go on to sell more and collect big bonuses while IT goes on death marches to implement them; some manager reads an article, and the next thing you know we're supposed to integrate the new "silver bullet" technology that they don't understand.

There are exceptions to every rule, but practically all management in every company thinks of the people actually doing the work that pays their salaries as an expense to be minimized. At best, the people doing the work are an expense they try to get the best possible ROI out of. WotC is no different.
Yes, the fact that this is the attitude across all industries is exactly the problem. Tools are not good or bad, they are just tools, but they can be used in good or bad ways. But the incentives of our economic system drive people to use tools in bad ways if it creates greater short-term profit than using them in good ways. There are hypothetically ethical ways to use AI. But the potential for harm when used unethically is tremendous, and in the world we live in, that potential is an inevitability, because it makes money.
 

Kannik

Hero
The technology behind the LLMs that is freaking everyone out is much older than 2 years. It's just that it's only been noticed by the general public for a couple of years.
Much like many surprise overnight advances, it's been decades in the making.
Aye, but the important bit for the purposes of the discussion, and the data on transitions I was using, isn't the length of development; it's the impact and rapidity of uptake after the technology is introduced into the common marketplace.

shouldn’t the others then take longer than 30 years?
Likely yes. I was using the shorter value to hedge for "one of the most rapid uptakes" (so there could be others that were quicker), to show that even in that quicker case it was still orders of magnitude slower than the current generative AI influx. :)
 

Zardnaar

Legend
Yes, the fact that this is the attitude across all industries is exactly the problem. Tools are not good or bad, they are just tools, but they can be used in good or bad ways. But the incentives of our economic system drive people to use tools in bad ways if it creates greater short-term profit than using them in good ways. There are hypothetically ethical ways to use AI. But the potential for harm when used unethically is tremendous, and in the world we live in, that potential is an inevitability, because it makes money.

The last 40-odd years kinda proves your point.
 

Charlaquin

Goblin Queen (She/Her/Hers)
There's a lot of doomsayers out there, fearmongering sells. Let's just say that I'm skeptical that AI is going to be more of a threat to humanity than other humans already are.
AI is a threat to humanity because humans are. AI is a tool, used by humans, and humans will use it in ways that are tremendously harmful to other humans, in the interest of profit. It’s an absurdly powerful force multiplier to the exploitative and self-destructive behaviors we as humans are already engaging in.
 

Zardnaar

Legend
AI is a threat to humanity because humans are. AI is a tool, used by humans, and humans will use it in ways that are tremendously harmful to other humans, in the interest of profit. It’s an absurdly powerful force multiplier to the exploitative and self-destructive behaviors we as humans are already engaging in.

Interesting point a poster made earlier. Maybe we would be better off without the internet and computers.
 




Charlaquin

Goblin Queen (She/Her/Hers)
Interesting point a poster made earlier. Maybe we would be better off without the internet and computers.
There are a lot of technologies we would be better off without. I work with anthropologists who would argue, with tongue only slightly in cheek, that agriculture is such a technology. Maybe a world without computers or the internet would be a better one. Maybe it would be worse. But we don’t live in that world, and it doesn’t do us much good to fantasize about how much better or worse it would be. The world we do live in has computers and the internet, and is rapidly developing large language models colloquially called AI. Since this technology has such potential to do harm, it behooves us now to do as much as we can to try to minimize the harm it will do.
 
