D&D General Has anyone seen this Wired article about using D&D to teach AIs?



DND_Reborn

The High Aldwin
Nice find!

I am not surprised and think it is pretty cool. My work with ITS (Intelligent Tutoring Systems) was nearly 15 years ago, but we had some tie-ins to AI, although very rudimentary.
 

NotAYakk

Legend
The "text adventure" AI gives me hope for an AI that generates towns, NPCs, and other details I can harvest, ideally from provided seeds and constraints.

Or better yet, one that generates a West Marches-style map on which I can overlay an adventure.

Or better still, one that takes the skeleton of an adventure as input and fleshes it out into a West Marches-style map.
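The seeds-and-constraints idea is easy to sketch: seed a local RNG so the same seed always re-derives the same town, and let constraints pin fields after generation. A minimal, hypothetical sketch (all table contents and names are made up for illustration):

```python
import random

# Hypothetical palette tables; a real generator would use much larger ones.
TRADES = ["blacksmith", "herbalist", "innkeeper", "fence", "scribe"]
QUIRKS = ["owes the thieves' guild money", "collects dragon teeth",
          "speaks only in questions", "is secretly a werewolf"]

def generate_town(seed, n_npcs=3, constraints=None):
    """Generate the same set of NPCs every time for a given seed.

    `constraints` can pin fields, e.g. {"trade": "blacksmith"} forces
    at least one NPC to have that trade.
    """
    rng = random.Random(seed)  # local RNG: reproducible, no global state
    npcs = [{"name": f"NPC-{rng.randint(100, 999)}",
             "trade": rng.choice(TRADES),
             "quirk": rng.choice(QUIRKS)} for _ in range(n_npcs)]
    if constraints and "trade" in constraints:
        npcs[0]["trade"] = constraints["trade"]  # satisfy the constraint directly
    return npcs

# Same seed -> same town, so the details can be re-derived at the table.
assert generate_town("greywater-4") == generate_town("greywater-4")
```

Because the RNG is seeded locally, you only need to write down the seed string, not the whole town.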
 



Dannyalcatraz

Schmoderator
Staff member
As I recall, the Terminator AI John Henry was taught role-playing games by Matt Murch. He then learned how to roll a d20 and get a 20 every time...

“Twenty.”
 

Theo R Cwithin

I cast "Baconstorm!"
Sounds interesting. Having an artificially intelligent GM for D&D, etc, would certainly be a nice option at times. But I'll be much less enthusiastic when researchers start working on D&D player AIs.

murderhobo_exe.jpg
 

Shiroiken

Legend
First they came for my bank's cashier, then they came for my car, then they came for my job.

Now they're coming for my role as DM? Where will it end? Time to wake up, sheeple, and rise up against the machines! ;)
I for one welcome our new robot overlords
 


MonkeezOnFire

Adventurer
Interesting to hear that AI-driven DMing is being explored, but as the article indicates, it's a long way off. AI can model the structure of language, but we've struggled to give it any comprehension of meaning or context. Knowing what the words you're spitting out actually mean is pretty key if any back-and-forth dialogue is to make sense.
 

I did see this a few days ago. I have a feeling an AI DM would turn out something like Mad Libs. Not that that's a bad thing; it might make for some memorable game sessions. Now I'm curious how my next game would turn out if I secretly kept a Mad Libs page behind the DM screen and used it for improv during roleplaying scenes.
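The Mad-Libs-behind-the-screen idea amounts to templates with randomly filled blanks. A toy sketch, with all templates and word lists invented for illustration:

```python
import random

# Hypothetical Mad Libs-style improv templates; {blanks} are filled at random.
TEMPLATES = [
    "The {adjective} {noun} demands {number} gold pieces before it will {verb}.",
    "A {adjective} stranger offers to {verb} your {noun}.",
]
WORDS = {
    "adjective": ["suspicious", "gilded", "weeping"],
    "noun": ["innkeeper", "gargoyle", "tax collector"],
    "verb": ["negotiate", "duel", "sing"],
    "number": ["3", "13", "300"],
}

def improv_line(rng=None):
    """Pick a random template and fill each blank with a matching random word."""
    rng = rng or random.Random()
    template = rng.choice(TEMPLATES)
    # str.format ignores unused keyword arguments, so one dict covers all templates.
    return template.format(**{k: rng.choice(v) for k, v in WORDS.items()})
```

Like real Mad Libs, the grammar always holds; whether the result is coherent is the table's problem.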
 


Aaron L

Hero
From what I understand, Mad Libs is about right: the algorithms can generate strings of comprehensible narrative for a while, but they always ultimately degenerate into gibberish.
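That degeneration is easy to see in a toy order-1 Markov text generator (a far simpler model than the neural networks in the article, but the failure mode is analogous): each step is locally plausible, yet the model remembers only one word of context, so longer outputs wander with no global plot. A minimal sketch:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that ever follow it (order-1 Markov chain)."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, n, seed=0):
    """Random-walk the chain: every adjacent pair occurred in the corpus,
    but there is no memory beyond one word, so coherence decays with length."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = chain.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

corpus = ("the wizard opened the door the door opened into darkness "
          "the wizard feared the darkness")
chain = build_chain(corpus)
```

Every two-word window of the output is something the corpus "said"; the gibberish only shows up at sentence scale.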

The problem with current AI research is that computer scientists have a bad tendency to think of the brain as just a computer and the mind as merely software that runs on it, when the reality is much, much more complicated than that, as any neuroscientist will tell you. They assume that throwing more processing power at the problem will eventually produce purely software-based sapient hard AI. But the brain/body/mind is an indivisible unit; sensory feedback from nerves in the gut and other parts of the body plays an extremely important part in cognition and emotion.

The mind isn't just an operating system running on brain hardware; consciousness is an emergent phenomenon that bubbles up out of the friction of all the various cooperating and conflicting brain structures, each trying to do its job and struggling with the others for priority. The mind arises out of what is essentially the brain talking to itself as a way to make decisions and work out which processes should take precedence at any given moment, and without all those structures alternately working alongside and struggling against each other, a mind just isn't going to emerge. Even the classic thought experiment of "How do I know I'm not just a brain in a jar being fed fake sensory data?" wouldn't actually work out: without a full body and all of its nerve endings constantly providing the proper sensory experience, there would be no way you wouldn't know something was drastically wrong.

Call of Cthulhu actually addresses this situation with Mi-Go brain cylinders, and it does a pretty decent job of it. Any character unfortunate enough to have the Fun Guys From Yuggoth remove their brain and put it in a jar loses more and more Sanity each day until they hit 0 SAN and go completely bonkers, their mind escaping the situation into delusional catatonia. But each prosthetic sensory apparatus added to the brain cylinder (cameras to replicate sight, microphones to replicate hearing, even placing the cylinder atop a mannequin torso to provide the small psychological relief of having some kind of "body" again) staves off some of the continuing Sanity loss, until enough prosthetic additions achieve a certain balanced stability. A character stuck in a brain cylinder would never really be sane again, but they would at least stop degrading into dissociative catatonic oblivion.

The only way to create an actual sapient AI with human-like intelligence would be to build a full artificial brain with all the structures analogous to a human brain's, and then put it in an artificial body with all the structures analogous to a human body's, in order to generate the proper sensory input and nervous feedback... and even then you're probably going to need organic components to provide the required flexibility, plasticity, and malleability of structure (etched silicon circuit pathways can't rewrite themselves on the fly to create new neural paths). At that point you're just reproducing a human being anyway, by building a biomechanical android; there's just no way all of that can be simulated through software alone.

In short, The Singularity just ain't gonna happen. Humans may be able to create some sort of near-sapient intelligence through software modeling/emulation of brain structure someday in the far future, but it isn't going to be human-like intelligence; it will be something completely new and different. In order to have human-like intelligence you need to have a human brain and a human body, with all the quirks and inefficiencies that go along with them. Because it's those quirks and inefficiencies that create the friction that generates consciousness.
 


gyor

Legend


Just because you can do something doesn't mean you should, and I think this is one of those cases. Beyond all the reasons you stated, just think how boring interacting with something like this would be; intelligence doesn't equal personality. As noble and well-intentioned as the search to create AI may be, I think the effort could be directed at other things, like helping actual humans live better, longer lives.
 

Coroc

Hero

Wow, I wish I could give you 10 upvotes for that essay; it sums up some lesser-known facts in a perfectly understandable way.

You should read Stanislaw Lem, especially the stories with Trurl and Klapaucius; I think you would enjoy them.
 

Aaron L

Hero
Thank you very much! :) I had the misfortune of coming across some writings about AI from some hardcore "Singularitarians" right after finding that Wired article (specifically some stuff about Ray Kurzweil), and it triggered my BS overflow gag reflex until I just had to write something about it. I am in no way a mind/body dualist, and I don't buy into quantum-consciousness woo or any kind of spirit or soul, but there is a whole lot more to mind and consciousness than processing power, and it's something software alone could never be made to emulate. The "hardware" structure of the brain and body as a whole is what makes it possible.

For some good reading on the subject, check this out:

And I will make sure to check out those books! I've heard of Lem, of course, but never gotten around to reading any of his work. I will definitely work on correcting that mistake! :)
 

Aaron L

Hero

There is also the problem of governments and organizations relying improperly on algorithms without understanding what they should and should not be used for. Algorithms are not people; they don't think like people and don't understand what people think, need, or want, so they will output bizarre, absurd, or impossible mathematically generated responses to human problems that only end up making things worse.

Then there are AIs and algorithms deployed on the theory that they will remove human prejudice from the equation, by people who don't understand that human prejudices have been baked into those systems by their creators; the supposedly impartial AIs end up doubly reinforcing the prejudices under a deeply flawed veneer of impartiality.

I am specifically thinking of those facial-recognition criminal-profiling algorithms that could supposedly predict criminal tendencies from nothing but an impartial mathematical analysis of facial characteristics... until it was shown that the "criminal" facial characteristics the algorithms learned to flag were actually just the common facial characteristics of minorities, since most of the "criminals" used to train the algorithms were minorities.
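The mechanism behind that failure is simple enough to show with a toy, fully fabricated dataset: if one group is over-represented among *recorded* offenders because it is over-policed, a naive frequency model faithfully learns the policing pattern and reports it as "risk." A minimal sketch (all numbers invented for illustration):

```python
# Toy illustration of bias laundering: the training label correlates with a
# proxy feature ("group") because of biased sampling, so a naive frequency
# model learns the proxy, not anything about individuals. Data is fabricated.

train = (
    # (group, recorded_offense) -- group B is over-policed, so its members
    # are over-represented among recorded "offenders".
    [("A", True)] * 5 + [("A", False)] * 95 +
    [("B", True)] * 20 + [("B", False)] * 80
)

def learned_rate(data, group):
    """P(recorded offense | group), exactly as the naive model sees it."""
    rows = [y for g, y in data if g == group]
    return sum(rows) / len(rows)

# The model "discovers" that group B is 4x as risky -- it has recovered the
# sampling bias in the data, with a false sheen of mathematical impartiality.
rate_a = learned_rate(train, "A")  # 0.05
rate_b = learned_rate(train, "B")  # 0.20
```

Nothing in the code is prejudiced; the prejudice arrives entirely through what got recorded as data.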

A lot of companies today are also using algorithms in Human Resources departments to analyze applicants and determine who will "best fit the company culture," relying on the AI to decide who even gets an interview at all... so plenty of people who would be perfect for a job are being rejected out of hand by computers, because companies rely on flawed AIs to filter applicants.

There are a lot of problems with over-reliance on and misuse of AIs and algorithms, and things have gotten steadily worse over the past decade or so as organizations lean on AI more and more without understanding its strengths and limitations. AI absolutely has its place, but that place is absolutely not as a substitute for human judgment. Unfortunately, that is exactly what many organizations are using it for.
 

