D&D General: Would you buy an AI-generated Castle Greyhawk "by" Gary Gygax? Should you?

Whizbang Dustyboots

Gnometown Hero
I only remember one of my passwords: the one to get into my password manager. Every other password I have is just a random long string that the password manager generated for me and I'll never memorize them.
That always makes me worry about having a single point of failure, as there have been password managers that have been hacked. Still, probably better than me using "password123" for everything, like I do currently.
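For what it's worth, the "random long string" part is easy to see in action. Here's a minimal sketch using Python's standard secrets module; this isn't how any particular password manager actually does it, just an illustration of generating a password you'd never memorize:

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different long random string every run
```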
 


Whizbang Dustyboots

Gnometown Hero
And then the AI logically declares the shareholders are bad for company stability and should be removed, because it's looked at the data and seen one too many cases where shareholders have just throttled a company after making their profits.

CEOs and shareholders aren't going to give AI power the moment it declares they're surplus to requirements.
"Removing the shareholders" requires their acquiescence, since it means taking a company private. That happens, but it's much less common than going public, to my understanding.

Removing the CEO just requires activist shareholders, who already oust plenty of CEOs. Once enough of them become convinced that an AI can do a CEO's very expensive job for much cheaper, it's going to happen.
I don't think we can get rid of AI hallucinations, because I suspect that's actually a problem with the system itself. It's got to try to predict things and tends to just run off with what it has, which is limited simply by how the tech gets data fed into it.
Hallucinations will stop happening because future iterations of generative AI are going to fact-check themselves. There's a lot of competition in this space, and providing generative AI that produces results that can be more relied on is where the commercial incentives are. The AI companies are not going to say "well, December 2023 is where we'll stop developing," any more than Sony and Panasonic stopped with $1,000 DVD units as big as a desk.
AI art is never going to be great though
"Never" is a very long time. I remember when people on this board laughed at Apple wanting to revive the tablet computer, a product that "no one" wanted and no one would ever want. They were talking about the previous laughable takes on tablet computers (enormous devices that sat on your laps and sterilized you until their batteries ran down), not about the continued development.

Yes, AI today still has a lot of issues. AI a year from now will have a lot less. Ten years from now, a lot of us will have memory-holed our concerns about AI, as it will be so integrated into our everyday lives.
AI isn't going to produce "The real full Castle Greyhawk as it was going to be!" by looking at the pre-existing modules.
I don't think anyone on this thread has suggested otherwise.
It's going to look at those modules, go "This is how a dungeon is," and then try to slap them together with less grace than a randomly generated room in a roguelike.
I picked Castle Greyhawk deliberately for this thread.

Things worth noting:

1) There is no definitive Castle Greyhawk, and there never was. Gary was constantly in the process of adding content, removing content and changing what was happening in existing areas.

2) There was never a single document, as we'd understand it, bringing it all together in a single pile that he could have, in theory, handed to a developer and said "here, turn this into a salable product." That's one of the reasons TSR never did it, along with the draaaaama it was experiencing in the mid-1980s.

3) Even the unfinished Castle Zagyg line largely involved EGG sitting down, looking through his notes, trying to remember what he was thinking 20 years earlier, and creating new stuff that was him emulating his previous work. Even if he had lived to finish Castle Zagyg, it would have been EGG's simulation of the original dungeon complex. And honestly, a lot of it was pretty roguelike, even back in the day.

4) And like Yoko Ono with John Lennon, EGG has a widow who speaks for his estate today (when the courts aren't speaking on behalf of them both). She (or the courts) could give the green light to such a project, although I would be surprised if the very lo-fi Troll Lords, who seem to have had a genuine human relationship with EGG, would ever ask for an AI project to happen. Still, they've lost control of Castle Zagyg once before. It wouldn't be jaw-dropping if someone else was in charge of the property in 2028, when AI will be a lot better.
 


Whizbang Dustyboots

Gnometown Hero
I wouldn’t buy any of it knowingly, but I admit I could be fooled.

But don’t most of us usually start from the premise that necromancy is bad, or at least, morally unsavory?

Hey, Black Mirror had that heartwarming episode about how a woman had her late husband's social media presence scraped and turned into an AI inside a robot. I didn't see the end of the episode, but I bet it all turned out OK.
 

tetrasodium

Legend
Supporter
Epic
"Removing the shareholders" requires their acquiescence, since it means taking a company private. That happens, but it's much less common than going public, to my understanding.

Removing the CEO just requires activist shareholders, who already oust plenty of CEOs. Once enough of them become convinced that an AI can do a CEO's very expensive job for much cheaper, it's going to happen.

Hallucinations will stop happening because future iterations of generative AI are going to fact-check themselves. There's a lot of competition in this space, and providing generative AI that produces results that can be more relied on is where the commercial incentives are. The AI companies are not going to say "well, December 2023 is where we'll stop developing," any more than Sony and Panasonic stopped with $1,000 DVD units as big as a desk.

"Never" is a very long time. I remember when people on this board laughed at Apple wanting to revive the tablet computer, a product that "no one" wanted and no one would ever want. They were talking about the previous laughable takes on tablet computers (enormous devices that sat on your laps and sterilized you until their batteries ran down), not about the continued development.
I agree that hallucinations will be something AI develops beyond, given time. As for the luggables, though, HP (and others) are bringing those back too, like this 83Wh-battery-equipped portable thing. There's definitely a market for ridiculous battery life.
 

Mecheon

Sacabambaspis
"Removing the shareholders" requires their acquiescence, since it means taking a company private. That happens, but it's much less common than going public, to my understanding.

Removing the CEO just requires activist shareholders, who already oust plenty of CEOs. Once enough of them become convinced that an AI can do a CEO's very expensive job for much cheaper, it's going to happen.
Until Bobby can't get his yacht, in which case the AI suddenly disappears or finds itself replaced by a dumber model. CEOs will eat anyone below them, but they absolutely will have a killswitch ready to stop AI from hurting them.

Hallucinations will stop happening because future iterations of generative AI are going to fact-check themselves. There's a lot of competition in this space, and providing generative AI that produces results that can be more relied on is where the commercial incentives are. The AI companies are not going to say "well, December 2023 is where we'll stop developing," any more than Sony and Panasonic stopped with $1,000 DVD units as big as a desk.
The thing is, we're talking data analysis. We get that very same thing. "Hallucinations" is just a funny way of saying the AI has data in it that isn't expected. This is just a thing that happens with computers at the moment.

AI companies will keep going as long as they can convince tech companies to funnel money into them, insisting they're the next big thing like crypto and NFTs. AI will have more uses in what it's good at (i.e., data analysis), but this current fad of AI-driven content is producing stuff that is bad. So the image you get from using it is one of laziness, and they are doing basically nothing at the moment to shake that image.

"Never" is a very long time. I remember when people on this board laughed at Apple wanting to revive the tablet computer, a product that "no one" wanted and no one would ever want. They were talking about the previous laughable takes on tablet computers (enormous devices that sat on your laps and sterilized you until their batteries ran down), not about the continued development.

Yes, AI today still has a lot of issues. AI a year from now will have a lot less. Ten years from now, a lot of us will have memory-holed our concerns about AI, as it will be so integrated into our everyday lives.

How? What use is AI going to have in my everyday life? It's the realm of investors licking their lips at some new buzzword, but the only impact it's had on my life so far is the humour I derived from a website getting made a fool of by thinking we're getting Glorbo and playable Mantid in Warcraft. The best it can do at the moment is let me hit tab to type "Regards," in an email when I'm midway through the word, and that ain't a revolution. That's just autocorrect.

I picked Castle Greyhawk deliberately for this thread.

Things worth noting:

1) There is no definitive Castle Greyhawk, and there never was. Gary was constantly in the process of adding content, removing content and changing what was happening in existing areas.

2) There was never a single document, as we'd understand it, bringing it all together in a single pile that he could have, in theory, handed to a developer and said "here, turn this into a salable product." That's one of the reasons TSR never did it, along with the draaaaama it was experiencing in the mid-1980s.

3) Even the unfinished Castle Zagyg line largely involved EGG sitting down, looking through his notes, trying to remember what he was thinking 20 years earlier, and creating new stuff that was him emulating his previous work. Even if he had lived to finish Castle Zagyg, it would have been EGG's simulation of the original dungeon complex. And honestly, a lot of it was pretty roguelike, even back in the day.

4) And like Yoko Ono with John Lennon, EGG has a widow who speaks for his estate today (when the courts aren't speaking on behalf of them both). She (or the courts) could give the green light to such a project, although I would be surprised if the very lo-fi Troll Lords, who seem to have had a genuine human relationship with EGG, would ever ask for an AI project to happen. Still, they've lost control of Castle Zagyg once before. It wouldn't be jaw-dropping if someone else was in charge of the property in 2028, when AI will be a lot better.
The way the song works is completely different from making a whole book, though. They're not comparable situations.

Like, from a purely technical perspective, what the AI did there was extract a voice and negate the music part. That's time-consuming work you can do without an AI, but it helps. It then used what it had to simulate what his voice would be like for the parts it didn't have, and then basically used tech we've had for ages, tech that's probably behind Hatsune Miku and the like, to edit those collected sounds together into a song. People have done this with smaller stuff before, and it's not all that new. Morshu has, like, 20 seconds of dialogue in Legend of Zelda, Faces of Evil, and yet that alone lets someone come out with all sorts of crazy remixes.

But, here's the thing, that's something you can do because that's sound. Sound works like that. That's why deepfakes are so common. That's why Vocaloids exist: programs loaded with pre-made soundbites you can edit together to have Miku (or whatever other Vocaloid you have) sing a song. Here's Sand Planet for her 10th anniversary, I like it! But this is completely different tech from the side of AI that produces text automatically. They're not comparable. You're wanting to take written letters and expect a new song extrapolated from unrelated material, which can't be done.

You can't do that with text. You can't do that with writing. You can, at best, mimic someone's writing style, which is something people can just do in this day and age. There's nothing impressive about getting an AI to do it. You're just feeding it modules that already exist and hoping it'll produce something that matches up but is all new, but... it can't produce all new. It's an AI. It only has the data it's been fed. It'll remix it, shove it around, but it's inherently just that exact same stuff. Predictive text is just that, predictive, and it's frequently wrong.
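To make the "remixing what it was fed" point concrete, here's a deliberately crude toy: a word-level Markov chain. This is not how a modern LLM works, and the corpus string is made up, but it shows the basic limitation that the output can only ever recombine what went in:

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words that follow it in the source text."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int = 12) -> str:
    """'Predict' a sequence by repeatedly sampling from observed successors."""
    word, output = start, [start]
    for _ in range(length):
        if word not in chain:
            break
        word = random.choice(chain[word])
        output.append(word)
    return " ".join(output)

corpus = "the door opens into a dark hall the hall leads to a damp crypt"
chain = build_chain(corpus)
print(generate(chain, "the"))  # only ever recombines words from the corpus
```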

A far better thing would be getting the people we still have alive who played with Gary to work on producing something like that. AI will never produce something worthy of being Castle Greyhawk. If you fed it everything Gygax ever wrote, every single thing ever mentioned about it, it'll just give you a rug where you can blatantly see the parts of other dungeons stitched together to try to give you the shape of a new dungeon. Nothing will alleviate that or get around it.
 

Although EGG doesn't have quite the volume of published writing that some prominent psychologists have, he's probably got more out there (including posts on ENWorld and Dragonsfoot) than one might think.

More than an AI-generated Castle Greyhawk, I'd be interested in an "Ask Gary" bot trained on all the various Q&A material and interviews on the internet. I think it would be fun to interact with, and would offer the occasional insight.

If nothing else, it would be a much easier way to search through the many thousands of words Gygax published online.
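The search part doesn't even need a generative model. A minimal sketch using scikit-learn's TF-IDF ranking is below; the posts list here is obviously placeholder text standing in for a real archive of Gygax's online Q&A:

```python
# Keyword search over a hypothetical archive of forum posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "On the origins of the thief class ...",
    "Castle Greyhawk's lower levels were reworked constantly ...",
    "Thoughts on psionics in original D&D ...",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(posts)

def search(query: str, top_k: int = 3):
    """Return the posts most similar to the query, best match first."""
    scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
    ranked = sorted(zip(scores, posts), reverse=True)[:top_k]
    return [(round(float(score), 3), text) for score, text in ranked]

print(search("castle greyhawk levels"))
```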
 

Retros_x

Explorer
Current AI models can never reproduce human creativity, because they are statistical models on steroids. So no, I am not interested in novels or D&D adventures that are produced based on empirical probability.


A short side rant: We don't even know exactly how human brains work, and we are not even close to building AIs that come close to human brains. I always found it amusing when I read that current AI technology is "at the same intelligence level as an X-year-old human." No, it's not. Computers are really good at calculating, and doing it fast, but they are not very intelligent. You feed tens of thousands of animal pictures into an image classification AI, then let it classify a picture of a panda, and the result is: "This is 99.8% likely a panda." You show a three-year-old human child two or three pictures of a panda and it will immediately recognize any panda in most visual contexts. That is pattern recognition, not generation, but the same holds true for generative AI. Most professional writers have not read even a tiny fraction of the amount of text ChatGPT gets trained on, and yet their texts will feel more innovative and creative than any text ChatGPT could ever produce.
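For reference, that "99.8% a panda" style of output is just a probability distribution over class scores. A toy sketch with made-up scores (not a trained model) shows how the confident-sounding number comes out:

```python
import math

def softmax(logits):
    """Convert raw class scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up scores for three classes from a hypothetical image classifier.
classes = ["panda", "red panda", "raccoon"]
logits = [9.2, 3.1, 1.4]
for label, p in zip(classes, softmax(logits)):
    print(f"{label}: {p:.1%}")
# "panda" comes out around 99.7%, purely from the relative scores
```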

That's the reason why some computer scientists suggest getting rid of the term "artificial intelligence": it's an obfuscation, not really a precise, fitting name. "Cognitive technology," "complex information processing," "cognitive automation," and "applied optimization" are some of the contenders. I personally like to just be more specific and name the underlying principles, like "machine learning" or, even more specifically, "natural language processing." These of course don't sound as fancy (or as marketable) as "artificial intelligence," but they are more precise and demystifying (which is a good thing when talking about technology).
 

General_Tangent

Adventurer
More than an AI-generated Castle Greyhawk, I'd be interested in an "Ask Gary" bot trained on all the various Q&A material and interviews on the internet. I think it would be fun to interact with, and would offer the occasional insight.

If nothing else, it would be a much easier way to search through the many thousands of words Gygax published online.
I wonder if ChatGPT has already been trained on Gary's text, as it might be in the corpus of text that has been scraped from the web.
 

