With the right software an AI could create not only the geography of a fictional world but also describe the global climate and the movement of the tectonic plates (and consequently the areas with the most earthquakes or volcanic eruptions).
Okay there's a lot going on here in this one claim.
"With the right software" is purely speculative. Software to do this does not exist, and while someone may be working on it - it is entirely unconnected to Generative AI and specifically the Generative AI that Chris Cocks has championed. He's talking about ChatGPT but its also trained on all the historic rulebooks, modules, novels, etc.
"describe the global climate and the movement of the tectonic plates (and consequently the areas with the most earthquakes or volcanic eruptions)" the actual processing to do this is not within the scope of machine learning - we only have data from one planet - Earth. We don't have the masses of data required to allow machine learning to generate predictive outcome rules, and adapt them. This kind of software would need to be designed by specialists for this special role.
Also I'm not convinced it would add any value to a campaign where there are dragons. Just saying.
This is like taking an old car to a hairdresser and then explaining that with the right tools, workspace, training, and experience, this hairdresser could convert the broken-down gasoline-dependent car into a self-driving EV. It is not technically untrue, but it is not within the scope of reasonable expectations, and it ignores everything else that would have to happen to make it possible.
The AI could be useful for underage or rookie DMs who can't yet create new stories for their next games.
I'm not convinced anyone has ever had a first session go wrong because they got the information about tectonic plates wrong. I am sure people have had first sessions go wrong by being overloaded with information that they don't understand, being given bad advice or advice that isn't relevant to their particular group, etc.
And just to be super clear on this point: there are things that machine learning could do for D&D, but it's not going to help individual DMs. The tasks it's more suited for are things like analysing all the data in D&D Beyond character sheets and adventure logs, then coming back with discoveries our biases prevent us from looking for - making the people who make the game aware of trends that may or may not be interesting to them.
Because a question like "at what level do Rogues start frequently making skill checks for Stealth and Sleight of Hand?" is obvious, but we might not think to check how many Paladins take the Skilled feat at level 8 so that they can add Arcana.
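To make that concrete, here's a minimal sketch of the kind of aggregate query I mean, in Python with pandas. The column names and the tiny dataset are invented for illustration - this is not a real D&D Beyond schema:

```python
import pandas as pd

# Hypothetical character-sheet export; the columns and values are
# invented for illustration, not a real D&D Beyond schema.
sheets = pd.DataFrame({
    "character_class": ["Paladin", "Paladin", "Rogue", "Paladin"],
    "level": [8, 8, 8, 7],
    "feats": [["Skilled"], [], ["Alert"], ["Skilled"]],
})

# How many level-8 Paladins took the Skilled feat?
paladins_8 = sheets[(sheets["character_class"] == "Paladin")
                    & (sheets["level"] == 8)]
took_skilled = paladins_8["feats"].apply(lambda feats: "Skilled" in feats)
print(f"{took_skilled.sum()} of {len(paladins_8)} level-8 Paladins took Skilled")
```

The point is that questions like this get answered across the whole dataset at once - which is useful to the people designing the game, not to an individual DM prepping a session.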
* There are serious risks; for example, a DM could want a story about a rebellion against slavery, but WotC's AI could refuse the theme as too controversial.
Other serious risks:
- The AI will derail the adventure by mixing things up - telling the DM to teleport a party who are on a light-hearted adventure to rescue a gnome into Tomb of Horrors, then creating a Wanton Wench to advise them their job is now to kill Strahd.
- The AI may get fully on board with the slavery campaign, but advise that the slavers are Lawful Good and the slaves Chaotic Evil - telling the DM to make the adventure about suppressing the slaves, who need to remain slaves for their own good.
- The AI may get confused midway and start writing about how the PCs actually deserve to be slaves and should be grateful to the slavers for helping them.
- The AI might be right on board up until the final act, then decide it'd be a cool plot twist if the slaves turned on the PCs, due to a "we wanted to be slaves" plot it invents on the spot - creating a campaign which frames itself as being about helping people but just leaves you with a pile of corpses.
- The AI may get very confused by its training data and make the campaign simply nonsensical.
Try asking DeepSeek r1 (671b) to make a D&D 5e encounter (you can add more specifics); the results will probably be interesting. But the real gem for new DMs is in the 'thinking' process - that is absolute gold. Not only do they have decent output (probably better than they could make at that time), they are also learning how that output is made.
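(For reference, here's roughly what that experiment looks like as code - a minimal sketch against DeepSeek's OpenAI-compatible API. The endpoint, model name, and `reasoning_content` field follow DeepSeek's public docs at time of writing; the prompt and API key are placeholders.)

```python
from openai import OpenAI

# Minimal sketch of the "ask r1 for an encounter" experiment.
# Endpoint, model name, and the reasoning_content field follow DeepSeek's
# public OpenAI-compatible docs at time of writing; verify before relying on it.
client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",  # placeholder
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # the hosted r1 model
    messages=[{
        "role": "user",
        "content": "Create a D&D 5e encounter for four level-3 PCs "
                   "in a haunted lighthouse.",
    }],
)

message = response.choices[0].message
print("--- 'thinking' trace ---")
print(getattr(message, "reasoning_content", "(not returned)"))
print("--- encounter ---")
print(message.content)
```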
You know what works more reliably and even better than this? Just like, looking at existing guides written by actual humans with lived experiences who share opinions, ideas, etc.
Seth Skorkowsky has, at time of writing, 324 videos up on gaming - everything from specific systems to particular modules. He's one guy.
You can also try communicating with your fellow humans - there are countless platforms to talk about nerd stuff and share knowledge and ideas.
Again, I encourage people to look at the actual studies I posted - AI kinda sucks at doing anything specific. "will probably be interesting" is highly speculative and works entirely on the premise you claimed you don't abide by - believing the sales pitch. DeepSeek can't pick an "interesting" idea because it doesn't know what is interesting to you.
One of the funniest things (to me) on Reddit is a post where someone announces they've been using the "rules sheet" someone made for Google Gemini to make good D&D content... the rules sheet is about 8 pages long, because it has to be specific about literally everything - because, it turns out, they have a very particular idea of what "good" content is.
Imagine how much time people doing this spend thinking of ways to write prompts for the AI rather than thinking about stuff that's actually interesting to them.
And so, with that considered, why would anyone assign any significance to the thinking process? It's not a human thinking process, it's not the process of someone with lots of experience, and it doesn't show you a compelling clash of ideas. It's just a search history from a computer that doesn't know anything until it starts searching. There's no reason to believe it vetted its sources, searched for the best, etc. It just grabbed what was at hand.
I mean, I tried the experiment, and all it did in the "thinking" section was babble out generic ideas - many of which were insultingly obvious and kind of condescending if you were to give them to a newbie DM. Also, it doesn't follow its own ideas/advice - so, like, there's an endorsement. If you do it multiple times you quickly find that it falls into a pattern, because it only knows one way to structure things because... it's AI.
It also actively encourages bad DMing practices like planning the players' actions for them (e.g. "The druid will wildshape into a mastiff to track the bandit"), gives strategic advice that just isn't sound (assuming the PCs will use spells and abilities they might not have), and makes random encounters that would totally unbalance or derail your plot.
It's not good. Just as various crypto blockchains are not an authority on ownership and the NFTs on them are worthless as such, so generative AI is not an authority on anything. It can't know what good encounter design is; it can only try to correlate information, and sadly there's a lot more bad advice that labels itself as good than there is accurately labelled good advice. Remember, these things need terabytes of training data.
The reality of it is that it's not very good at anything, other than being a thing for people to project their dreams and fantasies onto. The sales pitch is that you won't have to do research, thinking, etc. for yourself - but it's actually going to have someone or something else do it for you; it's just going to pretend it did it, and smile.