Revisiting AI as a GM Support Tool

Starfox

Hero
And therein lies one of the issues - generative AIs will, kind of frequently, just make stuff up. You may point them at a source, but you cannot rely on them to keep only to the source.

So, while they can be powerful tools, at the moment you can't rely on their output for any topic you can't double-check yourself.
Which makes it perfect for made-up things like RPGs. :)
 


Reynard

Legend
And therein lies one of the issues - generative AIs will, kind of frequently, just make stuff up. You may point them at a source, but you cannot rely on them to keep only to the source.

So, while they can be powerful tools, at the moment you can't rely on their output for any topic you can't double-check yourself.
It was still faster than trying to parse a few thousand words of regulation text, but you are right: you have to use it like the tool it is, not assume it is perfect. When I run drainage calculations through software, I have to check those, too.
 

grimmgoose

Explorer
I find myself using ChatGPT less and less for brainstorming, but more and more for time-consuming tasks.

For example, I run my games using the Obsidian note-taking app; I like to have all my spells, statblocks, etc. in Obsidian so I can use its multi-pane layout and hover previews.

Prior to ChatGPT, I would have to copy over all statblocks, magic items, spells, etc. into Obsidian using Markdown. Now, I copy/paste the statblock, throw it into ChatGPT, and say, "Convert this into markdown for Obsidian". It even knows to use the dice-rolling nomenclature, so Obsidian can automatically roll the dice for me.
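If you have a big backlog, the same conversion can be scripted against the API instead of the chat window. Here's a rough sketch using the OpenAI Python client; the model name, prompt wording, and folder paths are just placeholders, not my exact setup:

```python
# Rough sketch: batch-convert copy/pasted statblocks to Obsidian Markdown.
# Model name, prompt wording, and paths are placeholders/assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "Convert this statblock into Markdown formatted for Obsidian. "
    "Use the inline dice syntax for any rolls so the Dice Roller plugin can pick them up."
)

out_dir = Path("vault/Statblocks")          # hypothetical vault folder
out_dir.mkdir(parents=True, exist_ok=True)

for txt in Path("statblocks").glob("*.txt"):  # raw pasted statblocks, one per file
    response = client.chat.completions.create(
        model="gpt-4o",                       # placeholder model name
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": txt.read_text()},
        ],
    )
    (out_dir / (txt.stem + ".md")).write_text(response.choices[0].message.content)
```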

TL;DR: I still find ChatGPT's creative capabilities lacking, but it has a good amount of utility in saving time on more straightforward-but-time-consuming tasks.
 

I recently needed to reskin some themes for D&D4E to make them specific to worshipping Moander, the god of rot and decay. I used ChatGPT to do so and it definitely saved me some time. Simple prompts like “show me a beast master theme from D&D4E, but focused on using a rot beast” did pretty well at getting the basics down, and when it did give stats they were usable and level-appropriate. But there were a lot of missing bits - it rarely said what action a power required, and it would call for Reflex saving throws and other mechanics merged in from other games. Still worth using, though.
 

And therein lies one of the issues - generative AIs will, kind of frequently, just make stuff up. You may point them at a source, but you cannot rely on them to keep only to the source.

So, while they can be powerful tools, at the moment you can't rely on their output for any topic you can't double-check yourself.

So, I work as an AI and ML architect for a health care company, and get to review a lot of government and regulatory documents on AI. One of the strong principles we adhere to is that nothing goes directly from an AI to a consumer (patient, clinician, etc.). Everything must be seen, usually edited, and then committed to by a person with authority to do so.

One commentator suggested that you think of an AI as an enthusiastic but somewhat naive intern who occasionally shows up drunk.

It’s a developing field right now, and techniques like RAG and few-shot examples are pretty effective at reducing hallucination, but the bottom line is that the core technology is essentially choosing a random string of words with probability proportional to how likely you are to see bits of that sequence somewhere on the internet. AIs have no concept of truth or falsehood. Their goal is simply to be plausible. Most often the truth is pretty plausible, so you get the truth, but truth is only a side effect of plausibility, and so not to be depended on.
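To make the "plausible, not true" point concrete, here is a toy sketch - not any real model, just three made-up candidate words with hand-picked scores - of what the sampling step does. Notice that nothing in it checks whether the chosen word is correct:

```python
# Toy next-token sampler: score candidate continuations, then sample in
# proportion to plausibility. There is no truth check anywhere in the loop.
import math
import random

# Hypothetical scores (logits) for what might follow "The capital of Freedonia is"
logits = {"Fredville": 4.2, "Paris": 3.9, "cheese": 0.5}

def sample_next(scores: dict[str, float]) -> str:
    # Softmax: turn raw scores into a probability distribution
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Pick a token with probability proportional to its plausibility
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(sample_next(logits))  # most often "Fredville", often "Paris", rarely "cheese"
```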
 

Umbran

Mod Squad
Staff member
Supporter
So, I work as an AI and ML architect for a health care company, and get to review a lot of government and regulatory documents on AI. One of the strong principles we adhere to is that nothing goes directly from an AI to a consumer (patient, clinician, etc.). Everything must be seen, usually edited, and then committed to by a person with authority to do so.

A wise policy, in general, especially if PII or PHI are involved.

One commentator suggested that you think of an AI as an enthusiastic but somewhat naive intern who occasionally shows up drunk.

Hm. Nice phrasing, that.
 

Celebrim

Legend
I've used it some to try to automate idea generation and some of the grunt work of filling out details about a setting. It's helpful, but far from being able to replace human guidance and content. It is probably more practical than rolling on random tables and does let you quickly build random tables of your own.
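For example, once it has generated a table for you, rolling on it takes only a few lines of Python (the entries below are just placeholders):

```python
# Throwaway roller for an AI-generated random table; entries are placeholders.
import random

encounter_table = [
    "A rot-touched beast stalks the party",
    "An abandoned shrine, recently defaced",
    "A merchant caravan with suspicious cargo",
    "Nothing but wind and dust",
]

def roll(table: list[str]) -> str:
    # Equivalent to rolling a die with one face per entry
    return random.choice(table)

print(roll(encounter_table))
```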
 

