How well can a dedicated RPG GenAI perform?

FrogReaver

As long as i get to be the frog
And you won't see a lot of difference between "seems like a friend" and "is a friend" until after you find out they've been skimming from your wallet. The difference is in the details.

A passage can seem coherent when each sentence looks grammatically sound and reads well enough on its own. But the sentences fail to be actually coherent when strung together if one does not logically follow from another, information is missing, there is repetition that is not internally consistent, or topics change without explanation.



I do not have an actual example document at hand, and this discussion is not important enough to me to go and build one myself to support my position.

I have, however, read narratives created by ChatGPT, for example. You may start with Greg, Sam, and Beth as characters, but on page two Sally appears with no note of who she is or how she got there. Greg's hair color changes several times over the course of several pages, and in one paragraph we are told that they are driving along in Beth's VW, and a page later they are standing in an ice cream parlor with no transition.

The AI can't form a narrative in which event A causally leads to B leads to C. When it is forming paragraph 17, it is not referring to any prior paragraphs for content, context, or continuity, because an LLM doesn't construct its output based on content, context, or continuity. It doesn't have a concept of causality, of an "event" in a narrative that has "impact and consequences" later in the narrative. It doesn't have the concept of a character that is a person who needs to have consistency of personality, behavior, desires, etc.

The LLM is effectively only doing short-range pattern matching of words and punctuation. The semantic content isn't relevant.



You don't see the problem???

Well, let me ask you - how many people marry their laptops? Pretty much none, right? And the ones that try to do so would be looked at as... strange, right? Ergo, the person-to-person relationship is not the same as the person-to-machine relationship. Therein lies the problem.

What people expect from other people, and what they expect from machines are not the same. We know that our fellow humans can be unreliable. But, we tend to expect our machines to be reliable at what they do. That's pretty much the entire point of having a machine to do stuff for you rather than have another human do it.



I've already laid that out in broad strokes - the LLM does not have abstract thought, or understanding of the content of the words it is putting out.
@Gorgon Zee brought forth the main points of my would-be reply much more eloquently than I could, so I'll leave it at that.

One thing I would push back on toward you both is that without a strong theory of human cognition/thought it’s hard to really say whether machines think/understand or not. It’s been an open question in computer science for a long time. Probably the most well-known proposal on the question has been the Turing test. But even at best that would just show an amazing similarity of machine responses to human responses, something ChatGPT already amazes us with. Anyway, back to the main point. We really don’t understand enough about thought to properly define it in the first place. And then there’s always the possibility of treating human thought as if it were the only type of thought.

I mean, what would a thinking machine look like? What could it do to prove it thinks? What can you do to prove you think?

What does the human brain do to understand language? Can we even explain that? Etc.

I think we are a long way from definitive statements on almost any of this.
 


Umbran

Mod Squad
Staff member
Supporter
One thing I would push back on toward you both is that without a strong theory of human cognition/thought it’s hard to really say whether machines think/understand or not.

Look, I'm sorry, but for generative AI machines at this time, it is easy to say whether they think/understand or not. They do not.

They are not magic. What goes on inside them is very well understood - maybe not by you, but by humans who make them. There's no mystery there.

I mean, what would a thinking machine look like? What could it do to prove it thinks? What can you do to prove you think?

I can display complex behavior, solve novel problems that require understanding of the operation of the universe around me, have a sense of self, a sense that others also have similar senses of self, form abstractions and make deductions based on my own observations. And, as a really cognitively advanced critter, I can communicate about all those activities without even being asked, and spontaneously choose to display the pain of human cognition in the form of interpretive dance.

Human and animal cognitive sciences don't know how everything works, but they can still say a lot about cognitive processes.

Meanwhile, an LLM is hard-pressed to pass a Turing Test.

What does the human brain do to understand language? Can we even explain that? Etc.

The answer is, "somewhat, yes, we can". There are elements that are still unknown. Like most of science.

On the other hand, we know EXACTLY how the LLM is processing information.
 

FrogReaver

As long as i get to be the frog
Look, I'm sorry, but for generative AI machines at this time, it is easy to say whether they think/understand or not. They do not.

They are not magic. What goes on inside them is very well understood - maybe not by you, but by humans who make them. There's no mystery there.



I can display complex behavior, solve novel problems that require understanding of the operation of the universe around me, have a sense of self, a sense that others also have similar senses of self, form abstractions and make deductions based on my own observations. And, as a really cognitively advanced critter, I can communicate about all those activities without even being asked, and spontaneously choose to display the pain of human cognition in the form of interpretive dance.

Human and animal cognitive sciences don't know how everything works, but they can still say a lot about cognitive processes.

Meanwhile, an LLM is hard-pressed to pass a Turing Test.



The answer is, "somewhat, yes, we can". There are elements that are still unknown. Like most of science.

On the other hand, we know EXACTLY how the LLM is processing information.
To summarize your claims - we know exactly how the LLM is processing info. We don't fully know how we do. So the open question would be, why can't the underlying processes be the same, or at least extremely similar?

I think you are ruling things out that we cannot yet.

They are not magic. What goes on inside them is very well understood - maybe not by you, but by humans who make them. There's no mystery there.
This part comes across as incredibly rude. I didn't claim they were magic or that we didn't understand them. That's not the basis for my thoughts.
 

aramis erak

Legend
What I learned of human cognition in my 2008 master's program in education was very clear: we know the large scale, and we know the fine scale, but we don't know how the brain generates consciousness, or if consciousness is even real. Nothing I've seen since suggests we understand how to connect the neurons' electrochemical arithmetic to higher-level cognition. We do know that neurons do analogue maths via electrochemistry... and LLMs use mathematical multiplication to emulate neurons. Can digital neurons generate consciousness? Until we know whether physical neurons do, we can't say.

We don't know if we're just a specialized LLM or not. We do know that neuronal connections do the same kinds of calculation as LLMs do... save that it's mixed electrical and chemical signalling, rather than the binary multiplication of LLMs. Analog calc vs digital. But still just calculations.
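For what it's worth, the "just calculations" point is easy to make concrete: a single artificial neuron of the kind LLMs stack in enormous numbers is nothing but a weighted sum pushed through a squashing function. A minimal sketch, with the weights and inputs made up purely for illustration:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs plus a bias,
    squashed by a sigmoid - the digital stand-in for the analogue
    electrochemical summation a biological neuron performs."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Made-up numbers, purely for illustration
print(neuron([0.2, 0.7, 0.1], [1.5, -0.8, 0.3], bias=0.05))  # ~0.45
```

Stack billions of these, plus the attention arithmetic, and you have the whole mechanism; whether that is enough for consciousness is exactly the open question above.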

Oh, and supposedly, the current-gen models have passed the Turing test well enough for the execs at Hasbro...
I'll note that I don't know this source... and don't quite trust them... but if true, get used to AI in D&D.

Oh, and as for passing the Turing test? This is close enough for me... The source, Gary Explains, is fairly reliable and credible. It's an example of skilled use of prompts. Something a good programmer could do in a couple of hours. Something that would probably take me a solid week. (Wait, no; thanks to Python slice notation, I've programmed an RPN interpreter with memory and die rolling... which is about the same complexity of task.) Sure, it's not conversational... but it's doing a complex programming task from conceptual-level directives. Including debugging. Only to a point. It's doing better than just predicting the next word... because it is translating concept into code.
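For a sense of scale, that interpreter task looks roughly like the toy sketch below; the operator set, die-roll syntax, and one-slot memory are my own assumptions for illustration, not the actual program described above:

```python
import operator
import random

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def rpn_eval(expr, memory=None):
    """Evaluate a whitespace-separated RPN string, e.g. 'd6 d6 + 2 *'.
    'dN' rolls an N-sided die; 'sto'/'rcl' store and recall one value."""
    memory = memory if memory is not None else {}
    stack = []
    for tok in expr.split():
        if tok in OPS:
            a, b = stack[-2:]            # slice notation grabs both operands
            del stack[-2:]
            stack.append(OPS[tok](a, b))
        elif tok.startswith('d') and tok[1:].isdigit():
            stack.append(random.randint(1, int(tok[1:])))   # die roll
        elif tok == 'sto':
            memory['m'] = stack[-1]                         # store top of stack
        elif tok == 'rcl':
            stack.append(memory.get('m', 0))                # recall memory
        else:
            stack.append(float(tok))
    return stack[-1]

print(rpn_eval("d6 d6 + d6 + 2 +"))   # 3d6 + 2
```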

Translation matrices are a special case of LLM, but Google had an issue with the LLM in Google Translate generating a pseudo-code for internal use between the decode and re-encode steps (2017 or so, IIRC). And that resulted in developing a new model with safeguards against it.

Many of the LLMs have unexpected behaviors. And there have been "major advancements" (OpenAI exec in an interview last week) in the last two months. OpenAI o1 (it's not GPT-o1). o1 is...
We trained these models to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.

They also claim performance comparable to postgraduate students...

And they speak of it in terms of understanding and thinking. The engineers, at least, have personified the o1 model; the way their press reads, it's passed the Turing test of its own engineers.
 

I'm surprised one person hasn't entered the conversation, or maybe he has, I dunno 🤷‍♀️
A point of clarification here. First, LLMs do base their predictions on previous paragraphs and do indeed try for context and clarity. This is the "context window" which you hear a lot about. For generating stories, you would need a pretty decent context window, so if you use a small window (e.g., ChatGPT 3.5) it will forget pretty fast. If you use a model with a large context window (say a 128K one) it will be generating the last paragraph of your Great Gatsby-sized novel using every bit of context you entered before.
So, this part stood out to me because I have started to use ChatGPT to work on my book, and even use it to see how it would read if I did it from a different viewpoint. I would take a section from my WIP and tell it to "rewrite the following but from a third-person viewpoint".

The one time I applied it to gaming needs was when I ran what I had typed up for a setting through it; it pointed out a continuity error and asked which one I wanted to go with after giving me the two choices. I posted about it here: I used ChatGPT for my setting and it was awesome
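On the context-window point quoted above, here's a rough sketch of what "fitting the story into the window" means when calling a chat model through an API; the window size and the crude token-count stand-in are assumptions for illustration:

```python
MAX_CONTEXT_TOKENS = 128_000        # assumed window size, for illustration

def rough_tokens(text):
    # Crude stand-in; real tokenizers count differently
    return int(len(text.split()) * 1.3)

def build_prompt(story_so_far, instruction):
    """Keep as many of the most recent paragraphs as fit in the window;
    anything older simply never reaches the model."""
    kept, used = [], rough_tokens(instruction)
    for paragraph in reversed(story_so_far):        # newest first
        cost = rough_tokens(paragraph)
        if used + cost > MAX_CONTEXT_TOKENS:
            break                                   # older text is dropped
        kept.append(paragraph)
        used += cost
    return "\n\n".join(reversed(kept)) + "\n\n" + instruction

print(build_prompt(["Chapter one...", "Chapter two..."], "Continue the story."))
```

Whatever gets trimmed off the front is invisible to the model, which is where the "forgets pretty fast" behaviour of small-window models comes from.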
 

Squared

Explorer
This is a really interesting thread. It got me thinking, given the strengths and weaknesses, what sort of gameplay could generative “AI” support.

I don’t think that generative programs will be able to replace or even significantly help either traditional or narrative GMs. The complexity and social interactions would, I think, ultimately be beyond the tech for the foreseeable future.

Instead, what if we leaned into the strengths and away from standard advice on how to run an RPG? What I am referring to is what is normally called “Adversarial GMing”. Such game play is normally socially damaging and pretty silly, as the GM can win at any time.

But what if instead it is humans versus the machine? The machine is constrained by the rules of the game in a defined mega dungeon where interactions can be kept pretty simple.

I think this would be quite doable; my only question is how expensive it would be. Say you have 10,000 tables all playing at once, with the “AI” making a decision once a minute, on average. What kind of scale of computing resources would that require? How much would that cost?
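A quick back-of-envelope on that question; every figure below is an assumption picked purely for illustration, not a quoted price or benchmark:

```python
# All numbers are assumptions for illustration only
tables = 10_000
decisions_per_minute = 1              # per table, on average
tokens_per_decision = 1_500           # assumed prompt + response size
dollars_per_million_tokens = 5.00     # assumed blended API price

requests_per_second = tables * decisions_per_minute / 60
tokens_per_day = tables * decisions_per_minute * 60 * 24 * tokens_per_decision
cost_per_day = tokens_per_day / 1_000_000 * dollars_per_million_tokens

print(f"{requests_per_second:.0f} requests/sec")     # ~167
print(f"{tokens_per_day / 1e9:.1f}B tokens/day")     # ~21.6B
print(f"${cost_per_day:,.0f}/day")                   # ~$108,000
```

Under those made-up numbers the request load itself is modest; the dollar figure is the part that scales painfully, and it moves linearly with whichever assumption you change.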

Note, this would not be a TTRPG, this would just be a computer game that simulates a TTRPG.

^2
 

I'm surprised one person hasn't entered the conversation, or maybe he has, I dunno 🤷‍♀️

So, this part stood out to me because I have started to use ChatGPT to work on my book, and even use it to see how it would read if I did it from a different viewpoint. I would take a section from my WIP and tell it to "rewrite the following but from a third-person viewpoint".

The one time I applied it to gaming needs was when I ran what I had typed up for a setting through it; it pointed out a continuity error and asked which one I wanted to go with after giving me the two choices. I posted about it here: I used ChatGPT for my setting and it was awesome
I believe this is happening in a lot of industries now. It's the idea of the "co-pilot". Excuse the Microsoft term, but I think it's apt.

I'm told a lot of commercial artists use AI to generate a number of concept storyboards for inspiration and then either work fresh or modify one to make their own works.

In corporate accounting, we are using AI to generate an initial analysis of financial data, which is then reviewed and adjusted, or scrapped entirely if it is not appropriate and the work done manually.
 

This is a really interesting thread. It got me thinking, given the strengths and weaknesses, what sort of gameplay could generative “AI” support.

I don’t think that generative programs will be able to replace or even significantly help either traditional or narrative GMs. The complexity and social interactions would, I think, ultimately be beyond the tech for the foreseeable future.

Instead, what if we leaned into the strengths and away from standard advice on how to run an RPG? What I am referring to is what is normally called “Adversarial GMing”. Such game play is normally socially damaging and pretty silly, as the GM can win at any time.

But what if instead it is humans versus the machine? The machine is constrained by the rules of the game in a defined mega dungeon where interactions can be kept pretty simple.

I think this would be quite doable; my only question is how expensive it would be. Say you have 10,000 tables all playing at once, with the “AI” making a decision once a minute, on average. What kind of scale of computing resources would that require? How much would that cost?

Note, this would not be a TTRPG, this would just be a computer game that simulates a TTRPG.

^2
I am really keen to see how AI might be used for computer games, particularly for NPC behaviour. If developed well it could be quite revolutionary in that space.
 

"Bury heads in the sand" is not the same as "deliberately stealing another writers work to train ai chatbot."

The point is that I am trying to shed light on the way that OpenAI trains its models (which is the opposite of "head in sand"). If you don't understand that training a model in this way is taking away the rights of the creators, I don't know what to tell you.

This isn't even factoring in the disastrous effects on the environment. Thank you, but we are very much aware how the tech works, and we are aware of the damage it is causing.

The only people with their heads in the sand are those who are ignoring the facts about how genAI is trained. Unethical data scraping, no consent (opt-out only, or changes to platform terms), and currently facing countless lawsuits. Respected authors like George R.R. Martin are currently suing OpenAI and other AI companies; I don't think this is a coincidence.

Maybe if the OP used publicly available information, instead of data that is copyright protected, we wouldn't be having this conversation.

Two problems with this:

1.) You're confusing the law with morality. Just because something is illegal doesn't mean it's immoral. And it's questionable whether this is even illegal, which brings us to item 2:

2.) Research and transformative use are both covered under fair use. So by any reasonable interpretation this isn't even illegal. The only way this gets banned is if some corrupt judge maliciously misinterprets the law in order to prop up an obsolete industry.
 

And you won't see a lot of difference between "seems like a friend" and "is a friend" until after you find out they've been skimming from your wallet. The difference is in the details.

A passage can seem coherent when each sentence looks grammatically sound and reads well enough on its own. But the sentences fail to be actually coherent when strung together if one does not logically follow from another, information is missing, there is repetition that is not internally consistent, or topics change without explanation.



I do not have an actual example document at hand, and this discussion is not important enough to me to go and build one myself to support my position.

I have, however, read narratives created by ChatGPT, for example. You may start with Greg, Sam, and Beth as characters, but on page two Sally appears with no note of who she is or how she got there. Greg's hair color changes several times over the course of several pages, and in one paragraph we are told that they are driving along in Beth's VW, and a page later they are standing in an ice cream parlor with no transition.

The AI can't form a narrative in which event A causally leads to B leads to C. When it is forming paragraph 17, it is not referring to any prior paragraphs for content, context, or continuity, because an LLM doesn't construct its output based on content, context, or continuity. It doesn't have a concept of causality, of an "event" in a narrative that has "impact and consequences" later in the narrative. It doesn't have the concept of a character that is a person who needs to have consistency of personality, behavior, desires, etc.

The LLM is effectively only doing short-range pattern matching of words and punctuation. The semantic content isn't relevant.
This, however, is correct. LLMs have the memory of a goldfish.
 
