Skill Challenges and random walks.

Hi,

Reading Rel's commentary on his 4E campaign, and seeing the handling of "Escaping a Mine" as a skill challenge, I was struck by the similarity between skill challenges and random walks. My (personal) insight was that random walks seem to be a very good framework for understanding skill challenges, and I am wondering if this has been commented on already. (You might also go all the way to Markov chains and stochastic processes, but I don't want to get that deep into the math.)

The mapping is:

Start at state "0".
The goal is to reach state "+X".
There is a failure if state "-Y" is reached.
At state "A" there is a chance to reach nearby states, usually "A-1", "A", or "A+1". (Transitions that skip states may be possible, allowing for a degrees of success.)

That model already helps to figure out what probabilities "work" for set values of "+X" and "-Y".
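
To make that concrete, here is a quick Python sketch (the transition probabilities and the thresholds X=4 and Y=3 are just numbers I made up for illustration) that estimates how often the walk reaches "+X" before "-Y":

# Rough simulation of the skill challenge as a random walk.
# p_up, p_down, x and y are made-up numbers, not taken from any rules text.
import random

def run_challenge(p_up=0.55, p_down=0.35, x=4, y=3, max_steps=1000):
    """Return True if state +x is reached before state -y."""
    state = 0
    for _ in range(max_steps):
        r = random.random()
        if r < p_up:
            state += 1            # success: move one state up
        elif r < p_up + p_down:
            state -= 1            # failure: move one state down
        # otherwise the state stays where it is
        if state >= x:
            return True
        if state <= -y:
            return False
    return False                  # treat a stalled challenge as a failure

trials = 100_000
wins = sum(run_challenge() for _ in range(trials))
print(f"estimated chance of reaching +X first: {wins / trials:.3f}")

Sweeping the probabilities and thresholds over a grid then shows which combinations give a success chance you are happy with.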

That model also allows for variations (one of which Rel used):

A transition from "A" to "A-1" does not occur: Instead, the state stays at "A" but resources must be spent. (In Rel's example: There is an encounter.)

Also, there could be:

A transition from "A" to "A-1" is obvious, and has an effect on transition probabilities. That would be the case, for example, when the goal is to escape the mine, it is obvious that the last decision has led to a deeper part of the mine. Then, the transition back to "A" would be to backtrack (so a 100% chance to transit back to "A"), or, the task of telling which direction leads upwards could simply be easier to tell.

My apologies if this is ground already covered.

Thx!
 



I don't think we went much into the mathematical modelling for this. Stalker0's skill challenge system was based on statistical concerns, IIRC (e.g. the likelihood of failure/success).

It certainly sounds interesting. ;)

I always wondered how one could put an entire adventure into this kind of framework. Individual encounters (be they combat, skill challenges, or any other situation where you determine success vs. failure) would form the equivalent of each individual check, and lead to different adventure outcomes (possibly more than just pass/fail).
 

I'm not sure people have done much statistical modeling on this, among other things because the negative binomial really covers the simple skill challenge quite well.
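
For the plain "X successes before Y failures" structure that is just a negative binomial tail; a quick sketch, where p is whatever per-check success chance you assume:

# Chance of collecting x successes before y failures when every check
# succeeds independently with probability p (a negative binomial tail).
from math import comb

def challenge_success(p, x, y):
    # Sum over the number of failures f racked up before the x-th success.
    return sum(comb(x - 1 + f, f) * p**x * (1 - p)**f for f in range(y))

print(challenge_success(0.6, 6, 3))   # e.g. 6 successes before 3 failures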

Modeling a skill challenge as a random walk is quite compelling, but has a couple of challenges:

A) Your state space may be larger than just the number of successes. For example, some checks only give you a bonus or penalty to the next roll, hence changing your transition matrix. You could model that by introducing states like k(B), indicating that you have k successes and a bonus to your next roll (see the sketch after point B). But especially if bonuses can be cumulative, that may make things quite unwieldy.

B) You may have to model the Markov chain as time-dependent. If, for example, skill X may be used only 3 times in the entire challenge, players will use different skills at different times. You can probably assume that each player uses his optimal skill whenever he rolls, and thus predict how the transition probabilities change over time.
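
To make both points concrete, here is a rough sketch (all numbers are placeholders). For A, the bonus becomes part of the state; for B, one alternative to a time-dependent chain is to fold the remaining uses of the limited skill into the state:

# Point A: make the bonus part of the state, (successes, has_bonus).
def probs_with_bonus(successes, has_bonus, p=0.55, bonus=0.15):
    p_eff = min(p + (bonus if has_bonus else 0.0), 1.0)
    return {
        (successes + 1, False): p_eff,               # success; bonus is used up
        (successes, True):      (1 - p_eff) * 0.5,   # no progress, bonus on next roll
        (successes - 1, False): (1 - p_eff) * 0.5,   # plain failure
    }

# Point B: instead of a time-dependent chain, track remaining uses of the
# limited skill in the state; the best skill is used while uses remain.
def check_chance(uses_left, p_good=0.7, p_fallback=0.5):
    if uses_left > 0:
        return p_good, uses_left - 1
    return p_fallback, 0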
 

At state "A" there is a chance to reach nearby states, usually "A-1", "A", or "A+1". (Transitions that skip states may be possible, allowing for a degrees of success.)

A transition from "A" to "A-1" does not occur: Instead, the state stays at "A" but resources must be spent. (In Rel's example: There is an encounter.)
This makes it sound like you want to model the skill challenge as a finite state automaton, so that instead of simple pass/fail, there can be many identifiable intermediate states and many outcomes.
 

That model also allows for variations (one of which Rel used):

A transition from "A" to "A-1" does not occur: Instead, the state stays at "A" but resources must be spent. (In Rel's example: There is an encounter.)

Also, there could be:

A transition from "A" to "A-1" is obvious, and has an effect on transition probabilities. That would be the case, for example, when the goal is to escape the mine, it is obvious that the last decision has led to a deeper part of the mine. Then, the transition back to "A" would be to backtrack (so a 100% chance to transit back to "A"), or, the task of telling which direction leads upwards could simply be easier to tell.

Rather than have the transition probabilities change, you could just split each state A into many states A_1, A_2, A_3, ....

For example, once resources have been spent, you transition from, say, A_1 to A_2, and from that point on you cannot go to other states with index 1; e.g., you can go from A_2 to B_2, but not from A_2 to B_1 -- that is, unless you allow for the possibility of regaining spent resources. Similarly for the mine example.

Of course, this increases the state space and therefore the number of transition probabilities (size of the transition matrix), but this sounds like good, clean fun.
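
A tiny sketch of that splitting, treating the index as part of the state (probabilities are illustrative): a state is (progress, index), where index 1 means resources are still unspent and index 2 means they are spent, with no way back from 2 to 1.

def neighbours(progress, index, p_up=0.55, p_cost=0.15):
    if index == 1:
        # Resources still available: the costly outcome switches to track 2.
        return {(progress + 1, 1): p_up,
                (progress, 2): p_cost,
                (progress - 1, 1): 1 - p_up - p_cost}
    # Resources already spent: the costly outcome is just another failure.
    return {(progress + 1, 2): p_up,
            (progress - 1, 2): 1 - p_up}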
 

I am not sure I understand. Is the OP saying that the result of the first roll should influence the second roll of a skill challenge, or that skill challenge results should lie along a dimension, so that sequences of skill challenges form random walks?
 

Pets & Sidekicks

Remove ads

Top