tomBitonti
Hi,
Reading Rel's commentary on his 4E campaign, and seeing the handling of "Escaping a Mine" as a skill challenge, I was drawn to the similarity between skill challenges and random walks. My (personal) insight was that random walks seem to be a very good framework for understanding skill challenges, and I am wondering if this has been commented on already. (You might also go all the way to Markov Chains and Stochastic Processes, but I am not wanting to get that deep into the math.)
The mapping is:
Start at state "0".
The goal is to reach state "+X".
There is a failure if state "-Y" is reached.
At state "A" there is a chance to reach nearby states, usually "A-1", "A", or "A+1". (Transitions that skip states may be possible, allowing for degrees of success.)
That model already helps to figure out what probabilities "work" for set values of "+X" and "-Y".
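To make that concrete, here's a quick Monte Carlo sketch of the basic walk (Python; the function names and the specific probabilities are just mine for illustration). It estimates how often a party starting at 0 reaches "+X" before "-Y", given a per-check chance of stepping up or down:

```python
import random

def run_challenge(x, y, p_up, p_down, max_steps=10_000):
    """One skill challenge as a random walk: start at state 0,
    succeed on reaching +x, fail on reaching -y.
    Each check moves up with probability p_up, down with
    probability p_down, and otherwise stays put."""
    state = 0
    for _ in range(max_steps):
        r = random.random()
        if r < p_up:
            state += 1
        elif r < p_up + p_down:
            state -= 1
        if state >= x:
            return True
        if state <= -y:
            return False
    return False  # walk stalled out; count as a failure

def estimate_success(x, y, p_up, p_down, trials=10_000):
    """Monte Carlo estimate of the overall success chance."""
    wins = sum(run_challenge(x, y, p_up, p_down) for _ in range(trials))
    return wins / trials
```

For example, with a 65% up / 35% down split and X = Y = 3, the estimate comes out around 0.86, which matches the classic gambler's-ruin formula for a biased walk, so the model really does let you tune probabilities against chosen "+X" and "-Y" values.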
That model also allows for variations (one of which Rel used):
A transition from "A" to "A-1" does not occur: Instead, the state stays at "A" but resources must be spent. (In Rel's example: There is an encounter.)
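That variation is easy to add to the same sketch (again, names and numbers are mine): a failed check no longer moves the state down, it just stays put and racks up a cost, here counted as encounters. With no "-Y" state the challenge always succeeds eventually; what varies is how much it costs:

```python
import random

def run_with_encounters(x, p_up, p_down, max_steps=10_000):
    """Variant: a failed check never moves the state down.
    Instead the party stays at the same state and pays a cost
    (here: one encounter).  Returns the number of encounters
    endured before reaching +x."""
    state, encounters = 0, 0
    for _ in range(max_steps):
        r = random.random()
        if r < p_up:
            state += 1
        elif r < p_up + p_down:
            encounters += 1  # stay put, spend resources
        if state >= x:
            return encounters
    return encounters
```

Averaging over many runs then tells you the expected resource drain for a given difficulty, which is a useful knob to tune separately from the success chance.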
Also, there could be:
A transition from "A" to "A-1" is obvious to the players, which in turn affects the transition probabilities. That would be the case, for example, when the goal is to escape the mine and it is obvious that the last decision led to a deeper part of the mine. The transition back to "A" could then be automatic backtracking (a 100% chance to return to "A"), or it could simply be easier to tell which direction leads upward.
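That "obvious slip" idea can also be folded into the walk (a sketch under my own assumptions: after a visible downward step, the very next check uses a boosted recovery probability, with 1.0 meaning guaranteed backtracking):

```python
import random

def run_with_obvious_slips(x, y, p_up, p_down, p_recover=1.0,
                           max_steps=10_000):
    """Random-walk challenge where a downward step is obvious:
    on the check immediately after a slip, the party moves back
    up with probability p_recover instead of p_up (and cannot
    slip again on that check).  p_recover=1.0 models guaranteed
    backtracking."""
    state, just_slipped = 0, False
    for _ in range(max_steps):
        up = p_recover if just_slipped else p_up
        down = 0.0 if just_slipped else p_down
        r = random.random()
        just_slipped = False
        if r < up:
            state += 1
        elif r < up + down:
            state -= 1
            just_slipped = True
        if state >= x:
            return True
        if state <= -y:
            return False
    return False
```

With guaranteed backtracking every slip is immediately undone, so the party can never string together enough downward steps to reach "-Y"; dialing p_recover below 1.0 interpolates back toward the plain walk.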
My apologies if this is ground already covered.
Thx!