nopantsyet
First Post
This is a starter discussion related to the thread Open Source d20 API/Engine. The purpose of this thread is to begin the kind of discussion I imagine will be necessary to develop a portable, reusable software implementation of the d20 SRD. As I mentioned in the original thread, if discussion indicates that this is a viable project, we can talk about how to turn these dialogues into practical results in a consistent manner.
To start out, I thought I would go in the same order the SRD does, so my first topic is the core mechanic. It might seem odd to start the discussion here, but it's a very simple topic that illustrates a number of the design decisions that need to be made.
The SRD presents the core mechanic as d20 + modifier vs. target number. But it goes on to explain that other dice may be used, and that those results may be applied in different ways. So at some level the engine has to know where the d20 mechanic applies and where other dice mechanics apply. Furthermore, it has to provide for rule-specific mechanics (e.g. turning, the Jump skill).
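To make that concrete, here's a minimal sketch of the core mechanic in Python. The function name, parameters, and the default of 20 faces are my own illustrative choices, not anything mandated by the SRD:

```python
import random

# Hypothetical sketch of the SRD core mechanic: roll + modifier vs. target.
def check(modifier, target, faces=20):
    roll = random.randint(1, faces)  # d20 by default; other dice may be used
    return roll + modifier >= target
```

A DC 15 skill check with a +4 modifier would then be `check(4, 15)`.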
It seems all resolution scenarios break down into as many as three discrete components: a determination of success, a determination of degree, and an adjudication of effect. Leaving effect to the rule to which it applies, we should consider the mechanic in relation to both success and degree.
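The success/degree split might look like this; `Resolution` and `resolve` are hypothetical names, and I'm using the margin over the target as the degree, which is one reasonable reading but not the only one:

```python
import random
from dataclasses import dataclass

@dataclass
class Resolution:
    success: bool
    degree: int  # margin of success or failure; effect is left to the rule

def resolve(modifier, target, faces=20):
    total = random.randint(1, faces) + modifier
    return Resolution(success=total >= target, degree=total - target)
```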
Why abstract it like this rather than just putting it in the code? Well, we want to allow alternate rules to be used, and those rules might have alternate resolution mechanics for certain actions. We need to be able to override the mechanic for any given entity.
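As a rough sketch of per-entity overriding (all names here are illustrative; "take ten" stands in for any alternate rule that replaces the roll):

```python
import random

def default_d20(modifier, target):
    return random.randint(1, 20) + modifier >= target

def take_ten(modifier, target):
    # an alternate rule replacing the die roll with a flat 10
    return 10 + modifier >= target

# hypothetical override table, keyed by entity id
overrides = {"jump-check": take_ten}

def resolve_for(entity_id, modifier, target):
    mechanic = overrides.get(entity_id, default_d20)
    return mechanic(modifier, target)
```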
What does this imply about the data model? First of all, that it will be object-oriented to some degree, allowing entities to be defined and reused or defined inline. Second, that there will be some extensibility points for implementing and replacing the necessary functions. For example, I might consider using a true entropic random number generator (random.org), but for the most part I would like to use dice for resolution. So one implementation might get the random number from random.org, and another might pop up a window telling me to enter the die rolls.
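One way to sketch such an extensibility point: anything with a `roll(faces)` method can be plugged in, whether it's a local pseudo-random generator, a prompt for physical dice, or a client for random.org's service. These class and function names are my own, not part of any existing engine:

```python
import random

class LocalDice:
    """Default implementation: local pseudo-random rolls."""
    def roll(self, faces):
        return random.randint(1, faces)

class ManualEntry:
    """Alternative implementation: the user types in physical die rolls."""
    def roll(self, faces):
        return int(input("Enter your d%d roll: " % faces))

def d20_check(rng, modifier, target):
    # any object with a roll(faces) method can be substituted here
    return rng.roll(20) + modifier >= target
```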
Unfortunately, this leads us to the point of having to impose some aspect of implementation: every implementation must support the same integration points. But is that worse than requiring that every implementation correctly implement the rule set? It's probably easier to reimplement the extensibility model than to reimplement the rules. I think it is important that rules be self-describing in order to support portability and overrides.
So I would propose a base entity, which I'll describe as:
Code:
<mechanic id="dX">
    <parameter id="rolls" type="int"/>
    <parameter id="faces" type="int"/>
    <parameter id="bonus" type="int"/>
    <implementation>
        <!-- Implementation via extensibility points -->
    </implementation>
</mechanic>
<mechanic id="d20">
    <parameter id="faces" type="int"/>
    <parameter id="bonus" type="int"/>
    <implementation>
        <!-- Implementation via extensibility points -->
    </implementation>
</mechanic>
Let's ignore for the moment the question of doing some Kung Fu like making d20 an inherited implementation of dX. What I want to know are your thoughts on the design questions I've raised and the approaches I've suggested.
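For a sense of how such self-describing definitions might be consumed, here is a sketch using Python's standard ElementTree parser. Note I've wrapped the two definitions in a `<mechanics>` root element of my own invention, since an XML parser needs a single root; nothing else about the format is settled:

```python
import xml.etree.ElementTree as ET

# The two mechanic definitions, wrapped in a hypothetical <mechanics> root.
SOURCE = """
<mechanics>
  <mechanic id="dX">
    <parameter id="rolls" type="int"/>
    <parameter id="faces" type="int"/>
    <parameter id="bonus" type="int"/>
  </mechanic>
  <mechanic id="d20">
    <parameter id="faces" type="int"/>
    <parameter id="bonus" type="int"/>
  </mechanic>
</mechanics>
"""

def load_mechanics(xml_text):
    """Map each mechanic id to the list of its parameter ids."""
    root = ET.fromstring(xml_text)
    return {
        m.get("id"): [p.get("id") for p in m.findall("parameter")]
        for m in root.findall("mechanic")
    }
```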
.:npy:.