d20Engine: Core Mechanic

Firzair said:
Yeah, that's about what I figured a universally usable chargen needs to be.
A bit of a problem with this approach is all the XML coding to be done (I don't like XML).
But seeing what you've done there, I think perhaps I should learn C++... it seems good at building JavaScript-enabled objects.

I use C++ because that's what I'm comfortable with. I used Mozilla's SpiderMonkey JavaScript engine. But to be fair to Java, there's also Rhino/XPConnect, which provides the same type of functionality.

I chose JavaScript since it's a well-known language and well suited to object scripting.

Firzair said:
The distinction between data and code is good, as long as the code doesn't define the data. My first approach in my program was to encode the whole ruleset in the objects' data, so I ended up with the rules for gaining new feats inside the rules for the character level data. While this could easily be changed using the object administration GUI, it's not easy to change such a rule via a modification rule. So I will just put the rules that define data directly into the data.

I agree, as long as the rules are not dependent on program code (and herein lies the challenge): either use scripting, like JavaScript, with a common API, or use abstract rules, like RDF-OWL/RuleML/SWRL, that a program can deal with as it pleases.
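A minimal Python sketch of that "rules as data with a common API" idea (all field names like `attribute` and `threshold` are invented for illustration): the rule lives entirely in data, so it could just as well be stored as XML or RDF, and the engine exposes only a generic evaluation API.

```python
# Hypothetical sketch: a rule stored as pure data, evaluated through a
# generic API. The engine knows nothing about feats; it only resolves the
# attribute name and comparison operator named in the rule data itself.

import operator

OPS = {">=": operator.ge, "<=": operator.le, "==": operator.eq}

def check_rule(entity, rule):
    """Evaluate one data-defined prerequisite against any entity (a dict)."""
    value = entity.get(rule["attribute"])
    return value is not None and OPS[rule["op"]](value, rule["threshold"])

# The rule itself is plain data -- no program code embedded in it.
power_attack_prereq = {"attribute": "Str", "op": ">=", "threshold": 13}

fighter = {"name": "Tordek", "Str": 15, "Dex": 10}
wizard = {"name": "Mialee", "Str": 8, "Int": 17}

print(check_rule(fighter, power_attack_prereq))  # True
print(check_rule(wizard, power_attack_prereq))   # False
```

Because the engine only interprets generic fields, errata or house rules become data edits rather than code changes.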

Firzair said:
What I want to know is what size of footprint (memory) your program would need. Does it parse all the XML files, and do all objects always exist in memory? Those XML files can get really big; how fast is XML parsing? While I see XML as a somewhat unified export format, I wonder if it's usable for an application...

Yes, it can get big. Even in my demo, with a small subset of the SRD, it reaches 25 MB. But I haven't made any attempts to optimize memory usage; the XML is loaded and used as is. I've since come to the opinion that an internal binary representation would probably be better (i.e., take the XML as a definition of an object that is created internally by the program, rather than the XML itself being the object).
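A rough Python sketch of that "XML as a definition, not as the object" approach, using a made-up spell schema: parse once with the standard library, build compact objects, and let the parsed tree be garbage collected instead of keeping it resident.

```python
# Sketch: XML defines an internal object; only small dataclass instances
# survive parsing. The element/attribute names here are hypothetical, not
# a real SRD schema.

import xml.etree.ElementTree as ET
from dataclasses import dataclass

SPELLS_XML = """
<spells>
  <spell name="Magic Missile" level="1" school="Evocation"/>
  <spell name="Fireball" level="3" school="Evocation"/>
</spells>
"""

@dataclass(frozen=True)
class Spell:
    name: str
    level: int
    school: str

def load_spells(xml_text):
    root = ET.fromstring(xml_text)
    # Build the compact internal representation; the DOM is discarded
    # once this function returns.
    return [Spell(e.get("name"), int(e.get("level")), e.get("school"))
            for e in root.iter("spell")]

spells = load_spells(SPELLS_XML)
print(spells[1].name, spells[1].level)  # Fireball 3
```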

Andargor
 


Okay...finally reading up on RDF-OWL. More thoughts to come.

As far as tools go, I was thinking I could post a Wiki on my web space to try to flesh more of this out. We could start nailing down some of the design principles and then start defining the actual model. I'll set it up if there's an interest.
 

There's a good high-level presentation on RDF and OWL here, for those interested.

Looks like my current projects are aligned with your needs. I'm exploring the possibility of semi-automatically generating an ontology for all elements in the SRD from my database. I'm looking into using Protégé to help with this, combined with some scripting to feed it.

It has plug-ins for OWL, RDF, DAML+OIL, RuleML, SWRL, XML, so any work can be exported in the appropriate format. Also, there are plug-ins for merging ontologies, so it's possible for several people to work on it.

Oh, why do I need this? Well, I'm working on my next-generation offline searchable reference tool, and I've hit a snag with automated hyperlinking: ambiguous terms. So I'm jumping on the "semantic web" bandwagon and drafting the ontology to help better target what gets linked to what. Still, the ontology is also immensely useful for character generators.

Andargor
 

The ontological approach is conceptually close to what I have been imagining. It looks like OWL provides a good framework for defining this. It brings up the question of whether all of the mechanical aspects of the game can be adequately described within that framework. How complete is your implementation? What are your plans for it?
 

Planesdragon said:
AOP? Care to elaborate for the peanut gallery?

Aspect-oriented programming. I'm not the best person to explain it (try Google), but basically it's a programming model where, instead of defining what you can do with a particular object, you define what an object has to have for you to be able to do something with it. Take Strength checks in d20, for instance. In the OOP world you would define a creature and say that creatures can make Strength checks. In the AOP world you would ask whether a creature (or table, or character, or skill, or whatever) has a Strength, and if it does, it can make a Strength check. As I said, I'm not the person to explain this. :)
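A tiny Python illustration of that duck-typed idea (class and function names invented): nothing declares up front which classes can make Strength checks; the engine just asks whether the object has a Strength score at all.

```python
# Duck-typed sketch: a Strength check applies to anything that has a
# Strength, and is simply inapplicable to anything that doesn't.

import random

class Creature:
    def __init__(self, strength):
        self.strength = strength

class Door:
    pass  # no Strength score; cannot attempt the check

def strength_check(thing, dc, roll=None):
    """Return True/False if `thing` has a Strength, else None (inapplicable)."""
    if not hasattr(thing, "strength"):
        return None
    roll = roll if roll is not None else random.randint(1, 20)
    modifier = (thing.strength - 10) // 2   # standard d20 ability modifier
    return roll + modifier >= dc

ogre = Creature(strength=21)
print(strength_check(ogre, dc=15, roll=10))  # True (10 + 5 >= 15)
print(strength_check(Door(), dc=15))         # None
```

If a house rule later gives animated furniture a Strength score, it can make checks with no engine changes, which is the appeal of the approach described above.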

Yes, they would. Or rather, they would for a program that runs the rules. For those people, the programmers, we can make a standard that uses specific rules and treats them consistently--someone who rules that, say, special wounds cannot be treated normally would have condition tracking that states [cannot heal HP] or [all [heal] checks automatically fail].

I support having a very small set of functionality that a program can use to build other functionality (as hinted at by my examples above). I worked on it some more this weekend as well. I have an XML language that can now define the entire first chapter of the SRD from only about one to two dozen functions.

Yahoo has file storage that d20-XML and the FGA have used. PHP seems to be a common enough skill in the RPG community that we might be able to find something there.

I have a website I can host it on. And I suppose I COULD do something up in PHP. I was just hoping someone knew of some sort of simple collaboration software already out there (like SharePoint integrated with Office, only free).

The key, IME, is setting up whatever we/you do so that someone else can come along and pick up where we left off. (Look at Andargor; he did a lot of work, and he's eager for someone else to pick up where HE left off.)

Yeah, I haven't had a lot of time (I've been excitedly working on the XML rules language) to check it out, but if someone already has a good set of data (like all the spells in XML, etc.), that would be great. Even if it doesn't quite work with a more general design, we should be able to do an XSLT transformation on it to make it work for us (and save us HUGE amounts of time on data entry).
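Since the point is mechanical schema conversion, here is a rough Python stand-in for such an XSLT transformation (the standard library has no XSLT processor, and both schemas here are invented for the example): rewrite a third-party spell format into our own instead of re-entering the data.

```python
# Sketch of adapting someone else's XML to our schema with ElementTree,
# standing in for an XSLT stylesheet. Element and attribute names on both
# sides are hypothetical.

import xml.etree.ElementTree as ET

THEIR_XML = """
<spellList>
  <entry title="Haste" lvl="3"/>
  <entry title="Shield" lvl="1"/>
</spellList>
"""

def transform(xml_text):
    theirs = ET.fromstring(xml_text)
    ours = ET.Element("spells")
    for entry in theirs.iter("entry"):
        # Map their attribute names onto ours, one element at a time.
        ET.SubElement(ours, "spell",
                      name=entry.get("title"),
                      level=entry.get("lvl"))
    return ET.tostring(ours, encoding="unicode")

print(transform(THEIR_XML))
```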
 

andargor said:
First, my take on any XML or other implementation of an engine, in summary:

- There are three basic layers: data, engine, and GUI.

I see it as a bit more complicated than that: rules, data, engine, application, and GUI. I see data (as in character data) and rules (which you describe as data) as two very similar yet separate layers (both XML data, one highly mutable, one not). The engine is a simple mechanism for interpreting the rules layer. The application works with the engine layer to "apply" the rules to the data, with the goal of transforming the data into a new internal set (I guess you could call this the transformation layer) that is then exposed to the GUI layer for final presentation and interaction.

I feel that the data/engine/GUI abstraction works well for transactional business software (especially for load balancing and security), but in a small application (small in the sense that it is designed to run locally and isn't critical software), it simplifies things in a way that makes certain things difficult (though not impossible).
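A toy Python sketch of the layering described above (all names invented; `eval` stands in for a real rule interpreter): rules and character data are both plain data, a small engine interprets the rules, and the derived view is the "transformation layer" that the GUI would consume.

```python
# Toy layering sketch: rules layer and data layer are both plain data;
# the engine applies rules in order, producing the derived view.
# eval() is fine for a sketch; a real engine would parse rule expressions.

RULES = [
    {"derive": "Str_mod", "from": "Str", "formula": "(v - 10) // 2"},
    {"derive": "melee_attack", "from": "Str_mod", "formula": "v + 1"},  # +1 BAB assumed
]
DATA = {"name": "Tordek", "Str": 15}

def engine_apply(rules, data):
    """Interpret the rules layer against the data layer; return the derived view."""
    view = dict(data)                 # never mutate the source data layer
    for rule in rules:
        view[rule["derive"]] = eval(rule["formula"], {"v": view[rule["from"]]})
    return view

view = engine_apply(RULES, DATA)      # the "transformation layer"
print(view["Str_mod"], view["melee_attack"])  # 2 3
```

Because later rules can read values derived by earlier ones, house rules are just edits to the `RULES` list, which is the mutability split described above.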

- Data should be separate from code. That is, data is static--what comes from the books--and should never change except for errata (which is the bane of PCGen: when there are code changes, everything follows, since the LST language mixes processing with static data).

Can't disagree there (at least not ideologically). But I do think the code (in this case, what I call the rules layer) should be represented and exposed as data so that it can be easily modified, transformed, errata'd, house-ruled, etc.

- There is a gray zone between code and data, what I term "business rules": how do you describe what a feat or other special ability does, for example? One way to do this in a code-independent way is markup such as RDF-OWL (which has been proposed) or rule markup like RuleML or SWRL (a W3C Member Submission).

This is what I call the rules layer. I am not familiar with SWRL, but I do think RDF-OWL and, to a lesser extent, RuleML are the wrong angle of approach. I think coming up with a dedicated markup language could greatly simplify the end users' (house rulers') lives. I would take some convincing to believe those standards are worth implementing for this purpose; it seems like development overkill that would only slow things down later.

- The engine is the core of any program, in your language of preference. It loads the data into an internal representation, which may be custom, and implements the "business rules" on the data.

The only thing I would like to add is that I think the .NET framework is worth looking at, as it has the best balance of cross-platform support and language independence.

- GUI interfaces with the engine and provides user interaction.

If you've looked at the prototype engine I posted, data is encoded in XML and the business rules are implemented in JavaScript (although RuleML looks more attractive, since it is more portable). The engine and GUI code is in C++, but that becomes irrelevant if the base data and rule implementation are done correctly.


Finally, this is an overview of my personal vision of what an open framework is:

(snipped image, see above)

Don't take this the wrong way (it's mostly a joke), but it is my humble opinion that flow charts and data models are for presenting to the executives so that they can have input and thus slow down the progress of developers.
 

Firzair said:
Yeah, that's about what I figured a universally usable chargen needs to be.
A bit of a problem with this approach is all the XML coding to be done (I don't like XML).

You can always write in some other kind of structured way and automate a transformation into XML. That's why XML is so widely used. Further, XML being widely used is, in my opinion, the reason it should be used.
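As an illustration of "write it some other way, transform to XML", a toy Python converter from a simpler, entirely hypothetical plain-text format into XML, so nobody has to type angle brackets by hand:

```python
# Sketch: a tiny indented "key: value" format (invented for this example)
# converted mechanically into XML via the standard library.

import xml.etree.ElementTree as ET

PLAIN = """\
feat: Dodge
  type: General
  prerequisite: Dex 13
"""

def plain_to_xml(text):
    lines = [l for l in text.splitlines() if l.strip()]
    kind, _, name = lines[0].partition(": ")     # first line names the element
    elem = ET.Element(kind, name=name)
    for line in lines[1:]:                       # remaining lines become children
        key, _, value = line.strip().partition(": ")
        ET.SubElement(elem, key).text = value
    return ET.tostring(elem, encoding="unicode")

print(plain_to_xml(PLAIN))
```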

But seeing what you've done there, I think perhaps I should learn C++... it seems good at building JavaScript-enabled objects.

I just want to say that you don't learn C++ because it is particularly good at anything, but because it is fairly good at everything. There is almost always something better for a particular task; in this case, Java would probably be the better choice.

The distinction between data and code is good, as long as the code doesn't define the data. My first approach in my program was to encode the whole ruleset in the objects' data, so I ended up with the rules for gaining new feats inside the rules for the character level data. While this could easily be changed using the object administration GUI, it's not easy to change such a rule via a modification rule. So I will just put the rules that define data directly into the data.

What I want to know is what size of footprint (memory) your program would need. Does it parse all the XML files, and do all objects always exist in memory? Those XML files can get really big; how fast is XML parsing? While I see XML as a somewhat unified export format, I wonder if it's usable for an application...

Greetings
Firzair

While I can't speak to his particular app, I see he is using the traditional open-source XML library (libxml2), which is an extremely good XML processing engine: it is fast, has a small footprint, the works. With a good XML processing engine, it is almost always more efficient (memory- and speed-wise) to use XML than a custom binary data format, because the engine is so well optimized. You can, of course, always create something faster and smaller using low-level code, but the time spent trying to beat the work of the thousands of developers before you is not worth it.
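As a rough illustration of why footprint need not be a deal-breaker, even Python's standard library can stream a large document and discard each element once processed, so the whole file never sits in memory at once (the schema here is invented):

```python
# Streaming parse sketch: iterparse yields elements as they close; calling
# clear() on each one keeps memory usage flat regardless of document size.

import io
import xml.etree.ElementTree as ET

BIG_XML = "<spells>" + "".join(
    f'<spell name="Spell{i}" level="{i % 10}"/>' for i in range(1000)
) + "</spells>"

def count_by_level(stream, wanted_level):
    count = 0
    for event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "spell":
            if int(elem.get("level")) == wanted_level:
                count += 1
            elem.clear()  # free the element immediately
    return count

print(count_by_level(io.StringIO(BIG_XML), wanted_level=3))  # 100
```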
 

andargor said:
It has plug-ins for OWL, RDF, DAML+OIL, RuleML, SWRL, XML, so any work can be exported in the appropriate format. Also, there are plug-ins for merging ontologies, so it's possible for several people to work on it.

Oh, why do I need this? Well, I'm working on my next-generation offline searchable reference tool, and I've hit a snag with automated hyperlinking: ambiguous terms. So I'm jumping on the "semantic web" bandwagon and drafting the ontology to help better target what gets linked to what. Still, the ontology is also immensely useful for character generators.

Andargor

I have a licensing question. I believe you said you were using GPL, but is that only for your own code with the rest LGPL, or have you integrated other GPL apps into your own?
 

nopantsyet said:
The ontological approach is conceptually close to what I have been imagining. It looks like OWL provides a good framework for defining this. It brings up the question of whether all of the mechanical aspects of the game can be adequately described within that framework. How complete is your implementation? What are your plans for it?

I'm a novice in OWL, having just been convinced that it is the way to go. A year ago, when I made my attempt at an object-oriented (not in the programming sense) character generator, my data wasn't in the shape it is in now. I viewed an ontology as unnecessary drudgery, but now I see that most of it can be generated from script (I hate manual work) and then adjusted. :)

I'm leaving for a biz trip next week, but in a couple of weeks, I'll make a run through my database and generate classes, subclasses, and instances. I'll make it available to everyone, so until then you can be as skeptical about the quality of the results as you wish :D . Note that I hate debating over vaporware ad infinitum, so I'll just do something and then we can argue over it. Again, if RL doesn't interfere...

The beauty of ontologies is that they can be modified easily. Because--and get ready for this--we are going to argue over the model. With all due respect (a large amount of respect) to d20-XML, this is why it didn't go anywhere: everyone has their own opinion on the "correct model". Perhaps because "using XML" was too generic, and each person's chosen implementation (whether language or approach) was too different.

Ontologies abstract that away. As soon as you start thinking about "how will my program load this?", you know you are going in the wrong direction. Ontologies are all about "what is this and what does it do?" rather than "how do I use this?", and hence offer greater potential for agreement on a model (I think everyone will agree, for example, that there are Special Abilities, and that they come in Sp, Su, Ps, or Ex flavors). And you can work on a specific sub-tree of definitions to aggregate later. Or, if you don't like the "consensus" (or compromise, which will probably be the case), you can replace a branch with your own definitions.
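A toy sketch of that "what is this?" style of modeling, as subject/predicate/object triples in Python (the terms are invented for illustration, and this is not real OWL syntax): the consumer decides how to walk the subclass links, which is exactly the "each program loads it its own way" point above.

```python
# Facts as triples; subclass reasoning is done by whoever consumes them.

TRIPLES = {
    ("SpellLikeAbility", "subClassOf", "SpecialAbility"),
    ("SupernaturalAbility", "subClassOf", "SpecialAbility"),
    ("PsiLikeAbility", "subClassOf", "SpecialAbility"),
    ("ExtraordinaryAbility", "subClassOf", "SpecialAbility"),
    ("Darkvision", "type", "ExtraordinaryAbility"),
    ("FlameStrike", "type", "SpellLikeAbility"),
}

def is_a(thing, cls):
    """True if `thing` is an instance of `cls`, following subClassOf links."""
    frontier = {o for s, p, o in TRIPLES if s == thing and p == "type"}
    seen = set()
    while frontier:
        c = frontier.pop()
        if c == cls:
            return True
        if c in seen:
            continue                 # guard against cyclic definitions
        seen.add(c)
        frontier |= {o for s, p, o in TRIPLES if s == c and p == "subClassOf"}
    return False

print(is_a("Darkvision", "SpecialAbility"))   # True
print(is_a("Darkvision", "SpellLikeAbility")) # False
```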

reanjr said:
This is what I call the rules layer. I am not familiar with SWRL, but I do think RDF-OWL and to a lesser extent, RuleML are kind of the wrong angle of approach. I think coming up with a dedicated markup language could greatly simplify the end users' (house rulers') lives. I would take some convincing to believe those standards would be worth implementing for this purpose. Seems like overkill in development that would only slow things down later.
...
The only thing I would like to add is that I think the .NET framework is worth looking at as it has the best balance of cross-platform and language-independence.

For me, anyway, the ontology will serve to produce "intermediate" libraries which my programs will use (e.g., JavaScript code generation based on rules, maybe even C++ classes). Each person's program implementation will be different (I see Java, C++, .NET, libxml2, Delphi, and Lisp mentioned), but using standards for the ontology and rules means they will always be able to regenerate the intermediate libraries required for each implementation. I personally am not going to install a Lisp interpreter in my program, however nice the language is, nor would I look forward to coding a Lisp-to-JavaScript converter. I much prefer to use XSLT/XPath/parsing on OWL to get what I need, and regenerate at will. :)

This is why I will resist anything other than a standards-based ontology, such as a custom scripting language for defining rules: after the PCGen experience of trying to make my tools interoperable with it, I can say that a constantly moving target is no fun.
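The "regenerate intermediate libraries per implementation" idea above can be sketched in a few lines of Python (rule fields and function names are invented): one abstract rule record, emitted for two different host languages.

```python
# Sketch: the same abstract rule record emitted as a JavaScript snippet for
# one program and as Python source for another.

RULE = {"name": "powerAttackPrereq", "attribute": "Str", "min": 13}

def to_javascript(rule):
    """Emit the rule as a JavaScript predicate (for a JS-hosted engine)."""
    return (f"function {rule['name']}(c) {{ "
            f"return c.{rule['attribute']} >= {rule['min']}; }}")

def to_python(rule):
    """Emit the same rule as a Python predicate (for a Python-hosted engine)."""
    return (f"def {rule['name']}(c):\n"
            f"    return c['{rule['attribute']}'] >= {rule['min']}\n")

print(to_javascript(RULE))

# The generated Python can be loaded straight into a namespace:
generated = {}
exec(to_python(RULE), generated)
print(generated["powerAttackPrereq"]({"Str": 15}))  # True
```

Since both emitters read the same record, errata to the rule regenerate both targets consistently, which is the interoperability point being made above.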

reanjr said:
Don't take this the wrong way (it's mostly a joke), but it is my humble opinion that flow charts and data models are for presenting to the executives so that they can have input and thus slow down the progress of developers.

Mostly? ;)

Bah, my job is to babysit executives so they understand what needs to be done (it's called "pre-chewing the food"). So Visio and PowerPoint are my main weapons. :)

reanjr said:
I have a licensing question. I believe you said you were using GPL, but is that only for your own code with the rest LGPL, or have you integrated other GPL apps into your own?

You mean for the demo, or for all the other apps on my site? Well, if I say it's GPL, then the only snippets I've used should be GPL. Otherwise I usually go with the strictest license in the code I reuse, such as "free for non-commercial use" or some such, to respect the original author's wishes. I may be wrong, though; it has been known to happen occasionally. ;)

Andargor
 

Hi, joining this from the other thread. Semi-random comments follow.

I don't yet have a formal ontology for my project (using generic RDF at the moment) but it is pretty clearly the right way to go. If someone wants to propose or generate one, that would be a good place to start analysis - but not necessarily here on the forum.

1) AI engines consume RDF (and, by extension, OWL). For our purposes, that means we can feed rules into an engine to detect contradictions and discover implicit relationships.

2) RDF understands URLs and can be written as XML, so rules can be scattered around the web and still be pulled together.

3) It's perfectly fine to start concrete work with raw XML and later turn it into RDF. Just use URLs as the rule identifiers, and hrefs to refer to rules. So brainstorming stuff in XML works just fine.

4) Not everything needs to be defined in the core ontology, and we should ignore stuff that is not in the SRD for the initial version. Future work can extend and generalize.

To elaborate on the last point - I don't think we need to generalize the core mechanic. Every program can (and probably will) hard-code that assumption. I don't have any books that require dice pools, and when I get one I can owl:import the core ontology and create some new terms and relationships. Unlike OO programming, I can retroactively add superclasses and properties.
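A minimal Python illustration of that retroactive extension (all terms invented): with facts stored as triples, a later supplement can bolt a new superclass onto an existing class without touching the original definitions, which a static class hierarchy would not allow.

```python
# Triples again: the core "ontology" ships one fact, and an extension adds
# a superclass to an existing class after the fact.

triples = {("d20Die", "subClassOf", "Die")}

def superclasses(cls):
    """All (transitive) superclasses of `cls` in the current triple set."""
    out, frontier = set(), {cls}
    while frontier:
        c = frontier.pop()
        for s, p, o in triples:
            if s == c and p == "subClassOf" and o not in out:
                out.add(o)
                frontier.add(o)
    return out

print(sorted(superclasses("d20Die")))        # ['Die']
# A dice-pool supplement "imports" the core terms and extends them later:
triples.add(("Die", "subClassOf", "Randomizer"))
print(sorted(superclasses("d20Die")))        # ['Die', 'Randomizer']
```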

On language choices, everyone is entitled to their own opinion. I think initially it would make sense to use Python because:
1) It's cross-platform.
2) It's dynamic - early on we'll do lots of prototyping and refactoring, and static types would trip people up.
3) Code snippets are short and readable.
4) A lot of the existing RDF/OWL tools are written in Python, short-circuiting the parser problem. Sparta in particular is a cool little library.

For collaboration, a wiki makes the most sense. I'll put together a wiki, mailing list, and source code repository this weekend if no one else has done so by then (space on my website permitting).
 
