Character Generation [technical/theoretical]
MJEggertson (post #374127) said:

This is very fascinating, and I applaud the discussion so far. I would also love to be part of the email discussion. I can't rifle off a message right now (I'm at work), so I'll do so when I'm at home. But if any of you are composing something now, please add me to the list: mike@rpgprofiler.net.

I think there are two discussions happening here: one about the generator itself, and another about how to implement a totally customizable data/ruleset. For the moment I'm going to focus on the data/ruleset, and on some theory of how one would design a program that uses absolutely no hard-coded data.

We know the program has to be flexible, to allow for rule mods, splat books, and so on. The question is: how do we handle the data, and how do we handle the mods?

Sanglant's idea of a character being an entity collection is, I think, bang on. If we focus on d20 products, this is rather easily done: the system is designed around incremental changes to the properties (entities) of a character. But as a general theory, the question arises: how are these entities defined? We need to avoid a hard-coded approach, otherwise we've defeated the purpose of this discussion.
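To make that concrete, here is a minimal Python sketch of the entity-collection idea; the data layout and the apply_modifier helper are my own illustration, not anything proposed above:

[CODE]
# A character is nothing but a collection of named entities; an entity
# boils down to an arbitrarily nested structure of numeric/string data.
character = {
    "abilities": {"Str": 15, "Dex": 12, "Con": 14},
    "race": "human",
    "classes": [{"name": "fighter", "level": 1}],
}

def apply_modifier(character, path, delta):
    """Incrementally adjust one numeric property, d20-style."""
    node = character
    for key in path[:-1]:
        node = node[key]
    node[path[-1]] += delta

# A splat-book entry is then just data naming the property it changes.
apply_modifier(character, ["abilities", "Str"], 2)  # e.g. a racial +2 Str
print(character["abilities"]["Str"])                # 17
[/CODE]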
I use a program on a daily basis that I think we can learn a lot from. It's a raw numerical data-crunching tool, but it is very flexible, very customizable, and, most of all, ridiculously fast. Probably the biggest concerns when implementing a custom data library of undefined size are program speed, memory consumption, and general performance, as I think Luke mentioned. Even back in the day of the Mac II, when a few megs of RAM were hard to come by, this program could throw around data matrices a meg in size and operate on them in a few seconds, even with user-written 'scripted' tools. How?

Through a combination of compiled and scripted languages. The program has a C-like language that it reads. Depending on the context, this language can serve as a script that is parsed at run time and makes calls to other pre- or user-defined functions (acting essentially as a macro language), which is of course slow. But in another context, the program reads this language and compiles it at start-up, and the resulting functions/methods/objects are kept in memory and can be executed just as fast as any hard-coded function call or operation.
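As a rough sketch of that compile-once-at-start-up idea in Python (the rule source and the load_rules name are invented for illustration): the rule text is compiled into ordinary function objects once, and from then on they are called like any built-in function.

[CODE]
# Rule source would ship in an external data file, not in the app itself.
RULE_SOURCE = """
def ability_modifier(score):
    return (score - 10) // 2
"""

compiled_rules = {}

def load_rules(source):
    """Compile rule source once at start-up; the resulting function
    objects stay in memory and run like any other function afterwards."""
    namespace = {}
    exec(compile(source, "<rules>", "exec"), namespace)
    compiled_rules.update(
        (name, obj) for name, obj in namespace.items() if callable(obj)
    )

load_rules(RULE_SOURCE)
print(compiled_rules["ability_modifier"](17))  # 3
[/CODE]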
[QUOTE="MJEggertson, post: 374127, member: 845"] This is very fascinating, and I applaud the discussion so far. Also, I would love to be part of the email discussion. I can’t rifle a message off right now, I’m at work, so I’ll do so when I’m at home. But if any of you are composing something right now, please add me to the list, mike@rpgprofiler.net. I think there are also two discussions happening. One, about being a generator, and another about how to implement a totally customizable data/ruleset. I’m for the moment going to focus on the data/ruleset, and some theory on how one would go about the design of a program that uses absolutely no hard-coded data. We know the program has to be flexible, to allow for rule mods, splat books, etc. The question is how do we handle the data, and how do we handle the mods? Sanglant’s idea of a character being an entity collection I think is bang on. If we focus on d20 products, this is rather easily done. The system is designed around incremental changes to properties (entities) of a character. But as a general theory, the question rises, how are these entities defined? We need to prevent a hard-coded approach, otherwise we’ve defeated the purpose of this discourse. I use a program on a daily basis that I think we can learn a lot from. It’s a raw numerical data crunching tool, but it is very flexible, customizable, and most of all, it’s rediculously fast. Probably the biggest concern when implementing a custom data library of undefined size is program speed, memory consumption, and general performance, as I think it was Luke that mentioned. Even back in the day of MacIIs, when a few megs of ram was hard to come by, this program could throw around data matrices a meg in size and operate on them in a few seconds. Even on user-written ‘scripted’ tools. How? Through the combination of compiled and scripted languages. The program has a C-like language that it reads. Depending on the context, this language can serve as a script that is parsed at run-time, and making calls to other pre or user defined functions (acting essentially as a macro language), which is of course slow. But in another context, the program reads this language, compiles it at start up, and the resulting functions/methods/objects are kept in memory and can be executed just as fast as any hard-coded function call or operation. I think a project like this needs something similar to the above. AI routines, interpolative procedures are nice, but in the end, they will not be perfect, and run the possibility of introducing quirky behavior to the program. The only way to really accomplish this is to let the author of a dataset/tool/addon to define it themselves. The structure of the language that the theoretical application would use is irrelevant to the current discussion. It could be data-driven like xml, it could be language driven, like a C-style, whatever… But how to use it? To be flexible and be able to accommodate [u]any[/u] new material, the entire data set, the operations, and everything needs to be defined outside of the actual application. What your program essentially ends up being is a wrapper for a language of some type, that generates a UI. At start up, the program scans the data files, the procedure files, the rules files, compiles them, and once done, the program operates without lag or slowdowns, as if everything was hard-coded. Compiling at start up is time-consuming, but this can be worked around. Say, for example, you look at each file, and save an md5 digest or something. 
The program resolves conflicts by building a hierarchy of the data and methods it compiles. If the base rule set defines MethodA, but MethodA is redefined in the implementation of a splat-book mod, then the program determines which implementation takes priority.
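One simple way to get that behavior is a registry with an explicit priority per source; the numbers here are invented, with higher winning a conflict:

[CODE]
# method name -> (priority, function); the higher priority wins.
method_registry = {}

def register(name, fn, priority):
    """Install a method unless a higher-priority source already owns it."""
    current = method_registry.get(name)
    if current is None or priority >= current[0]:
        method_registry[name] = (priority, fn)

# The base rule set defines MethodA...
register("MethodA", lambda character: "base behavior", priority=0)
# ...and a splat-book mod, loaded with a higher priority, overrides it.
register("MethodA", lambda character: "splat-book behavior", priority=10)

print(method_registry["MethodA"][1](None))  # splat-book behavior
[/CODE]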
The bad side to the above is that it will obviously require anyone who wants to add a tool to be reasonably proficient at programming. And of course, the author of such a program needs to develop a language. But the language need not be complex.

What needs to be done? As I see it, you need to define three things. First, what a character is: this will be an entity collection. Second, what your entities are: in the end, they boil down to arbitrarily complex structures of numeric and string data. Third, some methods that operate on those entities: the methods are essentially operations performed on the entity data.
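Tying those three definitions together, a toy external definition file might look like the following; the JSON format and every name in it are purely my own invention for illustration:

[CODE]
import copy
import json

# Entities as numeric/string structures, a character as a named
# collection of them, and the methods that operate on the entity data.
DEFINITIONS = json.loads("""
{
  "character": ["abilities", "race", "classes"],
  "entities": {
    "abilities": {"Str": 10, "Dex": 10, "Con": 10},
    "race": "human",
    "classes": []
  },
  "methods": ["apply_modifier", "ability_modifier"]
}
""")

# A new character starts as a fresh copy of its declared entities.
character = {name: copy.deepcopy(DEFINITIONS["entities"][name])
             for name in DEFINITIONS["character"]}
print(character["abilities"]["Str"])  # 10
[/CODE]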