D&D General Requesting permission to have something cool

Oofta

Legend
Indeed; but I think @EzekielRaiden has the right of it here: turning BG3 loose on this to run simulations, using both pre-set and randomized variables, over and over again as a sort of AI-adjacent data mine seems far easier (not to mention faster) than doing any of it by hand.

But there are still a lot of variables. You could get a good idea of what works in BG3 based on the logic they programmed into the system, but that doesn't mean it would emulate real-world play. The AI behind enemies in BG3 is pretty amazing, but it's still programmed with certain assumptions and choices that work with those assumptions.
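To make the "run it over and over" idea concrete, here's a minimal Monte Carlo sketch of the kind of encounter simulation being described. To be clear, this is a toy model, not anything Larian or WotC actually runs; the stat lines and the always-attack policy are invented for illustration.

```python
import random

def d(n):
    """Roll one n-sided die."""
    return random.randint(1, n)

def run_encounter(party_size=4, pc_hp=30, pc_ac=16, pc_atk=5, pc_dmg=(1, 8, 3),
                  mon_hp=110, mon_ac=15, mon_atk=6, mon_dmg=(2, 6, 4)):
    """One fight to the finish. Returns True if the party wins.
    All numbers are made-up stand-ins, not real 5e stat blocks."""
    pcs = [pc_hp] * party_size
    while mon_hp > 0 and any(hp > 0 for hp in pcs):
        # Each living PC swings at the monster.
        for hp in pcs:
            if hp > 0 and d(20) + pc_atk >= mon_ac:
                n, die, bonus = pc_dmg
                mon_hp -= sum(d(die) for _ in range(n)) + bonus
        if mon_hp <= 0:
            return True
        # Monster swings back at a random living PC.
        alive = [i for i, hp in enumerate(pcs) if hp > 0]
        if d(20) + mon_atk >= pc_ac:
            n, die, bonus = mon_dmg
            pcs[random.choice(alive)] -= sum(d(die) for _ in range(n)) + bonus
    return mon_hp <= 0

trials = 10_000
wins = sum(run_encounter() for _ in range(trials))
print(f"Party win rate over {trials} fights: {wins / trials:.1%}")
```

Once a loop like that exists, you can sweep monster stats, party level, or party composition across millions of runs overnight, which is the part hand playtesting can't match. What it still can't capture is table behaviour: players who retreat, parley, or cheese the terrain.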

But I also think that it's pointless as anything other than a thought exercise because I don't see it ever happening. Maybe someday when we have an AI DM. :)
 


Lanefan

Victoria Rules
How is it a bad thing to have a CR system that actually tells you the general tendency of how difficult a monster is? How is it a bad thing to know that players could choose the Noodlergy or Saucery subclasses of Pastamancer and overall be statistically similar?

Your first question is bizarre; it is like asking, "Is it good to know if a machine works or not?" I would argue that, barring purposes which genuinely should not be pursued (e.g. "exterminate all life" or "enslave the minds of others" or other morally objectionable things), it is always better to know that something successfully achieves the purpose for which it was designed.
Achieving the purpose for which it was designed has little if anything to do with how fine-tuned the underlying math is.

1e, for example, has rather coarsely-tuned math (and a flatter power curve), which IMO is a feature, in that the equivalent of a CR 5 creature can be a challenge to a much wider range of groups than in the other extreme, that being 3.xe. Coarser math also allows variance in character levels within a party without the highers dominating or the lowers feeling useless, and means there's much less need to worry about wealth-by-level or tweaking the advancement rates or anything else: just do it.

Fine-tuning the math would ruin all that.
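The "flatter power curve" point is easy to see with a back-of-the-envelope calculation. The sketch below uses invented progressions (not actual 1e or 3.x numbers): a flat curve gaining +1 to hit every three levels versus a steep one gaining +1 per level, both swinging at the same fixed AC 15 monster.

```python
def hit_chance(atk_bonus, ac=15):
    """P(d20 + bonus >= ac), clamped to the 5%-95% natural-1/natural-20 band."""
    return min(max((21 + atk_bonus - ac) / 20, 0.05), 0.95)

# Toy progressions, invented for illustration:
# 'flat' gains +1 to hit every 3 levels; 'steep' gains +1 every level.
for level in range(1, 12, 2):
    flat, steep = hit_chance(level // 3), hit_chance(level)
    print(f"level {level:2d}: flat {flat:.0%} to hit, steep {steep:.0%}")
```

The steep curve blows past the monster by mid levels (85% to hit at level 11), while the flat one stays in the 30-45% band, which is the mechanical core of the claim that one creature can challenge a much wider range of groups.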
As for the second, the designers themselves. Hence I advocate for designers having clear design goals. They made the game; they decide what the stuff in it is supposed to do. (5e's pillars are not clear design goals, but they are important principles from which design goals can be built. For example, if "socialization" is a critical component of the game, design goals related to that could include "every class has at least one tool useful for contributing to social encounters." The fact that social encounters exist and are important is not, in and of itself, a clear design goal, but it gives the foundation for building clear design goals.)
Thing is, if the designers' goals don't happen to match those of a large enough segment of the intended audience, the result is a bit of a mess once that design hits the airwaves, and I need look no further than 4e for an example of just this.
As I have said repeatedly, there are many things in D&D (or any game) that cannot, even in principle, be tested with this kind of modeling. Those things will always require real humans, with thought and judgment, doing testing. But a sword is designed to do a certain amount of damage. A spell of level N is meant to do less damage than a comparable spell of level N+1 and more damage than a spell of level N-1. Two subclasses of the same class should, in general, be comparable in their contributions to the party. Etc.
On this, we agree. There's a real use for computer modelling if one's intent is to fine-tune the hard-crunchy bits and/or determine how often unwanted or unforeseen results might occur. At the same time, human testing is needed for other parts, as well as to push the exploit-and-loophole envelope far harder than a computer sim likely ever will.
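Those quoted properties (a level-N spell out-damaging level N-1, sibling subclasses staying comparable) are exactly the hard-crunchy bits a machine checks well. A minimal sketch, with invented damage expressions; the Noodlergy/Saucery dice are placeholders, not real content:

```python
import random

def avg_damage(dice, die_size, trials=100_000):
    """Monte Carlo estimate of mean damage for <dice>d<die_size>."""
    return sum(sum(random.randint(1, die_size) for _ in range(dice))
               for _ in range(trials)) / trials

# Invented 'comparable blast spell' line: a level-N spell deals (N+1)d8.
spell = {lvl: avg_damage(lvl + 1, 8) for lvl in range(1, 6)}

# Property 1: damage strictly increases with spell level.
lvls = sorted(spell)
assert all(spell[a] < spell[b] for a, b in zip(lvls, lvls[1:])), "not monotonic"

# Property 2: sibling subclasses land within 10% of each other.
noodlergy = avg_damage(3, 8)  # placeholder: 3d8 per round
saucery = avg_damage(4, 6)    # placeholder: 4d6 per round
assert abs(noodlergy - saucery) / max(noodlergy, saucery) < 0.10, "drifted apart"
print("all balance properties hold")
```

Run nightly against every stat block in the game, checks like these catch regressions for free; the judgment calls (is 10% the right tolerance? is damage even the right measure of contribution?) remain human work.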
 

Lanefan

Victoria Rules
But there are still a lot of variables. You could get a good idea of what works in BG3 based on the logic they programmed into the system, but that doesn't mean it would emulate real-world play.
Sure. It'd need a broader base of options for sure, with even the setting itself as a variable. But I think it could be done.

That said, were I designing the game I almost certainly wouldn't go this route, as I wouldn't be nearly as concerned about fine-tuning things as WotC seem to be.
The AI behind enemies in BG3 is pretty amazing, but it's still programmed with certain assumptions and choices that work with those assumptions.

But I also think that it's pointless as anything other than a thought exercise because I don't see it ever happening. Maybe someday when we have an AI DM. :)
That day's getting closer than we probably want it to. :)
 

Oofta

Legend
Sure. It'd need a broader base of options for sure, with even the setting itself as a variable. But I think it could be done.

That said, were I designing the game I almost certainly wouldn't go this route, as I wouldn't be nearly as concerned about fine-tuning things as WotC seem to be.

That day's getting closer than we probably want it to. :)

I sometimes debate whether an AI DM would be a good idea, especially when my wife volun-tells me that I'm going to be running a game for yet another group. An AI that generated concepts based on specifics about my campaign world and style would be pretty cool, though.
 

Pedantic

Legend
Achieving the purpose for which it was designed has little if anything to do with how fine-tuned the underlying math is.

1e, for example, has rather coarsely-tuned math (and a flatter power curve), which IMO is a feature, in that the equivalent of a CR 5 creature can be a challenge to a much wider range of groups than in the other extreme, that being 3.xe. Coarser math also allows variance in character levels within a party without the highers dominating or the lowers feeling useless, and means there's much less need to worry about wealth-by-level or tweaking the advancement rates or anything else: just do it.

Fine-tuning the math would ruin all that.
"Fine-tuning" doesn't mean anything intrinsically. If you want relatively small scaling and high variability in outcomes, you can just set those as design goals and then go achieve them. Perhaps it would require more rigorous work to achieve a different outcome, I think limiting progression makes it a lot easier to keep more outcomes within your range of expectation, and if you want high variability, then you just have to flatten the curve of outcomes by tweaking your inputs to get there, but you'd still benefit from running some simulations to ensure that's what you're getting.

You're changing the question you're asking, not the process of answering it.
Thing is, if the designers' goals don't happen to match those of a large enough segment of the intended audience, the result is a bit of a mess once that design hits the airwaves, and I need look no further than 4e for an example of just this.

On this, we agree. There's a real use for computer modelling if one's intent is to fine-tune the hard-crunchy bits and/or determine how often unwanted or unforeseen results might occur. At the same time, human testing is needed for other parts, as well as to push the exploit-and-loophole envelope far harder than a computer sim likely ever will.
Again, your objection isn't to the process of design, it's to the design goal, and those are different things. It's weird to go after tools because they could be used to produce something you don't like. Frankly, if your goal is high-variability play, I'd be worried that outcomes are too normative, and want some testing to ensure the distribution was producing unlikely events on a regular basis. It would be all too easy, and very human, to accidentally produce a less random game than intended.
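That last worry is itself testable. If the design goal is, say, "about 5% of fights should be blowouts," you can count tail events directly instead of trusting intuition. A sketch with made-up dice standing in for a real encounter simulation:

```python
import random

def encounter_margin():
    """Toy outcome score: damage dealt minus damage taken, using
    invented dice. Stands in for a full encounter simulation."""
    return (sum(random.randint(1, 10) for _ in range(5))
            - sum(random.randint(1, 10) for _ in range(5)))

TARGET_TAIL_RATE = 0.05  # hypothetical design goal: ~5% wild swings
BLOWOUT = 20             # arbitrary threshold for a 'wild swing'

trials = 50_000
tails = sum(abs(encounter_margin()) >= BLOWOUT for _ in range(trials))
print(f"observed blowout rate: {tails / trials:.1%} (target {TARGET_TAIL_RATE:.0%})")
```

With these toy dice the observed rate comes in around 3%, under the 5% target, which is exactly the "less random game than intended" failure mode: the distribution looks swingy to the eye, but the tails are thinner than the goal calls for.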
 

Scribe

Legend
Edit: And if WotC thinks they can reduce their (slow, expensive, time-consuming) in-house playtesting by having a computer run a bazillion combats overnight, they'd do it. Cutting costs has been one of the big watchwords of 5e's lifespan; for the first several years, they were operating with a skeleton crew. Given the recent high-profile outrage over the attempt to invalidate the OGL, a quiet, completely internal method to speed up playtesting and reduce manpower expenditure sounds like it would be very, very tempting to the current management of WotC.

If they released their math and assumptions, this would be coded and built out on a website by the time they woke up the next day, I'm sure.
 

Lanefan

Victoria Rules
"Fine-tuning" doesn't mean anything intrinsically. If you want relatively small scaling and high variability in outcomes, you can just set those as design goals and then go achieve them. Perhaps it would require more rigorous work to achieve a different outcome, I think limiting progression makes it a lot easier to keep more outcomes within your range of expectation, and if you want high variability, then you just have to flatten the curve of outcomes by tweaking your inputs to get there, but you'd still benefit from running some simulations to ensure that's what you're getting.

You're changing the question you're asking, not the process of answering it.

Again, your objection isn't to the process of design, it's to the design goal, and those are different things.
They're somewhat tied together, though, in that different goals are probably best achieved via different processes. What I'm trying to get at here, however, is that the computer-sim process is valid even if the goal they'd be using it for isn't one I'd like to see reached.
 
