buzz said: I'm going to reiterate my earlier point that if you're going to define "fudging" so broadly as to encompass everything GMs are given responsibility for in most well-known RPGs, and now even include material from the publisher, then any meaningful discussion is going to be impossible.
That's an absurd rendering of my position. On the other hand, it does speak to my point that lots of fudging/"cheating" is really hidden, and that many systems simply incorporate an identical process.
Adjudication (i.e., applying the ruleset) is not fudging. Adjudication is the GM's job.
This implicitly postulates some objective way to render rulings, detached from play as it happens, that not only doesn't exist but probably can't. I've seen lots of D&D games. DMs screw with rules interpretations on the fly to affect outcomes *all the time.* In fact, I would say that this way of doing things is *more* common than making objectively correct rulings. There is even a tradition of player empowerment when it comes to controlling the narrative. It's called "rules lawyering."
There is *no* difference between this and fiddling with a d20 roll to change the outcome, except that I guess people can pretend they beat some kind of objective challenge if the situation gets screwed with instead of the die roll.
Fudging, as I am using it, is when the GM applies the ruleset and then ignores the results, overtly or covertly, in order to impose their will. "I think it would be cooler if X happened, so I'll just pretend I rolled a 20."
Well, the problem is that the thrust of your argument doesn't hold up when you explain *why* fudging isn't desirable. If fudging is undesirable by your rationale, then many typical things in games are also undesirable.
The other problem with your argument is that it is incoherent regarding what actually happens in games. You have said that a well-designed system won't need fudging because it will always output good stuff. But this assertion just isn't true. The best a system can do is provide a trend that looks good over time when it comes to a subset of the things participants might do. Unfortunately:
* You cannot derive expectations for how individual instances of play will turn out from these trends. It's a fallacy to believe you can assess a system's robustness this way. Therefore, you can't make any coherent claims about whether a system "needs" fudging, because you don't know what the output will be in a single instance or a chain of instances. Yes, it is part of the fun to see where the dice go -- but the source of that fun is incompatible with making anything but fairly weak predictions about what *will* happen -- and I doubt that *any* game can be run coherently without some predictions.
[D&D has a neat dodge for this by making things increasingly deterministic as you go up in level -- the bonuses increase to a point where the actual dice roll becomes less important outside of a certain range. There's a quick numeric sketch of this below.]
* It is impossible to fully playtest any traditional general purpose RPG system. There's a reason they don't lay off R&D people once the design is done. There are just too many possible interactions between elements.
Keep in mind that this is different from games where there is a problem with the procedure itself and no clear guidance about when or what to roll.
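To put a rough number on that bracketed point about determinism, here's a minimal Python sketch. The bonuses and target numbers are made up for illustration and aren't meant to match any particular edition's actual tables; the idea is just that as the bonus climbs relative to the target, fewer and fewer faces of the d20 can actually change the result:

```python
def hit_chance(bonus: int, target: int) -> tuple[int, float]:
    """Faces of a d20 that succeed when d20 + bonus must meet or beat `target`,
    and the corresponding probability (no nat-1/nat-20 special rules)."""
    needed = target - bonus                    # lowest die face that still succeeds
    faces = max(0, min(20, 20 - needed + 1))   # clamp to the die's 20 faces
    return faces, faces / 20

# Made-up bonus/target pairs, roughly "low level" through "high level".
for bonus, target in [(2, 15), (8, 15), (15, 18), (20, 18)]:
    faces, p = hit_chance(bonus, target)
    print(f"+{bonus} vs {target}: succeeds on {faces}/20 faces ({p:.0%})")
```

At +2 against 15 the die decides nearly everything; at +20 the roll is a formality. That's the "certain range" shrinking until the outcome is effectively fixed before anyone touches the dice.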
To paraphrase something awesome Mearls once said, a good rule is one that makes play more fun than it would have been without it.
My corollary would be that every rule that does that has the potential to be no fun as well. The only difference, in the end, is that somebody wrote it down, invoking a social convention that makes the outcome feel better to participants.