And I just see this as, frankly, legitimizing Extremely Lazy Design. I don't really have a whole lot to say to the preceding stuff, because it all ultimately hinges on this specific problem. I think a game that is being sold to people shouldn't be "and now it's your job to actually build a game out of this", unless it is explicitly sold as a build-your-own-game product.
D&D hasn't been that since at least WotC editions. I'd argue it hasn't been that since 2nd edition, given certain turnarounds Gygax had that bled into the game design, but whether that's true or not is a philosophical debate I'm not interested in right now.
Not a legitimisation. A description. D&D has a staggering number of explicit options. Also, trying to run it based on only what is written would reveal it as an utterly incomplete and unplayable mess. I would claim it absolutely relies on the GM and the group to impose some structure not explicitly stated in the rules to make it playable.
Is it lazy design? Perhaps. Still, if we are to talk about how this game that dominates (when can I start saying dominated?) our hobby actually works, that is more or less impossible without nailing down some parameters ourselves.
But this just makes the whole thing circular. The narration took the form it did because of the skills, but now you're saying we know it must be about skill because of how it was narrated--you've inverted the causation to conclude that it did the right thing. That's circular logic.
Well, to be fair, it could have been narrated that way due to pure luck. This is not circular reasoning, it is Bayesian reasoning. We are trying to determine whether the narration took its form because the referee took skill into account. You appeared to reject this notion. We cannot directly observe the referee's mindset, hence the truth value of the claim "the GM used skill as an input to narration" (A) is unknown. However, we do have an observation in terms of what was narrated (B).
My observation was that the probability of a narration with the characteristics of B if A was true appeared much higher than the probability if A was not true, that is, P(B|A) >> P(B|not A). A simple application of Bayes' theorem shows that this requires either that P(A|B) is much greater than P(not A|B), or that P(not A) is much greater than P(A) (or both). In the first case my assessment that A is true is well founded, given that we observe B (though it could be dumb luck).
However, if it is in general very unlikely that A happens at all, then P(B) = P(B|A)P(A) + P(B|not A)P(not A) is going to be small, as both terms contain a factor that is known to be much smaller than something else (and hence small). And if that is the case, I wonder why you brought up such an unlikely scenario as an example? (Then again, I guess this could be just dumb luck as well...)
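To make the arithmetic concrete, here is a minimal Python sketch of the Bayes' theorem calculation above. The specific numbers (0.9, 0.05, 0.5, 0.01) are made up purely for illustration and are not anything established in this discussion:

```python
# Illustrative Bayes' theorem calculation with made-up numbers.
# A = "the GM used skill as an input to narration"
# B = "the narration had the characteristics we observed"

def posterior(p_A, p_B_given_A, p_B_given_notA):
    """Return P(A|B) and P(B) via Bayes' theorem."""
    p_notA = 1.0 - p_A
    # Law of total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
    p_B = p_B_given_A * p_A + p_B_given_notA * p_notA
    p_A_given_B = p_B_given_A * p_A / p_B
    return p_A_given_B, p_B

# Case 1: skill-based narration is not assumed to be rare (P(A) = 0.5).
# The strong likelihood ratio dominates and P(A|B) comes out high.
print(posterior(p_A=0.5, p_B_given_A=0.9, p_B_given_notA=0.05))
# -> P(A|B) ~ 0.95, P(B) ~ 0.47

# Case 2: A is assumed to be very rare (P(A) = 0.01).
# The posterior is no longer overwhelming, and P(B) itself is small,
# i.e. the observed narration would be an unlikely event overall.
print(posterior(p_A=0.01, p_B_given_A=0.9, p_B_given_notA=0.05))
# -> P(A|B) ~ 0.15, P(B) ~ 0.06
```

The second case is the point made above: if you insist that A is a priori very rare, then B itself becomes an unlikely observation, which is what raises the question of why such a scenario was offered as the example in the first place.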
I don't have a problem with this sort of thing. I find it perfectly acceptable.
But you, and others, have specifically brought up the problem of retrocausal situations. Hell, it literally just came up in a conversation I had with someone else. The idea that the failed lockpicking roll "creates" the person walking down the hallway to find it. That's exactly what is happening here. The failed perception roll creates the distractedness of the character--but that distractedness had to have been the cause of the bad result, not the effect of it.
Such retrocausal resolution is deeply offensive to pretty much everyone I've spoken to who advocates vociferously for simulation.
Ah, ok. I am not in the "retrocausality is an issue" camp. My observations, in the situations where this has been a controversy, have been related to the correlations such resolution produces if used systematically over time. I don't think this particular instance of "retrocausality" produces that kind of problematic correlation. It could hence be interesting to see what anyone actually speaking up against the concept of retrocausality itself thinks of this approach.
