Well, not necessarily....see the mention of bad models in recent posts.
But aside from that...the two systems we’ve been discussing each have a nod toward the real world. One mirrors real-world sequentiality (it is a word!) and the other mirrors a criminal's ability to effectively plan for a crime.
Is one of these objectively better than the other? Or is it just a matter of preference?
Okay...then how do you quantify this? Both the approaches above are reflective of reality as we know it.
Quantifying it is very easy in one respect at least: whether or not things happen at the table in the same order as they would in reality; or - put another way - whether cause and effect at the table and in the fiction mirror what would happen in real life.
The end-result tale of the score that appears in the game log is (or certainly can be) perfectly reflective of reality in either system, but the end tale isn't the point. The point here is the process: whether reality is reflected (as best as reasonably possible) in the moment as things occur, which D&D in this case does better than BitD simply due to sequentiality. Choices are made up front, and for better or worse those choices may have consequences later.
Yes, a D&D character might well have decided to pack some meat along, and on meeting the dog she'll be happy she did so. This is fine. She just as easily might not have brought any meat, and thus ended up with an unexpected and possibly unsolvable problem. This is also fine.
With BitD this same scenario can't happen as long as the PC has a slot left, as that slot can be used at any time of the player's choosing to bypass a problem.
The "at any time of the player's choosing" bit is the meta-part: on seeing the dog the character doesn't suddenly choose to have meat in her pack, as it's far too late in the fiction to be making such choices. The player, however, can make this choice here and now at the table but retroactively in the fiction (i.e. in the fiction the meat was there all along); and the ability to make such a retroactive choice is what pulls it into the meta realm. Retroactive choice-making doesn't happen in reality unless you've got a handy time-travel machine stowed somewhere (and if you do, I want on!).
Which leads to another issue. The meat example doesn't really work here, but I'll use it anyway: though it'll perhaps sound a bit absurd, I ask you to look at the intent rather than the admittedly silly example itself.
The issue is this: if, after having done 90% of the score, the PC encounters a dog and declares she's using her last item slot on meat to feed it, this means she had the meat with her the whole time. But what if the presence of the meat could have had some other significant effect or consequence earlier in the score, had it been known about then? E.g. (and here's the silly example) what if the door to the loot chamber had a trap on it that sounded an alarm if any non-living meat entered the space?
In a cause-and-effect based D&D-like system this all takes care of itself: the GM knows (or can ask) what the PC has on hand when she reaches the trapped door and can take appropriate measures at the time, e.g. call for a traps roll or narrate the alarm going off or whatever.
But where the dog (and thus the meat) haven't come up yet, and the character's items-in-slots at the time of passing through the door thus don't include any meat, the GM has no reason to do anything with the meat-based trap. Then, when the PC uses that last slot for meat, the GM is suddenly stuck saying "By the way, your doing this means we'll have to retcon half the score 'cause that meat would have caused a problem earlier", which is utterly awful on many levels.
Unless, of course, it's taken as a given that the meat (or any other item) doesn't actually appear in her pack until the player uses the slot for it at the table, at which point it just shows up. This might be fine as a game mechanic, and it avoids all those messy retcon possibilities, but it blows away much hope of reflecting reality.