A discussion of metagame concepts in game design

Ovinomancer

No flips for you!
Sure, but if my 6th grader tells me they learned about kingdoms and phyla in science class, and I tell them "Actually, that wasn't a real science class, you're really in more of 'learning basic background observations that will someday let you do actual science' class", I'm still kind of a douche. :)
Yes, you would be. Not sure where you get that I would advocate such a thing.

The fact that a more precise definition of "science" exists doesn't render the more casual usage any less useful. There's nothing wrong with referring to a guy who spends 5 years in the Amazon rain forest cataloging the different kinds of ants that live there as a "scientist" even if he doesn't make a single hypothesis.

Yes, it does make the casual usage less useful. This should be evident. Less useful doesn't mean not useful, though, hence the casual usage. However, as this discussion isn't about the casual usage but about the precise one, I'm not sure what you're trying to get at, except the strange implication that I'd say 'not science' to a 6th grader talking about what they've learned at school. Having had my own 6th grader, I have evidence that I do not.
 


Maxperson

Morkus from Orkus
I'm sorry, I'm still a bit thrown by the oxymoron that is "serious astrologer".

There is no oxymoron. Just because you don't take it seriously doesn't mean that they and others do not.

[I]Complexity[/I] is not the same as falsifiability. No matter how many and varied the calculations an astrologer makes, it's all for nothing if their prediction is phrased in such a way that its truth value is open to interpretation after the fact. No horoscope I've read, be it ever so "serious", has ever clearly stated, "We will be able to observe Event E happening at Time T and Place P, and if we do not observe it, then I am wrong and will need to modify or discard the theory under which I made this prediction."

The horoscope my mother had done about me when I was born said very clearly that I would either become an interior decorator or a military officer when I became an adult. That's a very specific prediction. The closest I came to one of those two professions was when I was in my mid 20's. For a couple of days I gave serious thought to joining the air force. So much for predictions being accurate. Clearly stated, yes. Correct, no. And clearly able to be established as false. I'm not now, nor will I ever be, one of those professions.
 

[MENTION=16814]Ovinomancer[/MENTION], we need to stop. This is clearly getting out of hand. Look, I get it. I know how annoying it can feel to have some layman lecture you about the nature of your own damn field. (And if I didn't earlier, I certainly would have once you started talking about history.) But your tone is getting more and more openly abusive, while at the same time the positions you're abusing are slipping further and further from what I'm actually saying. There have already been a couple of flat-out "That's not what I said" moments over the course of this conversation, and if I kept responding to your most recent posts point-by-point as I have been, there'd be a few more. As for my own contribution to this mess, I can't know exactly how derisive and unfair I've come across to others, but looking back I can't imagine I made a good showing. For that I am sorry.

Let's look at how else we could be carrying on. One example really jumps out at me. I gave a shorthand summary of science as "seeing what works". In your response to that, you describe science as "a tool to learn what is true". You seemed to believe that I meant something totally different from and opposed to what you meant, but come on, just look at it: "seeing what works", "a tool to learn what is true". Doesn't it seem possible that we're using modestly different phrasing to say the same thing? Or at least that there is some common ground there, enough to build something on rather than sniping at each other over? I honestly don't understand how you could say that something is "a tool to do X" and deny that it's outcome-oriented; X is obviously (to me) the outcome in question. But it's clear you just as honestly don't understand how I could deny that such a thing is process-oriented. I think we have different understandings of what it means to be "outcome-oriented" or "process-oriented" more than we actually disagree on what science is. And I'm still interested in understanding your meaning. But only if we can bring this back to someplace civil and constructive. Deal?
 

The horoscope my mother had done about me when I was born said very clearly that I would either become an interior decorator or a military officer when I became an adult. That's a very specific prediction. The closest I came to one of those two professions was when I was in my mid 20's. For a couple of days I gave serious thought to joining the air force. So much for predictions being accurate. Clearly stated, yes. Correct, no. And clearly able to be established as false. I'm not now, nor will I ever be, one of those professions.
Thank you. Yes, you're absolutely right, that's a falsifiable prediction. I haven't seen a horoscope like that.
 

Maxperson

Morkus from Orkus
Thank you. Yes, you're absolutely right, that's a falsifiable prediction. I haven't seen a horoscope like that.

I haven't had much opportunity to see horoscopes from serious astrologers, mostly because I don't give astrology the time of day. I mean, I know my signs so that I can answer the usual questions, but beyond that... The one my mother had done is the only one that I've seen, and I was rather surprised at how many specific things it included. Before that, my only experience was glancing periodically at the newspaper or online horoscopes for a chuckle. And yes, those are incredibly vague. "Keep an eye out, because something, somewhere will make you smile within the next month." and the like.
 

Ovinomancer

No flips for you!
[MENTION=16814]Ovinomancer[/MENTION], we need to stop. This is clearly getting out of hand. [...] I think we have different understandings of what it means to be "outcome-oriented" or "process-oriented" more than we actually disagree on what science is. And I'm still interested in understanding your meaning. But only if we can bring this back to someplace civil and constructive. Deal?

I'm not trying to be abusive. I actually like and generally respect your posts -- you're one of the posters whose handle makes me perk up when I see it, as you usually have an interesting take on whatever's being discussed. I usually try to avoid the fisking style of discussion, as it tends to lead to gross mischaracterization of points and descend into tit for tat, but I failed here. I also apologize for my contributions to the tone.

Moving forward: process-oriented means the process is the important part. If the process isn't followed, the results are usually diminished or useless (luck happens sometimes). Outcome-oriented, on the other hand, really is about results over process -- it doesn't matter how you did it if it worked. A good example of this is sports: it doesn't matter how you score, the score is the thing. Bringing this around to science, there's no amount of being right that justifies not following the method. The method is what allows science to be ultimately self-correcting* and to be shown to others, so that they can believe the outcome rather than have to accept it on faith. Being correct in a prediction doesn't require science, so science cannot be just being correct in a prediction (and, arguably, some of the most important science done has been showing predictions to be wrong).

This is why I say outcomes cannot be the definition of science. Ultimately, science is not just the result but also the completely reviewable chain of evidence that led to that result.

And, to say it again, statistics is subjective in all cases -- you choose a model to use, with its associated parameters, because you think it will provide the best fit. That choice, the choice of model, is subjective, and each choice provides differing answers. Further, if you select the data input to your model based on how you think it performed rather than by a rule (and Silver evaluates polls individually), then that is also subjective. This isn't to say that the subjective choice cannot be informed or subject to expertise, but, by nature, it isn't dealing with clean data, and anything you do past the initial step is working with modeled data rather than real data. Silver is exceedingly good, and I certainly do not recommend dismissing him, but what he does is fundamentally not science, in the same way that sports betting isn't science.


* Although, there's likely a case where any tool used by humans cannot self-correct at a certain point -- a hammer cannot exceed the skill of the carpenter.
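To make the model-choice point above concrete, here's a toy sketch -- my own illustration, not anything Silver actually does, and the data are made up: the same five "poll average" points, fit with two perfectly defensible models, give two different forecasts.

[CODE]
# Minimal illustration that model choice is itself a subjective input:
# identical data, two reasonable models, two different answers.
import numpy as np

# Hypothetical poll-average series over five weeks (toy numbers).
x = np.array([0, 1, 2, 3, 4], dtype=float)
y = np.array([48.0, 48.5, 47.9, 49.2, 50.1])

linear = np.polyfit(x, y, deg=1)       # model choice A
quadratic = np.polyfit(x, y, deg=2)    # model choice B

print(f"week-5 forecast, linear model:    {np.polyval(linear, 5.0):.2f}")
print(f"week-5 forecast, quadratic model: {np.polyval(quadratic, 5.0):.2f}")
# Both fits are reasonable; the data alone don't pick between them.
[/CODE]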
 

[...] my only experience was glancing periodically at the newspaper or online horoscopes for a chuckle. And yes, those are incredibly vague. "Keep an eye out, because something, somewhere will make you smile within the next month." and the like.
It's not just them. This one is from a certain 17th-century astrologer named Galileo Galilei, concerning his daughter Virginia:

"The Moon is very debilitated and in a sign which obeys. She is dominated by family relationships. Saturn signifies submission and severe customs which gives her a sad demeanour, but Jupiter is very well with Mercury, and well-aspected corrects this. She is patient and happy to work very hard. She likes to be alone, does not talk too much, eats little with a strong will but she is not always in condition and may not fulfil her promise."

Which, in retrospect, actually seems uncannily accurate: she became a nun and is best known for her devoted and supportive correspondence with her father, in which she shows considerable intelligence that was probably not being best fulfilled in a convent. But, y'know, that's how this works. She could have done a lot of different things and it would still have looked accurate. Definitely not falsifiable. (Also, she was in a convent because Dad sent her there as a child.)
 

So how do you guys who share my sentiments (or at least have some sympathy for them) handle these things? What house rules have you developed? Is the game salvageable for someone like us?

I have similar sentiments regarding D&D (I'm more flexible about certain other systems), and I'll say that you can't totally get rid of metagaming in 5e. However, overall I just think 5e is the best D&D, so I'll tell you what I do (or think could be done) to minimize the problematic elements.

I'll start with your examples and go from there.

1. The player chooses the number of hit dice to apply towards healing during a short rest. There seems to be no analog for the character. There also seems to be a resource being consumed, but what is that resource? Potential healing?

The best way is to look at current hit points as your ability to soak up immediate trauma, and HD as representing the rest of your hp. Basically, you can only handle taking about half of your total resiliency's worth of damage in a short period of time, but if you rest up, you can get back into the action.

To minimize metagaming, there are two possibilities. The first is to just say that you have to use HD at the first opportunity. You might make an exception for when you are within 1 or 2 hp of your maximum, but otherwise you have to use them.

The second (and compatible) option is to use the optional Healer's Kit dependency rule from the DMG (you must use a healer's kit to spend HD during a short rest). That means spending HD represents the benefit you get from bandaging yourself up and applying ointments and such.

I also use Slower Recovery (puts healing on par with say 3e, or even a bit slower).

So the character chooses to pull out the healer's kit and bandage up, and that gives them a bit more strength to get into the fight. They can only benefit from so much of this before they are worn out and need more than bandages (ie, long rests).

Overall, I haven't found my players using HD unless they plan to get themselves at least close to full, and the kit dependency helps with encouraging that style I think.
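If it helps to see the arithmetic, here's a toy sketch of how this plays out at the table. The per-die roll (one die plus Con modifier) is the standard rule; the "stop when you're within a couple hp of max" part is my house rule from above, and all the specific numbers are made up for illustration.

[CODE]
# Toy model of spending Hit Dice on a short rest: each die spent
# heals 1dX + Con modifier. House rule: keep spending until you're
# within 2 hp of max or out of dice. Numbers are illustrative only.
import random

def short_rest(current_hp, max_hp, hit_dice, die=10, con_mod=2):
    while hit_dice > 0 and max_hp - current_hp > 2:
        current_hp = min(max_hp, current_hp + random.randint(1, die) + con_mod)
        hit_dice -= 1
    return current_hp, hit_dice

random.seed(1)
hp, dice_left = short_rest(current_hp=14, max_hp=44, hit_dice=5)
print(f"after the rest: {hp}/44 hp, {dice_left} Hit Dice left")
[/CODE]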

2. Action surge. Why is this limited (besides game balance) early on to once between short rests? Can a fighter really only once in the course of a battle choose an exact moment to make an extra effort and then not again? This again seems like the player is choosing something the fighter would know nothing about.

3. Second Wind. A player decides to give his character a surge of energy. The character just gets it, apparently unexpectedly. It happens in the fast and furious furor of combat, so it's not even something the character could think about much.

I don't think it's too much of a stretch to say they can only pull these off occasionally and that they might be choosing the moment to do them. But it is more difficult to explain than the HD, and no optional rules are provided.

I just go with it, and it works fairly well as an in-character choice.

Mechanically, however, Second Wind can be problematic because a fighter can just sit around for a few hours doing it repeatedly and recover all his hp (though not HD). I've house-ruled that a character can benefit from no more than 4 short rests per day, primarily to address Second Wind, but it also works for other things that might be problematic. So far, we've never run up against a situation where we would exceed those 4, and it does fix the conceptual problem.
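For the arithmetic behind that cap: Second Wind restores 1d10 + fighter level per use, so with unlimited hour-long rests a fighter can out-heal almost any hp total in an afternoon. A quick expected-value sketch (the level and the cap values are just examples):

[CODE]
# Expected Second Wind healing per day under a short-rest cap.
# Second Wind heals 1d10 + fighter level (average 5.5 + level) per rest.
fighter_level = 5
avg_heal = 5.5 + fighter_level
for rests in (4, 8, 12):
    print(f"{rests:2d} short rests -> ~{rests * avg_heal:.0f} hp recovered")
# Capping at 4 rests holds a 5th-level fighter to ~42 hp/day from
# Second Wind alone; uncapped, the total grows without limit.
[/CODE]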

You didn't mention Battle Master maneuvers, which are similar to Action Surge and Second Wind. You have a single pool of Superiority Dice you spend on the maneuvers, which replenishes on a short rest, so at least the recharging isn't per maneuver. By the book, you don't have to declare using the maneuvers until you hit (or it otherwise comes up). That is simple to fix, however. You just say the player has to declare what maneuver they are attempting, and if they fail to hit, then the opportunity to complete the maneuver just didn't occur, and hence the full effort wasn't expended (so the die isn't lost). Battle Masters are masters of this stuff, so they know when not to overexert themselves. Of course, you could go hardcore and say the die is expended whether or not the maneuver succeeds, but I think that's unnecessary. The simpler method rarely changes the way the game goes (or balance) to any noticeable degree.
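If you want to convince yourself the declare-before-with-refund reading changes nothing mechanically, note that under either reading a die ends up spent exactly when an attack hits. A trivial sketch (the hit chance is a made-up assumption):

[CODE]
# "Declare before the attack, refund on a miss" vs. "declare after a hit":
# under both readings a superiority die is spent if and only if the
# attack hits, so the long-run spend rate is identical. p_hit is made up.
import random
random.seed(0)

p_hit, trials = 0.65, 100_000
spent_before = sum(random.random() < p_hit for _ in range(trials))  # refunded on misses
spent_after = sum(random.random() < p_hit for _ in range(trials))   # only spent on hits
print(spent_before / trials, spent_after / trials)  # ~0.65 either way
[/CODE]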

In fact, I've decided not to worry about it and just let the player declare after success... but my player rarely remembers that and usually declares before taking the action anyway. :)

Another potentially problematic one is the rogue's Sneak Attack, which you don't have to declare until after the attack hits (though based on other rules, I believe the intent is that you declare before you roll damage, so you can't wait to see whether an opponent dies before deciding to use Sneak Attack; that particular problem doesn't arise), but which you can only use once per turn. However, when the rogue misses on the first attack and gets to try again on the second one, that's where it gets weird.

One interpretation is to say that the rogue can spot when the opportunities are there to make a hit be a really good hit. You could say the player has to declare that their character is going for a Sneak Attack before the roll, which shouldn't change how things play out.

Another interpretation of why this is once a turn is that it takes more focus than you can consistently pull off when you are stabbing quickly. I can relate to that general sort of restriction from playing action video games. It really does take more focus than I can maintain to play "at my best" every moment. This doesn't work as well as the first interpretation, but works well in combination with it.

Sneak Attack in 5e works against everything, so it needs to be interpreted a bit differently anyway.

I don't worry about requiring a pre-declaration on this one. I assume the character is always trying to make the best hit they can (ie, they use Sneak Attack at the first opportunity--which my players actually do, so it works), and they just can't maintain the focus on every attack.
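As a quick sanity check on why the declaration timing barely matters here: with two attacks a turn, the once-per-turn Sneak Attack lands on most turns regardless. A toy probability calculation (the hit chance is a made-up example):

[CODE]
# Chance that a two-attack rogue lands at least one hit per turn
# (and thus gets the once-per-turn Sneak Attack). p_hit is illustrative.
p_hit = 0.60
p_sneak = 1 - (1 - p_hit) ** 2
print(f"P(at least one hit in two attacks) = {p_sneak:.0%}")  # 84%
[/CODE]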

4. Inspiration. Since this part of the game is pretty optional (and my guess is anyone close to my thinking ignores it anyway), it's not that big a deal.

Yeah, it's definitely unapologetically metagame. Ignoring it works perfectly fine.

Overall, 5e design seems to have embraced more metagaming than most D&D, but not really done what 4e did. Or at least, when they borrowed ideas from 4e in those regards, they attempted to tone them down so you could creatively interpret them (like I described above).

If you can work with the sorts of explanations I gave above, you can probably work with pretty much all of the metagame elements in 5e without a serious problem.

Probably one of the biggest things that bugs me is how learning cantrips works. You are limited to only ever knowing a small number of what are supposed to be the simplest spells, even when you can know everything else on your class list automatically (clerics/druids), or theoretically learn them all, including wish (wizards). Drives me crazy. My house rule is that prepared casters can use a spell preparation slot to prepare other cantrips (which means wizards can add them to their spellbook like any other spell). Prepared cantrips can still be cast at-will. It's not really a problem with casters who know a limited number of spells anyway, though I would allow them to switch a cantrip when they level up, since they can do so with spells of other levels.

This one bothers me so much from a suspension of disbelief perspective that if I'm playing a prepared caster in someone else's campaign I would actually be so bold as to request my house rule be in effect. I'm not sure I could stomach playing a wizard otherwise.

In practice, no one has ever used my house rule for that. Spell preparations are so precious that my players just haven't wanted to take up a slot with a cantrip, even though it would add an at-will spell to their complement.

I haven't looked at Pathfinder 2 yet. Too crunchy for me, though I'll probably at least scan the SRD to look at the design elements.
 

pemerton

Legend
Sure, but if my 6th grader tells me they learned about kingdoms and phyla in science class, and I tell them "Actually, that wasn't a real science class, you're really in more of 'learning basic background observations that will someday let you do actual science' class", I'm still kind of a douche. :)

The fact that a more precise definition of "science" exists doesn't render the more casual usage any less useful. There's nothing wrong with referring to a guy who spends 5 years in the Amazon rain forest cataloging the different kinds of ants that live there as a "scientist" even if he doesn't make a single hypothesis.
Well, for what it's worth, I don't think this is just about manners.

There's a small matter of usage - if everyone in his day described Joseph Banks as a scientist, and made him President of their most important scientific society for more than 40 years, it seems odd to deny that he is one.

But there's also the issue of accurately describing a human practice. Science is a human practice aimed at generating a body of knowledge that is systematised and (in part because of that) disseminable and usable. Hypothesis formation, and testing by way of experiment (= controlled observation and measurement), is one way of generating such knowledge. Careful observation and measurement of natural phenomena is another. Such observation and measurement does at least three things:

(i) in itself, it may produce systematised, disseminable and usable knowledge (think of how important scientific cartography, surveying, etc is to much of contemporary life, from road maps and GPS to urban planning to international trade);

(ii) it may help with hypothesis formation (eg it seems unlikely that anyone would start thinking about plate tectonics without first having the data provided by scientific cartography);

(iii) it may help with hypothesis confirmation or enrichment (eg the way that Joseph Banks' collection and classification of botanical data helps show the utility of, and further develop techniques of, biological categorisation, which are themselves a, perhaps the, major part of biological knowledge before the invention of modern biochemistry over the past 50 to 100 years).

The idea that science is equivalent, in some confined sense, to the scientific method as it is taught in high school or first-year lectures is inaccurate as a matter of history, is misleading about the nature and richness of the bodies of scientific knowledge that have been developed over the past 400-odd years (much longer for astronomy, of course), and leads to a type of methodological fetishism that generates distorted descriptions (eg astronomical observation gets described as "experimentation" when it obviously is not) and prioritises a certain privileged set of means (the classic chemistry or physics lab) over the actual ends of science (a body of systematised, disseminable and usable knowledge).
 

Ovinomancer said:
I disagree history could ever be a branch of science -- there's no way to experiment.
It's certainly never gonna be a hard science.
Pemerton said:
My opinion of history is that it is not a science

FWIW, here are my 4 a.m. insomniac thoughts. I won't make any ambitious prophecies about eternity, though :p

1. History is an intrinsically trans-disciplinary discipline; I would argue that in order to practice "good" history, reasonable fluency is required in sociology, psychology, anthropology, philology, climatology etc. Further, history makes routine use of data gathered by scientific methods: carbon dating, materials science, dendrochronology, core samples, DNA evidence, molecular archaeology etc. This does not make history science, as historical judgements still proceed from inference; what it does do, however, is provide lots of data, so hold that thought for a second.

2. Caveat: We are pattern-seeking apes. Where no pattern exists, we try to invent one.

3. There have been a number of appeals to science by historians over the past two centuries. Comte had a positivist model; the French and German academics who established the modern practice of history in the 19th century saw themselves as "scientists;" Ranke's (completely debunked) historiographical theories; Bloch; the Annales school of historiography; more recently, human cycles theory and cliodynamics have attempted to model "big picture" historical processes. None of this makes history science, either; but it starts to move things in the right direction. Most importantly, it describes the recurring human desire to construe history in objective, measurable terms which can then be used predictively. Humans are also nothing if not tenacious apes.

4. Verifying historical data is inherently problematic. Because we cannot observe history directly, all historical pronouncements are probabilistic inferences. Bayes' theorem – and probably others, of which I have no understanding – offer ways to frame these pronouncements (see the toy sketch after this list).

5. Further caveat: Bayes’ can also be used to justify all kinds of whacko pseudoscience. Garbage in, garbage out, and all that.

6. We need big computers to crunch lots of data.

7. Prediction: History-as-science – if achievable – will concern itself (initially, at least) with large, long-term processes. It will involve predicting the interactions of fields with varying degrees of uncertainty. Perhaps it will resemble quantum theory more than history-as-we-understand-it-today.
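To illustrate point 4 above, here's a toy Bayesian update, just to show the "probabilistic inference" framing; the hypothesis and every number are invented for illustration:

[CODE]
# Toy Bayesian update for a historical claim (numbers invented):
# H = "document X is authentic"; E = "ink matches 9th-century samples".
prior = 0.30                # P(H) before the ink test
p_e_given_h = 0.90          # P(E | authentic)
p_e_given_not_h = 0.20      # P(E | forgery)

p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
posterior = prior * p_e_given_h / p_e
print(f"P(authentic | ink match) = {posterior:.2f}")  # ~0.66
[/CODE]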
 
