A discussion of metagame concepts in game design

Lanefan

Victoria Rules
The study of cloud formation and other measurable things? Sure.
Also the record-keeping and analysis of the data thus generated.

Problem is, for weather at least, reliable and consistent record-keeping hasn't been around that long (in the grand scheme of things) and in many cases still isn't reliable and consistent - even something as simple as moving a weather station from one part of town to another throws consistency out the window.

The actual predictions? Not so much. They use science to take a stab at telling us what the weather will be like, and are very often wrong.
A trivial-scale comparison would be trying to predict the outcome of a chess game - not just who will win but what the final position will be and how many moves it'll take.

Given enough data from previous games played by the specific players involved, and a bunch of computing power, one could arrive at a tolerably-accurate guess...but that's all it would ever be.

For weather, take all that and dial it up off the scale. Short-term - within 24 hours, say - is pretty easy to get right in most cases; and medium-term (2-4 days) is rapidly getting better. Get out much beyond that and the inaccuracies really start to rear their heads; anything beyond a week is more or less an estimate, anything beyond a month is really just a guess. It would take a galloping leap in available computing power to get it any better.

Illnesses have specific symptoms.
Most of the time...but not all.

Lanefan
 


Emerikol

Adventurer
I think what he's getting at is that the math has always been there, even from a time before we knew about math at all. Calculus wasn't finding new math, but rather finding math that was new to us, but which already existed. So it was a discovery from following math where it led, rather than making up a new math.

This is exactly my point. All abstract mathematical concepts exist in all possible universes in all possible times. They are eternal truths. Some would add "in the mind of God" and I'd agree but that isn't necessary for this discussion.
 

Emerikol

Adventurer
Preaching to the choir.
You're dismissing, not discussing. When someone like Nate Silver can statistically model political preferences well enough to accurately predict the results of all fifty states in a US presidential election, that sure looks like science to me.

I think Nate does some good work but he can't predict perfectly. He can present probabilities. That fits very well with the original point. Psychology has some successes so it can't be dismissed but there is no unified theory. Whereas in physics, things can often be predicted with 100% reliability. It is why physics is considered a "hard" science and psychology a "soft" science. One day that may not be true. We are just further along in some areas than others.
 

For weather, take all that and dial it up off the scale. Short-term - within 24 hours, say - is pretty easy to get right in most cases; and medium-term (2-4 days) is rapidly getting better. Get out much beyond that and the inaccuracies really start to rear their heads; anything beyond a week is more or less an estimate, anything beyond a month is really just a guess. It would take a galloping leap in available computing power to get it any better.
Lanefan
But oddly, when we look at the really big picture, we can make some very reliable predictions again, no computer necessary. We can be quite certain that it will get colder in the winter. And we can be quite certain that Magnus Carlsen will wipe the floor with me.
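To put a number on that last claim: here's a back-of-the-envelope sketch, assuming the standard Elo expected-score formula and purely illustrative ratings (roughly 2850 for Carlsen, 1500 for a decent club player).

[code]
# Standard Elo expected score: the share of points player A
# is expected to take from player B over many games.
def elo_expected(r_a, r_b):
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# Illustrative ratings only: ~2850 for Carlsen, 1500 for an amateur.
print(elo_expected(1500, 2850))  # ~0.0004, i.e. the amateur scores ~0.04%
[/code]

The "big picture" prediction is about as safe as predictions get, even though nobody can say what any individual game's final position will be.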
 

Ovinomancer

No flips for you!
Did I say they did?
Reading your post again, I still come away with you saying a wee p is validation of the model as correct. Did you intend to convey a different meaning? In the context in which you responded, anything else would have been very unclearly stated.

You keep using that word.
With intent. Statistics usually invokes reification in its users, to their error.
Again: dismissive. You're using terms of abuse and avoiding addressing the key question: does it work? And perhaps some corollary questions, like: if it doesn't work, how should we approach research on massive and/or chaotic systems like human health, the weather, and politics? Do we just throw up our hands and say, "Not science, we can't learn anything about this"?
Well, yes, it's dismissive, as I'm saying you're wrong. I provided reasons for this in the same post -- p-values are not measures of reality, but of model parameters, and reification of model parameters into real things is rampant. I tried to choose a humorous way to put that.

If your model is a statistical one, all you can possibly show is correlation. Causation is outside the realm of statistics. Saying that Silver is doing science when he builds a statistical model that uses heavily weighted and adjusted poll results (themselves very imperfect data sets) is ludicrous, regardless of his success rate. Nate Silver is, foremost, an astute political observer. He has a knack for putting his predictions in math-y format. Absent his keen observations, which lead to how he weights his data inputs, his models wouldn't have much skill. The success of Nate Silver is not due to his statistical methods, but to his interpretation and massaging of the data inputs into his statistical methods. Plenty of other keen political observers had similar predictions to Silver's without the stats. Silver is engaged in political prognostication wrapped in stats. This does not make what he does science (use of stats does not science make anywhere) -- it's still just politics watching.
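(To make "weighted and adjusted" concrete, here's a toy sketch of that kind of poll aggregation. Every number and pollster lean here is hypothetical, and real models like Silver's are vastly more elaborate; the point is only that the analyst's judgment calls -- the weights and house-effect adjustments -- drive the answer.)

[code]
# Hypothetical polls: (candidate's share, sample size, days old, house effect)
polls = [
    (0.52, 1200, 2, +0.02),  # pollster assumed to lean +2 points
    (0.48,  800, 5, -0.01),  # pollster assumed to lean -1 point
    (0.51, 1500, 1,  0.00),
]

num, den = 0.0, 0.0
for share, n, age, house in polls:
    adjusted = share - house      # strip out the pollster's assumed lean
    weight = n * 0.9 ** age       # bigger, fresher polls count for more
    num += weight * adjusted
    den += weight

print(num / den)  # ~0.503 -- entirely a product of the chosen weights
[/code]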
 

Ovinomancer

No flips for you!
But with all I've said previously, this is where I think perhaps you've gone a bit far afield from the middle.

There are two things you can do with statistics.
- Use for discovery
- Use for support

The problem with using statistics for support is bias. You can make numbers look like they mean anything if you try hard enough. What population did you use? What do your error bars look like? How did your power analysis work out with such a small population? Oh, you didn't think about that? Wow.

Nonetheless, statistical modeling gives us the ability to optimize workflow, aim deployments, and generally predict high-likelihood outcomes. The value you get out of the numbers has a lot to do with the person doing the work and the effort put into getting a clean data set. It could be argued that the reason tenure exists is to allow folks the structured time to get clean data and look at outcomes with little bias.

So horoscopes with math are possible. So is enablement by math.

Be well
KB

I use stats in my day job for science reasons. I do it because it creates a predictive model with great skill for radio frequency work. However, the stats I use aren't reality; they're just a good tool, one that has shown repeated reliability for decades in this kind of work. I don't mistake statistics for the useful part of this tool -- the tool stands on its own merits, and statistics doesn't inherit any good faith from me from this one tool (actually a family of tools).

Stats builds a model. The old saw, "All models are wrong; some are useful," is true. There are many useful tools in stats, but they require the user to be aware of their limitations, use clean data, and understand what the model actually says. Whenever someone says "statistics prove it" I cringe. That is a fundamentally incorrect statement. Stats aren't run on the data; they're run on parameters of the data, and those parameters are all assumptions the user is making. The math then works, spits out an answer, and the user is left in the easy position of thinking that answer is based on the data rather than on the parameterization. The answer is also very often precise -- perhaps a wee p-value is obtained. This leads to overconfidence in the result. In short, it's very easy to lie to yourself (and others) with stats, and also to be overconfident in your results.
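Here's a quick sketch of how easy that self-deception is, using made-up data. Pearson's p-value is computed under the assumption of independent draws; two independent random walks violate that assumption, so on most runs you get a sizeable correlation and a wee p between series that have nothing to do with each other.

[code]
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng()
x = np.cumsum(rng.standard_normal(500))  # random walk 1
y = np.cumsum(rng.standard_normal(500))  # random walk 2, independent of 1

r, p = pearsonr(x, y)
print(r, p)  # typically a large |r| and a tiny p-value
# The math is honest; the parameterization is not. The p-value assumes
# independent observations, and random-walk data are anything but.
[/code]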

This doesn't mean statistics isn't useful -- I make a comfortable living using statistical approaches in my job. But, most often, stats are just guesses cloaked in the justifying garb of numbers. I have a generally negative view of statistics due to how often it's misused. I can be very favorable toward specific statistical models, provided they have been shown to have good skill and aren't mistaken for truth.

So, yes, under that understanding I know that statistics can be useful for discovery (correlations only) and can provide some support to a theory. They cannot ever prove a theory, though.
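To put a number on KB's power-analysis jab: a hedged sketch using statsmodels, assuming a "medium" standardized effect size of 0.5 and the conventional 5% alpha and 80% power.

[code]
# How many subjects per group does a two-sample t-test need in order
# to have an 80% chance of detecting a medium effect at alpha = 0.05?
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(n)  # ~64 per group; much smaller studies are underpowered
[/code]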
 


pemerton

Legend
Improving a language would require the language to use fewer words to accomplish higher levels of understanding.

<snip>

Adding words and phrases that can be misunderstood without appropriate context is not improvement.
On this I tend to go with Orwell in his essay on Newspeak.

More words allow nuance, rhythm, assonance, alliteration, etc. They increase the expressive power of the language.

Reading on, I see that [MENTION=4303]Sepulchrave II[/MENTION] has said the same in reply.

Also, on statistics and causation: scientific knowledge isn't limited to knowledge of causal processes. Statistically confirmed correlations may enable predictions to be made, even though the causal process that generates the correlation is not known. This is starting to push my knowledge of the history of science, but I would have thought that Mendelian genetics and 19th century statistical mechanics of gases would be examples of scientific knowledge of correlations in ignorance of the actual causal process.
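Mendel's case can even be dressed up in modern statistics. A quick sketch, using his published counts of 5474 round vs 1850 wrinkled seeds: a goodness-of-fit test confirms the 3:1 correlation without saying a word about the causal mechanism, which wasn't identified until long afterwards.

[code]
from scipy.stats import chisquare

observed = [5474, 1850]                  # Mendel's round vs wrinkled seeds
total = sum(observed)
expected = [total * 0.75, total * 0.25]  # the predicted 3:1 ratio

stat, p = chisquare(observed, f_exp=expected)
print(stat, p)  # chi2 ~ 0.26, p ~ 0.61: the counts fit 3:1 comfortably
[/code]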
 

Reading your post again, I still come away with you saying a wee p is validation of the model as correct. Did you intend to convey a different meaning? In the context in which you responded, anything else would have been very unclearly stated.
The claim "They just got lucky" is not the end of the discussion, but the beginning, and in asserting it [MENTION=23751]Maxperson[/MENTION] was grasping towards the concept of statistical significance.

Don't get me wrong: I'm not a scientist, it's been a long time since I've used this math in any serious way, and it's entirely probable that I will make a mistake along the line here. But I do know the difference between statistical significance and proof.
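That concept is easy to make concrete. Say a pundit (or an octopus) calls eight 50/50 matches and gets all eight right; the back-of-the-envelope check on "they just got lucky" is a one-liner.

[code]
# Probability of going 8-for-8 on pure coin-flip guesses.
p_luck = 0.5 ** 8
print(p_luck)  # 0.00390625, about 0.4%
# Small enough to suspect more than luck, but it's a statement about
# chance under a null model -- significance, not proof.
[/code]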

Well, yes, it's dismissive, as I'm saying you're wrong. I provided reasons for this in the same post -- p-values are not measures of reality, but of model parameters, and reification of model parameters into real things is rampant. I tried to choose a humorous way to put that.
It seems like you're throwing out the baby with the bathwater here: using a mistake that people make as reason to discard the whole pursuit rather than to say, "oh, we should be careful not to make that mistake".

If your model is a statistical one, all you can possibly show is correlation. Causation is outside the realm of statistics. Saying that Silver is doing science when he builds a statistical model that uses heavily weighted and adjusted poll results (themselves very imperfect data sets) is ludicrous, regardless of his success rate. Nate Silver is, foremost, an astute political observer. He has a knack for putting his predictions in math-y format. Absent his keen observations, which lead to how he weights his data inputs, his models wouldn't have much skill. The success of Nate Silver is not due to his statistical methods, but to his interpretation and massaging of the data inputs into his statistical methods. Plenty of other keen political observers had similar predictions to Silver's without the stats. Silver is engaged in political prognostication wrapped in stats. This does not make what he does science (use of stats does not science make anywhere) -- it's still just politics watching.
I think the fundamental disagreement here is just that you have a narrower definition of "science" than I would consider conventional. I'm with [MENTION=42582]pemerton[/MENTION]: if a statistical model built through observation and experimentation allows us to make predictions better than otherwise, then we have learned something in a way I would call "science", even if we don't understand the causation we're capturing in the model yet.
 

Maxperson

Morkus from Orkus
But oddly, when we look at the really big picture, we can make some very reliable predictions again, no computer necessary. We can be quite certain that it will get colder in the winter. And we can be quite certain that Magnus Carlsen will wipe the floor with me.

Yep, but as Paul the octopus shows, prediction isn't science. It's prediction. We can predict that an object I drop tomorrow will fall to the ground. That's not science. The science was all the observation and testing that went into allowing us to make that prediction. We can predict that it will be colder in the winter, but that is also not science. The science was measuring the temperature during all of the seasons, including winter, and recording that data. Reliable predictions are not science.
 
