I am no expert in psychology or bias, so I will leave the current state of research to the experts, but I have listened to those TED talks, and this strikes me as an unusual application of what they were talking about, as well as an overstatement of their positions. They were talking about serious matters like the criminal justice system, where even a single instance of bias resulting in a wrongful conviction is a huge deal. We're talking about games, where the stakes are low.
The cases where the stakes are high naturally have more impact. But the mental models being formed don't apply based on impact - they apply to certain types of decision or judgement. And judgements about how "realistic" things in a fantasy world are sit smack in the centre of the area over which the research casts doubt.
With respect, this is a particularly... tortured application of the concept of cognitive bias. Cognitive biases are things that influence how we accept and process data. The concept is really useful in dealing with how we process information that conflicts with our preconceptions, how we make decisions, and how those decisions may not be as based on real data and reason as we may think.
That was the way they were initially slanted, yes, but I am thinking specifically of the way that what Daniel Kahneman has dubbed "System 1 thinking" works - which is arguably the root of many types of cognitive bias. It'll take more time than I really have, but I'll try to explain what I mean a bit, below.
But we shouldn't be invoking cognitive bias over how the GM answers the question, "Is there a florist's shop on the block?" This is not an issue of how the GM accepts, rejects or processes real data - it is ultimately a creative decision, not a failure to process real-world information rationally. It is a fictional world; there is no particular rational process that will tell us what "should" be there.
I should start by saying that it may be a creative decision, but it's not a completely arbitrary one. The choices are not generally expected to be completely uncorrelated or surreal - there is expected to be some sort of unifying pattern to the selections, because there are others involved. The players would have no game to play - no basis for decisions of their own - if the choices were purely arbitrary. Rather than "is there a haberdasher in town?" you might as well ask "is there a gun shop?" or "is there a vacc suit supplier?"
So the model underlying the choices has to be at least to an extent shared or understood by the players. And the general concept of a fantasy-adapted pseudo-medieval society frequently forms the basis of that shared base of assumptions. So, yes, it is a creative decision, but it is formed based on an assumption of some underlying model society, the outline of which is understood, we hope, by all involved.
Making decisions based on such a model requires judgement, or "instinct" - both common terms for the operation of "system 1 thinking". This system is exceptionally good at making snap judgements related to survival - threat assessments, fight-or-flight decisions, mating choices, body-movement decisions and a host of others. It is exceptionally poor at estimates of risk or probability, at complex assessments involving multiple factors, and at anything involving explicit maths (even simple multi-digit arithmetic).
As an exercise, here is an easy question:
If Barack Obama were as tall as he is intelligent, how tall would he be?
Now, I don't need to know your answer - answers will likely be very diverse, and their implications too political for this venue - but the point I am trying to make is not a political one at all. It is that, even as you finished reading the question, I would be amazed if you didn't already have an answer in mind, despite the fact that the question makes no objective sense whatsoever.
Assuming you had an answer, this is a nice example of system 1 thinking. It works by assigning intensities to things (like "intelligence" and "height") and it constantly monitors and assesses the world around us in these terms. The intensity scale it uses is the same for everything, and it translates those intensities (fluent language is another thing system 1 handles) into whatever model is appropriate for the topic at hand. Hence, if crimes are colours, homicide is a deeper red than theft, and failing to pay a parking ticket is rose pink, maybe. What is remarkable here is that you have some idea of what I am saying.
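If it helps to see the mechanism laid bare, here is a toy sketch of intensity matching - entirely my own illustration, not anything from Kahneman's text, and the 150-210 cm "adult height range" is a made-up number purely for the example. The idea is just a linear map from a rank on one scale onto the range of another:

```python
def match_intensity(percentile, target_lo, target_hi):
    """Translate a 0-1 rank on one scale onto another scale's range."""
    return target_lo + percentile * (target_hi - target_lo)

# If Obama "feels" 99th-percentile intelligent, System 1 reports the
# 99th-percentile height (assumed 150-210 cm adult range):
print(round(match_intensity(0.99, 150, 210), 1))  # 209.4
```

The point of the sketch is how little the two scales need to have in common: any two ranked quantities can be run through the same translation, which is exactly why the Obama question yields an instant answer despite being objectively meaningless.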
So, system 1 will use intensities to swiftly, effortlessly and often involuntarily come up with an answer to any question you pose it. Going back to "is there a haberdasher in town?", it will likely compare <intensity> size of town with <intensity> number of haberdashers in (pseudo-)medieval Europe - which is higher? Scale with respect to <intensity> number of cobblers in (pseudo-)medieval Europe. Maybe also set an intensity required for the omission of such an establishment from the town to qualify as a "mistake"...
Hmm - we begin to see a problem. The question is too complex for an easy, instant answer. How does system 1 cope with this? Actually, it has a very well tried and tested trick: it cheats. If it can't answer a complex, hard question, it finds a simple question that it can answer and that is superficially related to the complex one - this is called "substitution" and is implicated in many biases. It crops up all the time. "Which of those cars do you think is faster?" is a question that requires extensive technical knowledge and a consideration of the conditions of various potential tests to answer fully - but we don't have time for that! "Which car is more sporty-looking?" is much easier to answer and close enough to serve as a heuristic for the car's speed if we don't think too hard about it. Voilà! Instant answer.

Of course, system 2 - the rational, logical part of the brain - could veto this substitution. System 2 has universal veto power. But it's trying to prioritise and resolve a whole load of problems and questions (if you are a typical GM running a game, say) and, besides, it takes far more time and effort than system 1, and it's lazy, so it very often gives heuristics a pass without much consideration.
So, how does this relate to the haberdasher? Well, the question "what is the probability distribution of the number of haberdashers in a (pseudo-)medieval town of this size and how does that relate to my threshold level for reconsidering my (lack of) placing one here?" is way too hard a question for system 1. It will substitute another, heuristic, question. "How much do I like hats?", maybe. Or - using a very common substitution technique, especially where probabilities are concerned - "how many examples of haberdashers in fairy tales/fantasy novels/history books can I think of off the top of my head?"
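The contrast can be sketched in a few lines - again my own toy illustration, with a made-up shops-per-capita rate, a made-up Poisson model for System 2's deliberate estimate, and a made-up recall set of story titles standing in for the availability heuristic:

```python
import math

def system2_probability(population, shops_per_10k):
    """Deliberate (System 2) answer: chance of at least one shop,
    modelling the shop count as Poisson with an assumed per-capita rate."""
    expected = population * shops_per_10k / 10_000
    return 1 - math.exp(-expected)

def system1_substitute(recalled_stories, trade):
    """Substituted (System 1) answer: 'how many examples can I recall?' -
    any hit at all makes the shop feel plausible."""
    return any(trade in story for story in recalled_stories)

# Assumed recall set and rate, purely for illustration:
stories = ["the elves and the cobbler", "the brave little tailor"]
print(round(system2_probability(4_000, shops_per_10k=1.5), 2))  # 0.45
print(system1_substitute(stories, "cobbler"))       # True  - "feels" likely
print(system1_substitute(stories, "haberdasher"))   # False - "feels" unlikely
```

Note that the two functions are not answering the same question at all - one estimates a probability, the other reports whether an example springs to mind - yet the second answer gets delivered as if it were the first.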
Thinking about my own (involuntary) assessment of the question when I first read it, I think "I can think of fairy tales about cobblers, but none about haberdashers" figured in my instinctual, "gut feel" answer. But there is a problem with this type of heuristic specifically: it is susceptible to another source of bias, often called "anchoring" or "framing".
Consider for a moment the original question:
"Is there a haberdasher in the town?"
Now consider this one:
"The town's merchants and aldermen must get their fancy, draped hats from somewhere - is there a hatmaker or clothier who deals in hats in town?"
Objectively, these are the same question. But my guess is that the second will garner many more positive responses than the first, because it guides the GM's system 1 to a specific availability heuristic - "how many fancy hats with cloth hanging down beside the face can you think of in (pseudo-)medieval stories, texts and (especially) picture books?" - that is likely to return a lot more hits than tales about haberdashers.
This is how the two key features of system 1 thinking - intensity mapping and substitution - lead to decisions about what is "likely" or "correct" in a fantasy world that are both non-objective and manipulable. But, of course, if the GM sticks rigidly to "how much do I like hats?" as a heuristic, the would-be manipulator is probably still out of luck.
I suspect, between such extremes, D&D runs best as a middle ground. Sometimes it's about objective simulation, sometimes it's about story-telling.
Good response, in general - but I am far from sure that "objective simulation" is even possible, for the reasons just outlined...