I don't think we're quite on the same page, but will attempt to answer.
I'm struggling not to be insulting here, but I got you the first time. I'm not behind you in understanding probabilities. I took the college-level course and got an A, if I recall correctly. I've got 30 hours of college math. I've worked professionally in bioinformatics.
You are just repeating yourself and adding no new information. I understand what you believe. It's absolute and utter bollocks but you seem to find it charming, so go ahead.
The reason you roll 3d6 is to determine where you fall on a normal distribution curve. Half the population (in game) will have a score 9 - 12, so no bonus or penalty would apply, since that is "normal" and a bonus/penalty signifies "not normal".
Well, you might as well argue that grass is green because doghouses aren't made of pancakes. Both statements are true, but they don't have a logical connection. Let's say we had a "Speed" attribute and it modelled the 100 meter dash. If you measured 10,000 people's 100 meter dash times and then plotted them on a curve, grouping according to the same distribution as a 3d6 roll, you'd not find the "law" you think you are describing to hold true. The gap between the 12 and 13 groups would be bigger than the gap between the 17 and 18 groups. The bonuses, translated into how much better you were than the group next to you, would start out very large and gradually shrink over the course of the graph. The size of the groups would probably fit your normal distribution well (in how rare a particular time was), but the "bonuses" would not. The middle 68% would cover a huge range, say between 16 and 45 seconds, while the top 5% would narrow down to a difference of just a few seconds or even fractions of a second. (I'm just pulling numbers out of the air, but the general idea holds.) Thus the "bonuses" we'd need to describe this variation wouldn't follow your RPG-created biases.
You seem to forget that the numbers are intended to describe something.
As you roll higher or lower, the differences between numbers become more significant, i.e., 12 compared to 13 is not the same as 16 compared to 17 despite both sets having a difference of 1; therefore the bonuses should be more significant as well to reflect this.
No. That's illogical. It might be true of some things, but it's not a generally true observation. Just because something is a statistical outlier in terms of probability doesn't mean that the underlying variation is necessarily increasing. It's almost certainly true of something like height (just guessing), because height is the kind of measurement I'd imagine fits a Gaussian curve pretty well, but as the sprinting example showed, it's not generally true of everything we're trying to measure. If I were trying to be realistic, I'd have no idea, without study, what numbers I should use for a given normal distribution.
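To make the "it depends on what you're measuring" point concrete, here's a sketch comparing two invented trait distributions (the units and parameters are made up for illustration), slicing each by the exact 3d6 group boundaries. With a normally distributed trait the gap between adjacent groups does grow toward the tail; with a hard-capped (uniform) trait the middle gaps dwarf the tail gaps:

```python
from statistics import NormalDist

# Number of ways to roll each 3d6 total from 3 to 18 (216 combos).
WAYS = [1, 3, 6, 10, 15, 21, 25, 27, 27, 25, 21, 15, 10, 6, 3, 1]

def mid_quantile(score):
    """Midpoint of the population slice that would roll this score."""
    below = sum(WAYS[: score - 3]) / 216
    return below + WAYS[score - 3] / 432

# Two hypothetical traits: one Gaussian, one with a hard floor and ceiling.
normal = NormalDist(mu=100, sigma=15).inv_cdf      # bell-shaped trait
uniform = lambda p: 50 + 100 * p                   # flat, bounded trait

gaps = {}
for name, quantile in (("normal", normal), ("uniform", uniform)):
    g_mid = quantile(mid_quantile(13)) - quantile(mid_quantile(12))
    g_tail = quantile(mid_quantile(18)) - quantile(mid_quantile(17))
    gaps[name] = (g_mid, g_tail)
    print(f"{name:7s} gap 12->13: {g_mid:6.2f}   gap 17->18: {g_tail:6.2f}")
```

Same group rarities in both cases, opposite conclusions about the gaps, which is exactly why rarity alone can't dictate the bonus progression.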
I think there might be some confusion here. Let's take a grading distribution, for example:
I assure you, I'm not confused.
[Attachment: grading distribution chart]
We're talking about the difference between a C-student and a C+ student essentially. I think it's fair to say they're both about "average" in their subject.
But it's not fair to say that the gap in ability between the upper and lower end of that range is smaller than the gap in ability between students at the high end of the range. For math it might be, but mostly because humans are terrible at math and actual mathematical ability is practically a mutant superpower in humans. Depending on the task involved, the gap in actual performance between the average person and a person at the 75th percentile might be bigger than the gap between the 75th and the 100th.
Gygax's numbers don't come about by thoughtful understanding and measurement of the thing being modeled. They come from the coarse granularity available to him using small integers. They have to be small relative to a d20. They have to be meaningfully different from each other. But they don't actually represent anything.
UPDATE: And there is an actual RPGism that doesn't follow this "law" you think you've discovered by overgeneralizing from how it might work for "height": the roll-under ability score check. In an ability score check, the benefit of each point is linear even though the grouping you are in follows the Gaussian curve. 3s are a lot rarer than 4s, but the delta is the same there as everywhere else. On a d20 roll-under check, threes complete the task 15% of the time and fours complete it 20% of the time, because a 4 succeeds on one more face of the die than a 3. In fact the relative gap here, if you want to think of it that way, is decreasing as we go from left to right on the chart: fours are a third better at the task than threes, but 18s are only about 6% better at the task than 17s. And for a lot of things this is a perfectly cromulent rough model of how it really works. For some things, though, the difference in success rate might be shrinking even faster than that: 18s succeed only 1% more often than 17s, who succeed only 2% more often than 16s, and so forth. There is no law of task resolution that says the more of a statistical outlier you are, the bigger the gap in success between you and the nearest statistical group.
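Assuming the classic roll-under convention (succeed on d20 ≤ score), the shrinking relative gap is easy to tabulate; no imports needed:

```python
# Roll-under ability check: succeed if d20 <= score, so p = score/20.
# The absolute improvement per point is a flat 5 percentage points,
# but the relative improvement over the next group down keeps shrinking.
for score in range(4, 19):
    p_now = score / 20
    p_prev = (score - 1) / 20
    rel = 100 * (p_now / p_prev - 1)
    print(f"{score:2d}: succeeds {p_now:4.0%}, {rel:5.1f}% more often than a {score - 1}")
```

A 4 succeeds about 33% more often than a 3, while an 18 succeeds only about 5.9% more often than a 17, even though 18s are far bigger statistical outliers.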