So if I burnt toast in your house, that would be a problem? I'm using heat (which is all fire is) and creating smoke. But burning toast is also very different from setting your house on fire.
So, which is the real issue? Damage to your likely expensive home (which could come from fire, water, or wind), or fire and the smell of smoke? And hence my point. You find the lack of realism unappealing. But it isn't just the lack of realism, because if it were, you wouldn't be fine with other things that lack realism. This particular lack of realism, though, is unappealing.
My issue is fire damage. I've already told you that. My daughter burns popcorn or overcooks things in the oven more often than I'd like, but that's only a minor irritation. I don't ban her from popcorn or cooking.
If you reward the things the ranger is good at, sure. I've got no problems with the Ranger in general; I just notice that Natural Explorer, Favored Enemy, and Primeval Awareness are horrifically designed messes. That's an issue the wizard doesn't have. There is no feature the wizard is stuck with that is as actively detrimental to them as the PHB Ranger's Primeval Awareness.
And yes, 5e is quite forgiving, but that doesn't change the point that we shouldn't be okay with a class printed with multiple nearly useless abilities and poor design. And if you roll and get less than a 16, well, you chose to roll. That's the risk that comes with potentially starting with a 20. But the baseline average the game is looking for is a 16. That is the midpoint.
I'm not understanding your issue with Primeval Awareness. What's detrimental about knowing what the dangerous stuff within 1 or 6 miles of you is? It's a nice 3rd level ability for the cost of one 1st level spell.
Ranger: "Be alert. There are 2 celestials, 1 dragon and 6 fey within 6 miles of us. The fey may only be a nuisance, if they bother us at all, and the celestials will likely be friendly, but we want to avoid the dragon if we can."
If you have enough people agreeing on the same math, presenting their math, and supporting their math... then yeah, that math is likely a pretty solid foundation to build on. It might not be perfect, but it is very, very solid.
It's completely arbitrary, though. You said they picked 65% as the baseline number. They could have picked 60% or even 55%, but they liked 65% better, so they decided to assume that 65% is the baseline.
And look at literally the next sentence, where I explain why the designers altered the array so it doesn't exactly match, and where I showed the math they used. I mean, it almost looks like the designers took this average, then took away the 16 and made the lowest number an 8. Wonder why they would have taken the average roll nearly identically and then made it the standard, static array. A mystery for the ages; after all, these numbers were just conjured out of thin air and reference nothing. Certainly not the average roll.
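For anyone who wants to check this rather than take my word for it, here's a quick Monte Carlo sketch in Python (my own illustration; the exact decimals will wobble a little from run to run). It averages each slot of the sorted array over 100,000 rolled characters and puts it next to the standard array:

```python
import random

def roll_4d6_drop_lowest():
    """Roll 4d6, drop the lowest die, sum the remaining three."""
    dice = sorted(random.randint(1, 6) for _ in range(4))
    return sum(dice[1:])

TRIALS = 100_000
totals = [0] * 6
for _ in range(TRIALS):
    # One character's array: six scores, sorted high to low.
    array = sorted((roll_4d6_drop_lowest() for _ in range(6)), reverse=True)
    for slot, score in enumerate(array):
        totals[slot] += score

averages = [round(t / TRIALS, 2) for t in totals]
print("average rolled array:", averages)   # roughly [15.66, 14.17, 12.96, 11.76, 10.41, 8.5]
print("standard array:      ", [15, 14, 13, 12, 10, 8])
```

Round those averages and you get something like 16, 14, 13, 12, 10, 8 or 9; shave the top score to 15 and you have the printed array.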
And, again, matching identically is not the point.
When you say that they are equal, then identical IS the point. Now you are saying that they are not equal, and with that I agree.
Um... yes? It would be expected that if you rolled 10 more times you could get wildly different results. For example, in your 10 rolls, which I'm assuming were either 3d6 or 4d6d1, you only had one array under the average. That is unusual. You'd expect to see something closer to a third well over, a third well under, and a third around the midpoint. Well, you would for 3d6 and its bell curve; 4d6d1 skews the distribution toward higher numbers, so you would likely see more above-average arrays.
But, again, this is how statistics and probability work. This is why sample size matters. If you rolled 10,000 times, the results would converge on the average. Roll 10 times, and you can claim that there is an 80% failure rate in achieving the average, which is silly and just demonstrates how unreliable a small sample size is.
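To see the sample-size effect concretely, here's a small simulation sketch in Python (my own illustration of the point, assuming 4d6-drop-lowest rolls). It compares the average score across groups of 10 characters against groups of 10,000:

```python
import random
import statistics

def roll_4d6_drop_lowest():
    # 4d6, drop the lowest die.
    dice = sorted(random.randint(1, 6) for _ in range(4))
    return sum(dice[1:])

def mean_score(n_characters):
    """Average ability score over n characters (6 rolls each)."""
    rolls = [roll_4d6_drop_lowest() for _ in range(6 * n_characters)]
    return round(statistics.mean(rolls), 2)

print("five groups of 10:    ", [mean_score(10) for _ in range(5)])
print("five groups of 10,000:", [mean_score(10_000) for _ in range(5)])
# Typical output: the 10-character means wander (e.g. somewhere around
# 11.6 to 13.1), while the 10,000-character means all sit at ~12.24,
# the true average of 4d6-drop-lowest.
```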
10,000 is probably the number of characters that you'd need to make in order to reach the average over all the rolls, but I'm going to be nice and cut it down by a whole lot. Let's say that it would only take 1,000 rolls to show the average. That's 1,000 characters needed.
My campaigns run for about a year. That's longer than most, though, so let's say the typical campaign ends in half that time: 6 months. Groups are typically 4-6 players, so we'll say 5. So we have 5 players making characters, campaigns lasting 6 months, and 1,000 characters to reach.
So 10 characters a year, carry the 1, add pi, and then multiply by the airspeed of a coconut-laden swallow: 100 years! It will take that group 100 years of constant play to hit 1,000 characters and see the average from rolling. Now, if it's only one person who doesn't play with a consistent group, it will take him 500 years.
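For anyone who wants the napkin math spelled out (these are just the numbers estimated above, nothing new), here it is as a trivial Python check:

```python
players = 5               # typical group, per the estimate above
campaigns_per_year = 2    # six-month campaigns
characters_needed = 1000  # the already-generous target

group_years = characters_needed / (players * campaigns_per_year)
solo_years = characters_needed / (1 * campaigns_per_year)
print(group_years)  # 100.0 years for the group
print(solo_years)   # 500.0 years for a lone player
```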
Look, man, I'm a teacher, but I'm not getting paid to teach you Stats 101. If you are really going to go forward claiming that a sample size of 10 is sufficient to disprove the mathematically proven and graphed average, computed by multiple websites with statistical-analysis software... I can't help you. I don't care that it is "two campaigns' worth" of characters. The point is that 10 arrays is nowhere near enough of a sample size to prove anything. You need hundreds or thousands of arrays to even try to prove the average wrong.
And since using hundreds of thousands of rolls is exactly how some of these computer programs have proven the math and arrived at the averages... I don't think you would actually disprove them.
That's a pretty hefty strawman of my position. I haven't been arguing about what the average for rolling is. In fact, your argument above actually makes my case for me. MY point is that gaming groups are nothing BUT insufficiently small sample sizes. They will never see the average from rolling, other than an occasional roll here and there. They don't have sufficient time to become a large enough sample for the average to matter.
On the other hand, just how often does the average happen with an array? 100% of the time. 1 character, 10 characters, or 100 characters: they will all be average.