Yup. In fact, the closest a +1 ever comes to being "just a 5% increase" is when you need to roll a 2 (e.g., a +8 bonus vs. a DC of 10) and the +1 turns that into an automatic success. That takes the success rate from 95% to 100%, a relative improvement of a bit over 5%; at every other attainable target number the relative improvement is larger.
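To make the arithmetic concrete, here's a quick sketch (plain Python; it assumes success on target-or-better and ignores any natural-1 auto-miss) that tabulates the absolute and relative bump a +1 gives at each required die roll:

```python
# Quick sketch: how much does a +1 help at each required d20 roll?
# Assumes success on rolling the needed number or higher, no natural-1 auto-miss.

def p_success(needed_roll):
    """Probability of rolling `needed_roll` or higher on a d20 (clamped to [0, 1])."""
    return max(0.0, min(1.0, (21 - needed_roll) / 20))

print(f"{'need':>4} {'base':>6} {'with +1':>8} {'abs gain':>9} {'rel gain':>9}")
for needed in range(2, 21):
    base = p_success(needed)
    bonus = p_success(needed - 1)
    abs_gain = bonus - base
    rel_gain = abs_gain / base
    print(f"{needed:>4} {base:6.0%} {bonus:8.0%} {abs_gain:9.0%} {rel_gain:9.1%}")
```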
Two other points:
- This business about "nobody at the table is going to notice" is both true and not relevant. Humans are terrible at noticing statistical trends, even significant ones. Our brains seem to be wired to detect differences on logarithmic scales (which is why, for example, decibels are measured on a log scale). But that doesn't negate the fact that we know, intellectually, what the statistical impact is.
- But, really, that doesn't matter either. The fact remains that the +2 to the attribute is important to a lot of people on both sides of the debate (that is, either because it defines a racial archetype, or because they don't want to play a race/class combination without it), and trying to argue somebody out of their position...or, worse, trying to invalidate their position...because they "shouldn't care" is just asinine.
I am all on board with the fact that once people get an idea in their heads, they see it everywhere. There are some nice chapters on exactly that in statistical literacy books that are fun to go over. I can certainly imagine that if someone knows they are missing a +1, they'll start blaming many of their misses on it, even if, as someone else suggested, the DM honestly made the rolls hidden behind a screen. I bet some would even believe they were doing worse (unless they kept tally marks) when in fact they came out tied or ahead over those 100 rolls. So I have no argument against getting rid of the bonuses on those grounds - because of your point that it discourages people regardless of how big a mathematical impact it has. (I've conceded elsewhere that they're not worth the cost.)
My only arguments are with two statistical claims that keep coming up in these discussions, and with how firmly they are being asserted.
Warning: Repeats previous posts in part, but it's either that or do work. (Last time! Dishes and laundry are calling!)
1) As I noted previously, I don't think there is a good statistical argument that the +1 is particularly noticeable over 100 to-hit rolls (I'm taking that as an adventuring day) if the target number (without the plus) is between 3 and 18 (simulation code and results posted previously). If it's a few hundred rolls, then sure. (Or, say, as in another thread, you have a high elf in a tower watching a battle between a few thousand soldiers where one side is +1 better...) If someone is facing a bunch of target-19 or -20 (or 21) rolls, I would expect it would certainly start to be noticeable too; I didn't run those settings (especially that 21).
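As a rough back-of-the-envelope version of that point (not the simulation I posted before, and the target numbers here are just illustrative), you can compare the size of the +1 bump to the sampling noise in a single 100-roll tally:

```python
# Rough sketch for point 1: can a player who tallies hits over a 100-roll day
# reliably tell a +1 apart from no +1?  Compare the size of the bump (0.05)
# with the sampling noise in a single day's hit rate.
from math import sqrt

N_ROLLS = 100
for needed in (5, 8, 11, 14, 17):                # die roll needed without the +1
    p_base = (21 - needed) / 20
    p_plus = (22 - needed) / 20
    se = sqrt(p_base * (1 - p_base) / N_ROLLS)   # std. error of the day's hit rate
    print(f"need {needed:>2}: bump {p_plus - p_base:.2f}, "
          f"noise in a 100-roll hit rate ~{se:.2f}")
```

When the day-to-day noise in the hit rate is about the same size as the bump itself, one day's worth of rolls just isn't enough to tell the two apart reliably.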
I'm curious how often high targets (players needing to roll a natural 19 or better) occur in different games. If they occur frequently, does that greatly change the care with which other enhancements like magic weapons are given out? Or only the balance with which they are distributed?
2) Simply using the ratio of the probabilities of success is problematic. The problem with using such ratios to describe the value of a treatment is a common topic in statistical literacy classes. Relative risk is a thing, but it doesn't feel like it relates in a nice way to how statistically significant a claim of disadvantage would be after 100 rolls (at least not as straightforwardly as the difference in probability and an endpoint effect seem to).
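To illustrate what I mean (the target numbers are made up, and this isn't anything I've posted before), here's a small sketch comparing the ratio of success probabilities at a few targets with the exact chance that a +1 character actually logs more hits than a +0 character over a 100-roll day:

```python
# Sketch: ratio of success probabilities vs. what you'd actually see over 100 rolls.
from math import comb

def binom_pmf(n, p):
    """Full pmf of Binomial(n, p) as a list indexed by number of successes."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def prob_more_hits(p_plus, p_base, n=100):
    """P(Binomial(n, p_plus) > Binomial(n, p_base)), rolls assumed independent."""
    pmf_plus = binom_pmf(n, p_plus)
    pmf_base = binom_pmf(n, p_base)
    cdf_below = [sum(pmf_base[:k]) for k in range(n + 1)]  # P(base character < k hits)
    return sum(pmf_plus[k] * cdf_below[k] for k in range(n + 1))

for needed in (5, 11, 16, 20):                    # die roll needed without the +1
    p_base = (21 - needed) / 20
    p_plus = (22 - needed) / 20
    print(f"need {needed:>2}: success ratio {p_plus / p_base:.2f}, "
          f"P(+1 character ahead after 100 rolls) {prob_more_hits(p_plus, p_base):.0%}")
```

The ratio varies by about a factor of two across these cases, while the chance of the +1 character actually coming out ahead over the 100 rolls moves much less, which is roughly what I mean about the ratio not lining up neatly with noticeability.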
If someone really wants to go more in depth, we can probably go full-out Bayesian and throw cost functions on things. We could have the player estimate their own probability of being without the +1 after 100 rolls (and see how accurate that assignment was), and we could compute the actual expected cost of a missing +1 given a distribution of target ACs and the cost/benefit of each hit and miss.
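The expected-cost half of that is easy enough to sketch. Everything below is made up for illustration - a hypothetical AC distribution for the day, a flat attack bonus, and a flat average damage per hit - but it shows what the calculation would look like:

```python
# Sketch of the "expected cost of a missing +1" calculation.  The AC
# distribution, attack bonus, and damage-per-hit numbers are all made up.
N_ATTACKS = 100          # attacks in the adventuring day
ATTACK_BONUS = 5         # to-hit bonus without the contested +1
DAMAGE_PER_HIT = 8.5     # average damage when an attack lands

# Hypothetical mix of enemy ACs faced over the day (weights sum to 1).
ac_distribution = {12: 0.2, 14: 0.4, 16: 0.3, 18: 0.1}

def hit_chance(ac, bonus):
    """Chance to hit: need (ac - bonus) or better on the d20, clamped to [0.05, 0.95]
    to mimic natural 1s and 20s."""
    needed = ac - bonus
    return min(0.95, max(0.05, (21 - needed) / 20))

def expected_damage(bonus):
    """Expected damage over the whole day, averaged over the AC distribution."""
    return sum(weight * hit_chance(ac, bonus) * DAMAGE_PER_HIT * N_ATTACKS
               for ac, weight in ac_distribution.items())

without_plus = expected_damage(ATTACK_BONUS)
with_plus = expected_damage(ATTACK_BONUS + 1)
print(f"Expected damage over the day without the +1: {without_plus:.0f}")
print(f"Expected damage over the day with the +1:    {with_plus:.0f}")
print(f"Expected cost of the missing +1:             {with_plus - without_plus:.0f}")
```

That only covers the cost side, of course; the more interesting (and harder) half is eliciting what the player believes is happening and comparing that to the math.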