I'm not a math person. Can you quantify how drastic the difference is for a DC10 (or DC11) and a DC15 or DC20 under a bell curve like this?
For a uniform distribution like a d20, +1 is always worth 1/N, where N is the number of sides on the die. So long as failure remains an option, +1 always adds 0.05 (5 percentage points) to the probability of success.
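If you want to verify that yourself, here's a minimal Python sketch (the function name and the example DC are just illustrations, not anything from your system):

```python
# "Meet or beat" on a flat d20: you succeed on faces (dc - mod) through 20.
def d20_success(dc, mod=0):
    needed = dc - mod                 # lowest face that still succeeds
    needed = max(1, min(needed, 21))  # clamp for auto-success / auto-failure
    return (21 - needed) / 20

print(d20_success(15))     # 0.30
print(d20_success(15, 1))  # 0.35 (each +1 adds exactly 0.05)
```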
For the distribution you're discussing, the numbers would be as follows, assuming that you are using "meet or beat" rules (a roll that equals or exceeds the DC succeeds).
DC 9: 68.38%
DC 10: 59.38%
DC 11: 50% (exactly)
DC 12: 40.63%
DC 13: 31.54%
DC 14: 23.44%
DC 15: 16.41%
....
DC 19: 1.95%
DC 20: 0.78%
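Since I haven't restated your exact dice pool here, here's a generic sketch that enumerates any pool and computes meet-or-beat chances. The 3d6 in the example is only a stand-in, so its output will not match the table above; substitute your own pool to reproduce it.

```python
from collections import Counter
from itertools import product

def success_table(dice, dcs):
    """Meet-or-beat chance for each DC, for a pool given as a list of
    side counts (e.g. [6, 6, 6] for 3d6). Pure enumeration, no approximation."""
    totals = Counter(sum(roll) for roll in product(*(range(1, s + 1) for s in dice)))
    n = sum(totals.values())
    return {dc: sum(c for t, c in totals.items() if t >= dc) / n for dc in dcs}

# 3d6 as a stand-in bell curve:
for dc, p in success_table([6, 6, 6], range(9, 16)).items():
    print(f"DC {dc}: {p:.2%}")
```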
So, a +1 is the same as reducing the DC by 1. That means, at the center of the bell curve, a +1 is worth about 9.4 percentage points (59.38% at DC 10 versus 50% at DC 11), roughly double the 5 points a +1 is worth on a uniform die. At DC 15, a +1 is worth almost exactly 7 points (23.44% versus 16.41%), and at DC 20, just over 1 point (1.95% versus 0.78%).
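Continuing the sketch above, the value of a +1 at each DC is just the gap between adjacent rows of that table:

```python
# +1 against DC d is the same as rolling against DC d - 1,
# so its value is P(total >= d - 1) - P(total >= d).
probs = success_table([6, 6, 6], range(9, 19))
for dc in range(10, 19):
    print(f"+1 against DC {dc} is worth {probs[dc - 1] - probs[dc]:.2%}")
```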
At the edges of the bell curve, modifiers become nearly worthless unless they're huge. At the center of the bell curve, modifiers have a massive swing: the difference between DC 9 and DC 13 is enormous. It takes you from "you succeed about twice as often as you fail" (68.38%), which feels like "normal" difficulty to most players, to "you fail about twice as often as you succeed" (31.54%), which is going to feel like absolute garbage for most players. A -2 or -3 penalty near the center of the curve is terribly punishing; a +2 or +3 bonus near the center takes you from a coin flip to succeeding roughly three times out of four.
Perhaps these are desirable effects for you, but you may wish to consider the impact on player psychology, as noted.