And I think there are times, thanks to the exploding dice, where it's actually statistically better to be rolling a d4 than a d6. Not sure of that--I'm not a math guy--but it's an impression I've gotten.
That's not true if you don't know the target number beforehand, but it could be if you do.
Obviously a d1 in an exploding system wins everything (it just explodes forever), but after that, using a larger die is always at least as good on average as using a smaller one (assuming you don't know the result you need). And the equality holds only when comparing a d2 and a d3, so if the smallest die in the game is a d3, using a larger one is always strictly better on average.
The calculation is this, where d is the size of the die:
The probability of getting exactly i exploding rolls followed by exactly one non-exploding roll is p=(1/d)^i * (1-1/d). Every single possible roll can be described this way.
The average value of a roll that explodes i times and then doesn't explode is v=i*d + d/2. (In other words, if I roll a d4 that explodes once, I certainly have 1*4 from the first roll, and the second roll, which didn't explode and thus must be a 1, 2, or 3, averages 2.)
My expected roll is the sum of the contribution from each possible roll times its probability of occurring, in other words, the sum of all the possible p*v terms. This is a sum from i=0 to infinity which, happily, can be calculated exactly: d/2 + 1/(d-1) + 1.
Code:
 d   expected value (exact)   expected value (approx.)
 2           3                       3
 3           3                       3
 4          10/3                     3.33
 5          15/4                     3.75
 6          21/5                     4.2
 7          14/3                     4.67
 8          36/7                     5.14
 9          45/8                     5.63
10          55/9                     6.11
11          33/5                     6.6
12          78/11                    7.09
13          91/12                    7.58
14         105/13                    8.08
15          60/7                     8.57
16         136/15                    9.07
17         153/16                    9.56
18         171/17                   10.06
19          95/9                    10.56
20         210/19                   11.05
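If anyone wants to check or extend that table, here's a minimal Python sketch (purely illustrative, my own throwaway code) that partial-sums the series for each d and compares it to the closed form:
Code:
from fractions import Fraction

def expected_series(d, terms=60):
    # Partial sum of p*v: p = (1/d)^i * (1 - 1/d), v = i*d + d/2.
    # The terms shrink geometrically, so 60 of them is plenty.
    return sum((1 / d) ** i * (1 - 1 / d) * (i * d + d / 2) for i in range(terms))

def expected_exact(d):
    # The closed form: d/2 + 1/(d-1) + 1.
    return Fraction(d, 2) + Fraction(1, d - 1) + 1

for d in range(2, 21):
    exact = expected_exact(d)
    print(d, exact, round(float(exact), 2), round(expected_series(d), 2))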
Obviously that isn't the whole story, as the distribution around the expected value matters as well. I didn't calculate anything, but I did simulate it briefly (also to check the numbers above, which they confirm). For d>=3, every increase in die size leads to the usual modest increase in the standard deviation. Going from d=2 to d=3 actually decreases the standard deviation slightly, which is kinda interesting: the d2 is "swingier" than a d3, even though they have the same average result. Also, even though going from a d3 to a d4 increases the standard deviation, the d4's is still less than the d2's.
Of course, since actually using "dice" smaller than a d4 in play is pretty rare, those little anomalies will pretty much never show up at the table.
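The simulation was nothing fancy; a quick Monte Carlo sketch along these lines (Python, die sizes and sample count picked arbitrarily, not the exact script) shows the same picture for the means and standard deviations:
Code:
import random
import statistics

def roll_exploding(d):
    # Roll a d-sided die; on a maximum result, roll again and keep adding.
    total = 0
    while True:
        r = random.randint(1, d)
        total += r
        if r < d:
            return total

for d in (2, 3, 4, 6, 8, 10, 12, 20):
    rolls = [roll_exploding(d) for _ in range(200_000)]
    print(f"d{d}: mean ~ {statistics.mean(rolls):.2f}, stdev ~ {statistics.stdev(rolls):.2f}")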
Back to the matter at hand: if you need a 6 and you know it, then you have a 1/6 (.1667) chance of getting at least that with a d6, but a 3/16 (.1875) chance with a d4 (the d4 has to explode, a 1/4 chance, and then the second roll has to come up 2 or better, a 3/4 chance). Similarly for other target numbers exactly equal to die sizes.
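If you'd rather not grind out those target-number probabilities by hand, a little recursion gets them exactly; here's a sketch (again Python, names are just mine):
Code:
from fractions import Fraction

def p_at_least(d, target):
    # Probability that an exploding d-sided die totals at least `target`.
    if target <= 1:
        return Fraction(1)  # the minimum possible total is 1
    # Faces 1..d-1 end the roll; face d explodes, banking d and rolling again.
    stop_hits = sum(1 for face in range(1, d) if face >= target)
    return Fraction(stop_hits, d) + Fraction(1, d) * p_at_least(d, target - d)

for d in (4, 6):
    p = p_at_least(d, 6)
    print(f"d{d} needing a 6: {p} = {float(p):.4f}")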
Edit: Now that I think about it, interpreting this in terms of one's a priori knowledge of the needed result, instead of just acknowledging the raw probabilities, tacitly assumes you're trying to optimize the roll and are free to roll a smaller die if you want. I've never played Savage Worlds, but I'm guessing you can't do that, in which case the math just has some wonky points, end of story. Which is just what thatdarnedbob said in a considerably more concise fashion.
