Conaill said:
Keep in mind that Approval Voting encourages people to vote for multiple candidates, so the situation you just described will never happen. (The exact same situation can happen in IRV if everyone only ranks one choice per ballot - but that'll never happen in real life either.)
I gave the extreme example as an illustration of the point. The exact situation might not happen, but something similar might. Specifically, the system you describe can easily find a winner without consensus of the voting populace. And if you want consensus, your Approval Voting system has no way to find it without having people go back to the ballot box.
Some more endorsements of Approval Voting:
- It is used by the American Statistical Association, the Mathematical Association of America, the Institute of Electrical and Electronics Engineers (>300K members), the US National Academy of Sciences, the United Nations, as well as numerous smaller professional organisations and universities.
Yes, but that's meaningless unless we know what they're using it for. If the application isn't the same, the endorsement means little. The Hugo and Nebula awards use IRV, for purposes very similar to ours, and those communities are also filled with mathematicians and tech-heads.
I'm taking a couple of things out of order here, because it's a better way to get at the crux of the matter...
In statistics, that sort of estimated quality of an unknown entry is called a prior, and it's perfectly acceptable to have a prior estimate which is higher than one of your known datapoints.
It is statements like this that give truth to the old saw that "there's lies, damned lies, and statistics". In statistics, what is or is not perfectly acceptable actually depends upon the application. Why you want the numbers should influence what methods you use.
For a marketing firm polling to get the public's opinion before designing an ad campaign, the prior assessments are useful tools. However, I feel they are contrary to our goals with an awards program.
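To make the prior idea concrete, here's a minimal sketch (the numbers and the Beta prior are purely hypothetical, not anything either of us has proposed for the Ennies):

```python
# Hypothetical illustration: a statistical prior for an unseen nominee
# can legitimately sit above one of your known datapoints. Assume
# ratings on a 0-1 scale, and a Beta(8, 2) prior that reflects trust
# in the vetting process.
PRIOR_ALPHA, PRIOR_BETA = 8, 2
prior_mean = PRIOR_ALPHA / (PRIOR_ALPHA + PRIOR_BETA)  # 0.8 for an unknown

known_ratings = [0.9, 0.3]  # products you have actually used

# The prior estimate (0.8) sits above the known 0.3. Statistically
# that's fine; whether it's appropriate for an awards vote is the
# application question at issue here.
assumption_holds = prior_mean > min(known_ratings)
```

Nothing in the snippet settles whether that's fair for an award; it only shows what "a prior higher than a known datapoint" means mechanically.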
Seriously - why would it be unfair to assume that a product you are familiar with and have a strong dislike for is actually *worse* than another Ennie nominee you are not familiar with? After all, these nominations have been through an extensive vetting process already, so you know *something* about their quality.
Well, now we run into a problem...
Personally, I think the judges already have a lot of input, and we shouldn't have the voters depending upon the judges even more in making their votes.
However, given that you already know something about the product through the judges, it becomes even more unfair to assume that an unknown is better than a known. After all, the judges chose this product that you detest, didn't they? That means that the other products they chose could easily be equally detestable. Given that the judges made one "bad" choice, you shouldn't assume that all their others are good ones.
Why would it be more fair to assume that the unknown product is even worse than the one you know you detest?
It isn't as if it's that simple.
In the end, what we want is a system that allows reasonable abstention. That would be most fair. For that, the numeric ranking system is actually preferable, if you could depend upon the voters not to try to finesse the system.
Your approval voting only allows two options for dealing with unknowns - either they are as good as the best known, or they are as bad as the worst known. There are no other options. Neither is really fair, and your ranking of unknowns is taken into account at the same time and with equal weight as the ranking of knowns.
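The forced binary can be seen in a minimal sketch of an approval tally (candidate names are made up for illustration):

```python
# Under Approval Voting a ballot is just the set of approved
# candidates. Any candidate a voter leaves unmarked is counted
# exactly like one they actively disapprove of - so an unknown
# must be lumped in with either the best knowns or the worst.

def approval_tally(ballots, candidates):
    """Count one approval per marked candidate per ballot."""
    counts = {c: 0 for c in candidates}
    for approved in ballots:
        for c in approved:
            counts[c] += 1
    return counts

# A voter who likes A, detests B, and doesn't know C must either
# treat C as equal to A...
ballot_optimistic = {"A", "C"}
# ...or treat C as equal to B:
ballot_pessimistic = {"A"}
```

There's no ballot that says "C: no opinion, count me out for that one" - the tally only ever sees approved or not.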
Consider, though, that with IRV an abstention isn't quite as bad as you think, especially with a null or "no prize" option. IRV has leeway. If you know a given product and think it is good enough to deserve a prize, you get to vote for it. The system doesn't get to your abstentions (or even your low rankings) until after it has tried to count every single preference you placed above them. What you choose to do with the unknowns may never be seen at all, if the contest is decided before they are reached. Thus, opinions about unknowns are left out unless the race is particularly close.
And, in IRV, the voter still has the option to rank an unknown over a known, if they really want to. But if they rank that bad known last, the unknown can still sit fairly low too, rather than being counted as equal to the best thing out there. IRV allows some of the benefits of numeric ratings, does away with most of the vulnerability to finesse votes, and doesn't force voters to rate unknowns as equivalent to knowns.
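The counting behavior described above can be sketched like this (a simplified IRV count, assuming ballots are ranked lists and anything unranked is an abstention; real implementations handle ties and exhausted ballots with more care):

```python
# Simplified IRV sketch: each round, every ballot counts for its
# highest-ranked candidate still in the race; the trailing candidate
# is eliminated until someone has a majority. A voter's abstentions
# are simply never reached unless everything they ranked above them
# gets eliminated first.

def irv_winner(ballots, candidates):
    remaining = set(candidates)
    while len(remaining) > 1:
        counts = {c: 0 for c in remaining}
        for ranking in ballots:
            for c in ranking:
                if c in remaining:
                    counts[c] += 1
                    break  # lower preferences/abstentions not read
        total = sum(counts.values())
        leader = max(counts, key=counts.get)
        if counts[leader] * 2 > total:
            return leader  # majority of still-expressed preferences
        remaining.remove(min(counts, key=counts.get))
    return remaining.pop()

# Voters who never ranked C (they abstained on it) still influence
# the race through the candidates they did rank.
example = [["A", "B"], ["A"], ["B", "C"], ["C", "B"], ["C"]]
```

In the example, B is eliminated first; the ["B", "C"] ballot then transfers to C, and the ["A"] voter's silence about B and C is only consulted to the extent that their ballot simply runs out.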
Of course, this is more difficult to implement. In order to get the best of both worlds, you sometimes have to do a little more work.