Ovinomancer
No flips for you!
At this point the moral philosopher in me just wants to crack both your heads together.
Ratskinner: "The greatest good to the greatest number" can't even tell you unambiguously how to slice a birthday cake.
Ovinomancer: Psychologists, marketers, and pollsters scientifically define and quantify subjective preferences every day.
To step a bit further than KB's post, there are a lot of problems in social research. Data collection is often biased by collection method, sample composition, sample size, sample time, and sample location. Often, since new collection is hard, data from different surveys is combined, which taints everything.

But even if all the data could be collected perfectly, there are still issues. Primarily: how do you measure the data? If you didn't put in a Likert scale to begin with, you'll have to come up with some way to categorize and assign values to the data, which is again open to bias. In fact, bias in that step is unavoidable, because the very nature of the effort is subjectively valuing data. If you did use a Likert scale, then you have ordinal* data, not ratio data. The difference between a 5 and a 10 on a Likert scale is undefined, and it's certainly not the case that a 10 is twice a 5. In either case, though, once numbers are on the page the assumption is that you can do math with them. You can't -- the math doesn't mean anything. In the former case, you're doing math on data you proxied yourself: you're doing math on your own subjective preferences, not the participants'. In the latter, you can't do arithmetic on ordinal data at all.
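To make the ordinal point concrete, here's a toy sketch (Python, with made-up Likert responses): the mean of ordinal data depends entirely on the arbitrary numbers you assign to the labels, and a perfectly legitimate monotone recoding of the same categories can flip which group looks "higher."

```python
# Toy illustration (invented data): means of Likert responses depend on
# the arbitrary spacing you assign to the category labels.
group_a = [2, 3]   # responses as 1-5 Likert categories
group_b = [1, 5]

def mean(xs):
    return sum(xs) / len(xs)

# Coding 1: take the category labels at face value (1, 2, 3, 4, 5).
print(mean(group_a), mean(group_b))   # 2.5 3.0 -> B looks higher

# Coding 2: a different monotone coding of the SAME ordered categories.
# The order 1 < 2 < 3 < 4 < 5 is preserved -- which is all ordinal data
# actually promises -- but the spacing differs.
recode = {1: 0, 2: 10, 3: 11, 4: 12, 5: 13}
print(mean([recode[x] for x in group_a]),   # 10.5
      mean([recode[x] for x in group_b]))   # 6.5 -> now A looks higher
```

Both codings respect the ordering, so nothing in the data itself tells you which set of means to believe. That's the sense in which arithmetic on ordinal data "doesn't mean anything."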
An even worse sin in the social sciences is p-hacking. They'll take a huge data set and start crunching through various recipe-book statistics (another yuck) until some wee p-value pops out, and then report this result as true. But ANY large volume of data will ALWAYS spit out some wee p-value somewhere: run enough tests and some will cross the p < 0.05 threshold by chance alone. Further, in terms of the scientific method, this is, at best, step one -- observation. You haven't even reached a hypothesis to test yet! But quite often this is where the research is left: a p-hacked result is found and presented as true without any of the actual scientific process taking place.
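The multiple-comparisons problem behind p-hacking is easy to demonstrate. A quick sketch (Python; simulated noise, with a simple permutation test standing in for the recipe-book stats): run enough comparisons on data with no real effect in it, and "significant" results appear on schedule.

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def perm_pvalue(a, b, n_perm=200):
    """Two-sided permutation test on the difference of group means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm

# 200 "studies", each comparing two groups drawn from the SAME
# distribution -- so every true effect is exactly zero.
significant = 0
for _ in range(200):
    a = [random.gauss(0, 1) for _ in range(20)]
    b = [random.gauss(0, 1) for _ in range(20)]
    if perm_pvalue(a, b) < 0.05:
        significant += 1

print(significant)  # some "significant" findings from pure noise alone
```

Report only the comparisons that cleared the threshold and you have a stack of publishable-looking "effects" manufactured entirely from noise.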
The social sciences have major issues with how they do business right now (medicine shares many of them). Fundamentally, though, they can't avoid many of these issues, because they're trying to work with data that's inherently subjective to begin with. There's some good work -- psychology has had some successes, for instance -- but even there any given approach is at best a 50/50 proposition, and most psychologists bring multiple approaches to find which works best on a given subject. That's because you can't just measure and model people's subjective beliefs and wants. You can't measure happiness. Setting aside that you can't even define it, no matter what definition you use, people will have a subjective opinion of where they stand in relation to it. Dressing things up in statistics does not science make.
*Ordinal scales do not have defined steps between numbers. An example is race finishes. If you have a race of 10 people, the runners will finish in ordinal order -- 1st through 10th. But armed with that ordinal order, you cannot say anything about how fast they ran the race, only who was faster or slower than whom. You can't say that the 1st place runner was 10x faster than the 10th place runner, only that there were 8 other runners who were slower than 1st and faster than 10th. You can't meaningfully average this data, nor run the usual statistics on it. Yet in social science research, stats are often run on ordinal data anyway, because once you have a number, people assume math will work on it -- after all, math uses numbers.
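The race analogy fits in a few lines (Python, invented times): two races with wildly different time gaps produce identical finishing orders, so the orders alone can never recover the gaps.

```python
# Invented finishing times (seconds) for two three-runner races.
photo_finish = [9.8, 9.9, 10.0]    # everyone within 0.2 seconds
blowout      = [9.8, 30.0, 120.0]  # huge gaps between runners

def finish_order(times):
    """Return each runner's place (1st, 2nd, ...) given raw times."""
    return [sorted(times).index(t) + 1 for t in times]

print(finish_order(photo_finish))  # [1, 2, 3]
print(finish_order(blowout))       # [1, 2, 3] -- same ordinal data
```

Mapping times to places throws the magnitudes away. Once you hold only the places, nothing licenses averaging them or claiming 1st was some multiple faster than 3rd.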