I don't take most of that as given at all. Is there any evidence that people who score low (or at least lower than the 50th percentile) are less likely to succeed in med school?
Moreover, does the test information tell us anything that a reasonably intelligent person could not already conclude from looking at their transcripts? To me, looking at grades, courses taken, and the rigor of where they were taken is a far better way of assessing the same thing. I doubt that the MCAT weeds out many people who have a good academic record but then bomb this particular test for some reason.
This study concluded that, while the MCAT is not sufficient by itself and should be used in conjunction with analysis of undergraduate GPA, it is indeed a good overall predictor of success in med school, and a better one than undergraduate GPA alone. IOW, while using both is best, if you're only going to consider one, use the MCAT; if assigning weight to both when considering both, weight the MCAT more heavily.
http://medical-mastermind-community...ts-med-school-academics-better-than-uGPAs.pdf
This study, four years later, supports the conclusions of that one, and made additional findings that are being incorporated to improve the MCAT. (Certain topics will be de-emphasized as their relevance decreases; certain subsections found to correlate poorly with success will be radically revised; the gender bias, under which the test actually predicts med-school success better for women than for men, will be addressed; etc.)
http://journals.lww.com/academicmed...ive_Validity_of_Three_Versions_of_the.20.aspx
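To put the "weight both, but weight the MCAT more heavily" idea in concrete terms, here's a minimal sketch of the kind of composite an admissions office might compute. Everything here is hypothetical: the weights, the pool statistics, and the function names are made up for illustration; the studies above report predictive validity, not a formula.

```python
def zscore(x, mean, sd):
    """Standardize a raw score against the applicant pool."""
    return (x - mean) / sd

def composite(mcat, ugpa, pool, w_mcat=0.6, w_gpa=0.4):
    """Hypothetical weighted composite. The weights are illustrative
    only; w_mcat > w_gpa just reflects the studies' ordering."""
    m = zscore(mcat, *pool["mcat"])
    g = zscore(ugpa, *pool["ugpa"])
    return w_mcat * m + w_gpa * g

# Made-up pool statistics: (mean, sd) for each measure.
pool = {"mcat": (500, 10), "ugpa": (3.6, 0.3)}
print(round(composite(512, 3.8, pool), 2))   # -> 0.99
```

The z-scoring matters because the two measures live on different scales; the weights are the only part that encodes "MCAT counts more."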
And most doctors will tell you that they've forgotten most of what they learned in medical school. The point of standardized professional education is essentially to combat fraud (given what constituted a "doctor" before the Flexner report), which is a legitimate problem. However, I remain unconvinced that the standardized tests are really part of the solution.
I'm an attorney who is the son of an MD. We both have that same position. So did my law-school teachers, who said that they couldn't pass the bar without studying for it.
And the primary reason why goes back to the Dartmouth study: most of what we learned, we don't use in daily practice. (The secondary reason is that what we learned at that level has often changed in validity, completeness, and relevance; there's no need to know info that is no longer good.)
But again, as the Dartmouth study stated, because we have learned it once, we will find it easier to recall that information than someone seeking to learn it from scratch.
To put it in practical terms, if you're in a rural area where there is only one MD for a huge geographic area, you'd want someone who actually took a broad base of classes in med school rather than someone who just focused on a specialty. While neither may have ever performed a particular operation or treated a particular disease, the generalist will get up to speed much faster than the guy who never studied it at all.
How about nothing? I don't think colleges and professional schools would be unable to make admissions decisions without the tests, and I'm not convinced that their decisions would be any worse.
When you double (or more) the number of students they must consider for admission, simple statistics says the error rate goes up: more files, more near-indistinguishable borderline cases, and the same amount of reviewer attention to go around. My dad served on the admissions board for Tulane med school for a while, pre-MCAT. It was a nightmare of looking at huge stacks of applications that were often essentially indistinguishable. MCATs, and similar tests, give you another evaluative tool, one proven to work (see above).
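To see why, here's a quick back-of-the-envelope simulation, assuming each file gets read with some fixed amount of noise and the class size stays the same. All the specifics (pool sizes, noise level, the function name) are made up for illustration, not taken from any study.

```python
import random
import statistics

def admissions_error(n_applicants, n_seats, noise_sd, trials=500):
    """Average fraction of seats given to applicants outside the true
    top group, when the committee ranks on a noisy read of each file."""
    errors = []
    for _ in range(trials):
        true_quality = [random.gauss(0, 1) for _ in range(n_applicants)]
        observed = [q + random.gauss(0, noise_sd) for q in true_quality]
        truly_best = set(sorted(range(n_applicants),
                                key=lambda i: true_quality[i],
                                reverse=True)[:n_seats])
        admitted = set(sorted(range(n_applicants),
                              key=lambda i: observed[i],
                              reverse=True)[:n_seats])
        errors.append(len(admitted - truly_best) / n_seats)
    return statistics.mean(errors)

# Fixed class of 100 seats, same per-file reading noise, growing pool.
for pool in (200, 400, 800, 1600):
    print(pool, round(admissions_error(pool, n_seats=100, noise_sd=0.5), 3))
```

The point isn't the exact figures; it's that with a fixed class and noisy reads, more files means more mistakes at the margin, which is exactly where another evaluative tool helps.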
The other major point of these tests, professional licensure and certification, is largely a peer evaluation anyway; I don't see that cutting out the exams would really change the professions that much.
The Bar exams got their name because their intent is to "bar" the unqualified from practice. They're like any other standardized test out there, just harder. In my state, they recently added a practical component; that's not the norm yet, AFAIK.
But what is that peer evaluation, really? It's assessing your skills against a known standard, just like the standardized tests claim to do... sometimes without the safeguards of an anonymous standardized test, and at greater expense.
The Texas practical for dentists, for instance, assesses your skills on live patients. That means a certain number of people have to volunteer to have their mouths poked around in by unlicensed, unproven dentists. Without weeder tests, you'd have to increase the number of volunteers... and examiners. There simply aren't enough qualified dentists willing to take the time to evaluate the prospective licensees; they have their own patients to treat. Then there are the costs of the infrastructure: additional spaces in which to test the applicants, each needing tools, dental chairs, water, sanitization, x-rays, etc.
Furthermore, unlike the anonymous standardized tests, anyone can be failed at any time during the practical, and you will not be told why you got the tap on the shoulder. It could be because you were incompetent. It could be because they know you're from out of state. It could be because the examiner is a racist, or because he knows you dated his cousin. You'll never know. Which means, as you sit out a year for the next test date, you won't know what areas you need to study harder in order to pass... or whether you should even bother.
The tests aren't perfect, no. But they do help.