Christian Persecution vs Persecuted Christians

Status
Not open for further replies.
I agree that people have racial biases, but how is it measurable?

Statistically - while this study is from 2003, it demonstrates one methodology that could be used: Send out a bunch of resumes that are *identical*, except for the name at the top. Give some "white" names, and others names more likely associated with various racial minorities. See which ones get more callbacks.

http://www.nber.org/digest/sep03/w9873.html
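As a rough sketch of how callbacks from such an audit could be compared statistically, here is a standard two-proportion z-test with purely hypothetical counts (the function name and the numbers are mine, not the study's):

```python
import math

def callback_gap_z(calls_a, n_a, calls_b, n_b):
    """Two-proportion z-test: is the callback-rate gap bigger than chance?"""
    p_a, p_b = calls_a / n_a, calls_b / n_b
    pooled = (calls_a + calls_b) / (n_a + n_b)   # callback rate under "no gap"
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts, NOT the study's actual data:
# 240/2500 (9.6%) callbacks for one set of names vs 160/2500 (6.4%) for the other.
z = callback_gap_z(240, 2500, 160, 2500)
print(round(z, 2))  # 4.17, far beyond the ~1.96 threshold for p < .05
```

A gap that size across thousands of otherwise-identical resumes is what lets such audits claim the name itself is doing the work.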

You can do the same for traditionally male and female names:

"Moss-Racusin wanted to figure out if faculty at academic institutions, despite their training in conducting scientifically objective research, held implicit gender biases that were disadvantaging women who were pursuing STEM careers.

In their study, Moss-Racusin and her colleagues created a fictitious resume of an applicant for a lab manager position. Two versions of the resume were produced that varied in only one, very significant, detail: the name at the top. One applicant was named Jennifer and the other John. Moss-Racusin and her colleagues then asked STEM professors from across the country to assess the resume. Over one hundred biologists, chemists, and physicists at academic institutions agreed to do so. Each scientist was randomly assigned to review either Jennifer or John's resume.

The results were surprising—they show that the decision makers did not evaluate the resume purely on its merits. Despite having the exact same qualifications and experience as John, Jennifer was perceived as significantly less competent. As a result, Jennifer experienced a number of disadvantages that would have hindered her career advancement if she were a real applicant. Because they perceived the female candidate as less competent, the scientists in the study were less willing to mentor Jennifer or to hire her as a lab manager. They also recommended paying her a lower salary. Jennifer was offered, on average, $4,000 per year (13%) less than John."

http://gender.stanford.edu/news/2014/why-does-john-get-stem-job-rather-jennifer
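As a quick back-of-envelope check, the two figures in the quote (a $4,000 gap that is 13% of John's offer) jointly pin down the average offers; this is only arithmetic on the quoted numbers, not the study's reported exact figures:

```python
# Quoted gap: Jennifer offered $4,000/year (13%) less than John.
gap_dollars = 4_000
gap_fraction = 0.13

john_offer = gap_dollars / gap_fraction       # about $30,769
jennifer_offer = john_offer - gap_dollars     # about $26,769
print(round(john_offer), round(jennifer_offer))  # 30769 26769
```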
 



Issues with IATs:
1) They don't measure anything directly -- it's all indirect measurement of other things that are then assumed to be meaningful measurements of the thing you actually want.

1a) The measurement taken is the time difference between different associations, with the assumption that a longer time to associate indicates a negative attitude.

1b) However, there's no objective benchmark here. Instead, the split between positive and negative associations comes from a formula that takes the participant's mean response time and calculates the ranges within which a response is scored as positive or negative; responses that don't meet the thresholds are discarded. This means the positive/negative scorings are entirely relative to each respondent -- each person's score is calculated against thresholds determined after the fact from that respondent's own response times.

1c) What does that mean? It means you can only attempt to compare responses after making at least one full adjustment to the data via a model. What you're comparing is no longer the collected data but a model of the collected data. The conversion introduces uncertainty, but that uncertainty is ignored in later computations and comparisons -- it is, in effect, hidden uncertainty in the model. So any result you get, besides being based on a model rather than the data, will always look more certain than it should.

2) The test-to-test reliability for the same subjects averages out to about .5. That's coin-toss reliability. Some tests reportedly do better, some worse, but every measurement fully ignores the hidden uncertainty described in 1c above.

So, no, that doesn't really measure bias; it measures response time, which is then modeled in a way that assumes it reflects bias. It has a spotty track record, and even its more generous reviews against statistical standards find that it has a poor record of reliability. Better than some alternatives, but still poor.
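For concreteness, the scoring pipeline sketched in 1a-1c resembles the published D-score family of algorithms. Here is a minimal, simplified sketch (latency cutoff and respondent-relative scaling only; real scoring procedures add error penalties and more filtering rules, so treat this as an illustration, not the actual formula):

```python
import statistics

def d_score(compatible_ms, incompatible_ms, cutoff_ms=10_000):
    """Simplified IAT-style D score.

    Latencies above the cutoff are discarded, then the difference in mean
    latency between the two pairing blocks is scaled by the pooled standard
    deviation of the respondent's own latencies -- which is why each score
    is relative to that respondent rather than to any objective benchmark.
    """
    comp = [t for t in compatible_ms if t < cutoff_ms]
    incomp = [t for t in incompatible_ms if t < cutoff_ms]
    pooled_sd = statistics.stdev(comp + incomp)
    return (statistics.mean(incomp) - statistics.mean(comp)) / pooled_sd

# Illustrative latencies in milliseconds (made up):
print(d_score([600, 650, 700, 620], [900, 950, 1000, 870]))
```

Note that the output is a ratio of the respondent's own times to the respondent's own spread; nothing in the number itself says "bias" -- that interpretation is supplied by the model, which is exactly the objection above.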
 


The resume study above was used in a different discussion (I'm having trouble locating it; probably a fault in my mental filing system) that opined that the difference wasn't racially based, but based on a cultural belief that people with non-standard names are less likely to be a cultural fit for the company and/or may not perform as well. Arguably splitting hairs, but there you go.

But to the point of measurement, I will concede that you can demonstrate an effect, but I still disagree that it was measured.
 


Talk about reframing racism.

the difference wasn't racially based, but based on cultural belief (racial prejudice) that people with non-standard (non-white-sounding) names are less likely to be a cultural fit (smell weird) to the company and/or may not perform as well (be lazy).
 

That very study was used in a different discussion (having trouble locating it, probably a fault in my mental filing system) that opined that the difference wasn't racially based, but based on cultural belief that people with non-standard names are less likely to be a cultural fit to the company and/or may not perform as well. Arguably, perhaps splitting hairs, but there you go.

Arguably splitting hairs? Dude, "a cultural belief that people with different names are less likely to fit in or perform," is *TEXTBOOK* racism. If that's a real consideration, the root issue is not that the new person won't fit in, but that you've fostered an environment where fellow human beings are not considered as equals.

You want me to go back to the UN definition of racism?

"...any distinction, exclusion, restriction, or preference based on race, colour, descent, or national or ethnic origin that has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise, on an equal footing, of human rights and fundamental freedoms in the political, economic, social, cultural or any other field of public life."

How does, "he/she is of another national or ethnic origin (as guessed by name, without even seeing them), and so we should exclude him or her" not fit in this definition?

Note that the effect can be seen for women, as well as minorities. Is, "Well, she's a woman, so she won't fit in or won't perform as well," somehow not sexism?

If the latter is sexism, how is the former not also an -ism?
 

Oh, dear me, no. You've just committed the sin of reification. Statistics are just a model, and are not the data. The data have no mean; your model of the data has a mean. That may seem trivial, but it's critical to understanding that you fool yourself if you ever believe statistics hold any real truth. They don't; they're just a model of the real world that is occasionally useful. Cue the old saw: all models are wrong, but some are useful.


If you draw conclusions from statistics, you are lying to yourself. You may be accidentally correct, but that's it. Statistics has nothing to do with causation. To draw correct conclusions you need to go look at causation, and you cannot do that with statistics. Stats can be useful to point you toward some possible causes, or to see if your hypothesized cause holds water, but that's the extent of their use. We, as in the Western world, seem to hold stats in much higher regard than they deserve -- just another imperfect tool in the box that shouldn't substitute for thinking.

Please reference post# 481 for response.
 

Look, you asked a question, and I gave you an answer. If all you were going to do was automatically reject the answers given to you, you should state that upfront so people don't waste their time thinking you are interested in actually discussing a topic.
No, you're right. I asked a rhetorical question and failed to provide my answer for it. This is a good example of why I try to do that. My bad.
 

Oh, dear me, no. You've just committed the sin of reification. Statistics are just a model, and are not the data. The data have no mean, your model of the data has a mean.

Um, no.

The data has a mean. The data is a sub-population for which you have (near) perfect knowledge of the things you've measured, and you can certainly compute a mean from it. I say near perfect because there's always some measurement and sampling error.

The real world also has a mean. You just don't know what it is, unless you measure the entire population in question.

Your model is the thing that gets you to think the data's mean and the real world's mean are related.
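The three claims above (the sample has a mean, the population has a mean, and a model links them with quantified uncertainty) can be made concrete with a small stdlib simulation; every number here is made up purely for illustration:

```python
import random
import statistics

random.seed(1)

# "Real world": a population we could in principle measure exhaustively.
population = [random.gauss(100, 15) for _ in range(100_000)]
true_mean = statistics.mean(population)       # the population's own mean

# "The data": a sample, which has a perfectly well-defined mean of its own.
sample = random.sample(population, 400)
sample_mean = statistics.mean(sample)

# The model step is the claim that the two means are related, with the
# uncertainty quantified by the standard error of the sample mean.
se = statistics.stdev(sample) / len(sample) ** 0.5
print(sample_mean, true_mean, se)
```

Both means exist independently of any model; what the model adds is the bridge between them and an honest error bar on that bridge.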


If you draw conclusions from statistics, you are lying to yourself. You may be accidentally correct, but that's it. Statistics has nothing to do with causation. To draw correct conclusions you need to go look at causation, and you cannot do that with statistics.

Again, I have to say no. Statistics are like a microscope - they give you a way to see the state of a system. There is no way to "look for causation" without a way to see the state of the system you are looking at! Just like with a microscope, there's some distortion when you take an image - the image is not the thing itself - but you can minimize and correct for much of that.

One should not look at statistics from only one experiment and then claim to know the cause for certain, just as you don't take one picture of a cell with a microscope and say you've found the cause of a biological effect. But then, a smart person who isn't a biologist or a sociologist doesn't decide on their own that they know what's happening; they turn to experts who know what they are doing.

When we discuss here, and we bring up statistics, we are being demonstrative, displaying some support, and referring to people who are (hopefully) experts.
 

No, you're right. I asked a rhetorical question and failed to provide my answer for it. This is a good example of why I try to do that. My bad.
To be honest, what's become clear is that you don't seem to understand measurement when it comes to a variable you can't directly observe. It's interesting. You have this very linear logic process. It's apparent when you speak about baselines and measurements. It's also very apparent when you attempt to discuss the law.
 
