


Insight Into Bias

Have you enhanced yourself recently?

[Image: Self-enhancement exercise: step on coffee table. Source: J. Krueger]

The man with insight enough to admit his limitations comes nearest to perfection. ~ J. W. von Goethe

In order not to be fooled by statistical data, it is therefore of utmost importance to control for measurement error and sampling error. [...] It is also very common wisdom. ~ K. Fiedler

Post tenebras spero lucem. [After darkness I expect the light.] ~ Cervantes, Don Quijote, Book 2, chapter LXVIII, with a nod to Job 17:12 and contemporary work on the regression effect.

[I wrote this essay with Patrick Heck.]

One way to study the mind is to see where it goes wrong. The study of visual illusions has greatly enhanced our understanding of visual perception and its many triumphs. Moving to social psychology, the study of bias becomes laden with moralistic overtones. Bias is bad, we are often told. Unbiased people are fair and good. Biased people make errors, particularly self-serving ones.

An offshoot of the study of bias is the study of people’s awareness of their own biases. When people are biased and know it, we are tempted to ask why they remain biased. Perhaps they really want to be biased, or perhaps they can’t help it. More intriguingly, there are unconscious biases, where there is no prima facie case for blaming people, but social psychologists often do it anyway, saying thou art supposed to know thy biases and cast them into the wilderness.

One prominent bias in social perception is self-enhancement. This bias is often shown as a difference between a self-judgment and a judgment of the average person in a reference group. People must be aware of this bias because they provide both judgments. If you say you are a safer driver than average, you're probably self-enhancing and you know it.

Another way of measuring bias is to ask people to judge only themselves on a positive trait and to have them judged by others who know them well. The average judgment by these others is then taken to be a reflection of reality. If the person’s self-judgment is more positive than this criterion, there is evidence for self-enhancement (arguably, this difference index reflects an error and not a bias, but many investigators use these terms interchangeably).

The self-observer method says nothing about people’s insight into their own bias. Hence, one might do a study to find out. In a recent article, Kathryn Bollich, a researcher at Washington University in St. Louis, and her colleagues collected self-judgments and observer judgments for a set of positive attributes — like intelligence and likability — and then asked the self-describers how biased they were in their descriptions of themselves. They even showed them their original self-ratings to jog their memories.

The result was a correlation of .45 between the discrepancy index of bias — self-judgment corrected for observer judgment — and the meta-cognitive insight rating. How might one explain this finding? Bollich et al. conclude that people are "achieving this insight by relying on a simple and accurate heuristic: the more positive their self-views are, the more likely they are to be positively biased." We agree. It's probably as simple as that. But the researchers press on to suggest that people know which traits they're biased about. We disagree. Differential “knowledge” about the relative size of the bias over traits follows from the simple heuristic just described: traits with the most extreme self-ratings are probably the traits with the largest true bias scores. This is the logic of statistical regression, which we all know but all too often cheerfully forget. Now that's a bias!
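The regression logic is easy to demonstrate. In the sketch below (our own illustration with made-up numbers, not Bollich et al.'s data), virtual respondents report their bias purely from the positivity of their self-ratings, and a sizeable correlation with the discrepancy index emerges even though no one has any genuine introspective access to their bias:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical numbers: each person has a true standing on a trait;
# self- and observer judgments track it with independent error.
true_trait = rng.normal(5.0, 1.75, n)
self_rating = true_trait + rng.normal(0.0, 1.0, n)
observer = true_trait + rng.normal(0.0, 1.0, n)

# Discrepancy index of bias: self-judgment corrected for observer judgment.
bias = self_rating - observer

# The heuristic: report your own bias purely from how positive your
# self-rating is -- no introspective access to the bias required.
insight = self_rating

r = np.corrcoef(insight, bias)[0, 1]  # comes out around .35 here
```

The point of the sketch is that extremity alone produces the meta-insight correlation; nothing in the code lets the virtual respondents see their own bias.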

How may we unbias people? Let’s try to make their self-judgments more accurate. Recall that Bollich et al. used a popular research paradigm, in which observer judgments are aggregated but self-judgments are not. Aggregation makes observer judgments more reliable and probably more valid as well. This is the well-known wisdom-of-the-crowd effect. Let us then allow the target persons to re-evaluate their own standing on these positive traits, average their self-judgments for each trait, and look again at the correlations of interest, i.e., the correlation between self-judgments and observer judgments and between self-judgments and meta-judgments of bias.

We had a pretty good idea of what would happen, but we ran a computer simulation in case you don’t believe us. We sampled virtual judgments that could range from 0 to 10. All means were 5.0 and all standard deviations were 1.75. The judgments were: first self (S1), second self (S2), observers (O), and insight (I). We then assumed the following statistical associations: S1 and S2 are correlated at .5; both S1 and S2 are somewhat accurate, i.e., they are each correlated with O at .5; and respondents rely heavily on the extremity of S1 when generating the I variable (r = .8). We further assumed that the insight judgment I has no intrinsic association with either S2 or O. Both correlations with these variables may then be estimated as the products of correlations already available. That is, the correlation between I and S2 is the product of the correlation between I and S1 and the correlation between S1 and S2 (.8 x .5 = .4). Likewise, the correlation between I and O is the product of the correlation between I and S1 and the correlation between S1 and O (.8 x .5 = .4).
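Readers who want to check our numbers can reproduce the setup. Here is a minimal reconstruction in Python/NumPy (our own sketch, not the original simulation code; it samples from a multivariate normal and, for simplicity, ignores the 0-to-10 bounds):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Assumed correlation structure; I's links to S2 and O are the
# products .8 x .5 = .4, as derived in the text.
R = np.array([
    [1.0, 0.5, 0.5, 0.8],   # S1: first self-judgment
    [0.5, 1.0, 0.5, 0.4],   # S2: second self-judgment
    [0.5, 0.5, 1.0, 0.4],   # O:  observer judgment
    [0.8, 0.4, 0.4, 1.0],   # I:  insight (meta-judgment of bias)
])
mean, sd = 5.0, 1.75
cov = R * sd ** 2  # all variables share the same standard deviation

S1, S2, O, I = rng.multivariate_normal([mean] * 4, cov, size=n).T

emp = np.corrcoef([S1, S2, O, I])  # empirical correlations, close to R
```

With a large sample, the empirical correlation matrix `emp` recovers the assumed matrix `R` to within sampling error.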

After computing M(S) as the average of S1 and S2, we found four new correlations, two of which are interesting and two of which are boring. The boring ones are the correlations between M(S) and S1 and between M(S) and S2. They were both .85. These correlations had to be high because S1 and S2 are part of M(S). Next, we saw that the correlation between M(S) and O was .55. This reflects a small increase in accuracy, since the correlation between S1 and O was .5. By averaging self-judgments we have, in other words, caught a glimpse of the wisdom of the crowd. Finally, we saw that the correlation between M(S) and I was .66, which is less than the correlation between S1 and I (which was .8).
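These simulated values can also be checked analytically. Under the stated assumptions, the correlation of any variable with the average of two others follows a standard aggregation formula; the small gaps between the analytic values below and the simulated .85, .55, and .66 are sampling noise:

```python
import math

# Standard result: the correlation of a variable X with the average of
# S1 and S2 is (r_XS1 + r_XS2) / sqrt(2 * (1 + r_S1S2)).
def corr_with_mean(r_x_s1, r_x_s2, r_s1_s2=0.5):
    return (r_x_s1 + r_x_s2) / math.sqrt(2 * (1 + r_s1_s2))

r_mean_o = corr_with_mean(0.5, 0.5)   # ~.58: M(S) with O  (simulated: .55)
r_mean_i = corr_with_mean(0.8, 0.4)   # ~.69: M(S) with I  (simulated: .66)
r_mean_s1 = corr_with_mean(1.0, 0.5)  # ~.87: M(S) with S1 (simulated: .85)
```

The analytic route confirms the pattern: averaging self-judgments nudges accuracy up while pulling the correlation with the insight rating down from .8.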

In short, by allowing respondents to re-evaluate and unbias themselves, we have raised their accuracy while simultaneously eroding their meta-insight into their own bias. This seems odd. How can one intervention increase one type of accuracy and decrease another? Perhaps we were being unfair when we did not allow our virtual respondents to also reconsider their meta-cognitive assessment of their own bias. Then again, perhaps we did not have to. If we assume that people would again use the simple and accurate heuristic Bollich et al. proposed, then the correlation between M(S) and a revised I would again be .8. In this case, we’d still have an increase in estimation accuracy without a corresponding increase in meta-accuracy.

It is not entirely clear what Bollich et al. make of their own data. Perhaps they agree with us that use of the extremity-implies-bias heuristic is all there is and that it could hardly be any other way. Alternatively, they might be inviting us to wonder how people can be so biased and yet so aware of it — and why they won't stop being biased. The title of their article, "Knowing more than we can tell," makes this alternative reading seem probable. We, however, don’t believe that the mystification of a simple and sufficiently explained finding enhances science much at all.

Bollich, K. L., Rogers, K. H., & Vazire, S. (2015). Knowing more than we can tell: People are aware of their biased self-perceptions. Personality and Social Psychology Bulletin, 41, 918-929.

Fiedler, K., & Krueger, J. I. (2012). More than an artifact: Regression as a theoretical construct. In J. I. Krueger (Ed.), Social judgment and decision-making (pp. 171-189). New York, NY: Psychology Press.
