
Anchor What

It’s not a place in Cambodia.

Froth at the top, dregs at bottom, but the middle excellent. – Voltaire, anticipating the high-low anchor gambit

Παν μέτρον άριστον (Moderation in all things is best). – Cleobulus of Lindos, anticipating Voltaire

How many inhabitants did Angkor Wat have at its peak? Few have any idea. Some have heard that Angkor Wat is a famous archeological site in Cambodia, and some have visited it. These glimmers of recognition don’t help much when trying to estimate its erstwhile population size. Presumably, Angkor Wat was large half a millennium ago, but what does this mean numerically? Surely, the population was greater than zero, but what is the minimum number that we can agree is definitely too large? Might it be 1 million?

Wikipedia, hopefully relying on credible sources, says that the population of Angkor Wat may have been around 1 million at some point. For my part, I would have made a lower guess. Why would one even want to make a guess when the uncertainty is so great, and what cues can one rely on when pressed to provide an estimate?

Of the judgmental heuristics that Tversky & Kahneman (e.g., 1974) introduced to psychology, anchoring occupies a special place (Krueger, 2010). An anchor is an estimate that is provided to the judge with full knowledge that this particular number is wrong. Having confirmed that this number cannot be correct, the judge proceeds to make an estimate she thinks has a chance of being right. The critical finding is that these estimates are correlated with the very anchors the judges have agreed to ignore. In the typical study, some judges receive a high anchor, while others receive a low anchor. The typical result is that the average estimates made by the former are higher than the average estimates made by the latter; hence the correlation.

In Kahneman’s (2011) heuristics-and-biases approach to judgment, the central psychological claim is that people attend to the focal stimulus, that is, the stimulus that happens to be right in front of them. They consider peripheral stimuli, samples and distributions, or counterfactual stimuli only with great effort. The focal stimulus engages the lazy but efficient intuitive system, whereas everything else must be ground out with diligence and toil. An anchor is a bad focal stimulus par excellence because the very people who use it agree that they should not, and perhaps remain unaware that they have. Of course, focal stimuli can be excellent cues in judgment, that is, when they happen to represent the quantity to be estimated. Tversky & Kahneman showed that anchors can bias judgment, not that they always do. Their analysis is only concerned with the difference between the mean judgment given a high anchor and the mean judgment given a low anchor. Any inference about accuracy is indirect. If these means differ, at least one of them must be incorrect (Furnham & Boo, 2011).

When one imagines what anchoring does to judgmental accuracy, one might begin by thinking of a true value that lies somewhere in the middle of a spectrum or range. There can be a middle only if the range has clear bounds. Things that can be counted have no mathematical upper bound. The percentage scale, however, is well behaved in this way. It avoids the hand-wringing and arbitrariness of deciding what the largest anchor might be. Suppose that without anchors, mean judgments approximate the true value. The variation around this mean is judgmental error. A high anchor will displace the distribution of estimates upward so that the mean estimate is now greater than the true value, but without the shape of the distribution having changed. A low anchor will displace the mean estimate downward. In this scenario, anchors do a lot of damage. They turn a true-value-plus-random-error model into a true-value-plus-random-error-plus-systematic-bias model.
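To make the displacement concrete, here is a minimal simulation sketch in Python. All the numbers (a hypothetical true value of 40%, the noise level, and the size of the anchor’s pull) are illustrative assumptions of mine, not values from the literature:

```python
import random

random.seed(1)

TRUE_VALUE = 40.0    # hypothetical true value, in percent (assumed)
NOISE_SD = 8.0       # random judgmental error around the truth (assumed)
ANCHOR_SHIFT = 10.0  # systematic displacement produced by an anchor (assumed)

def estimate(shift=0.0):
    """One judge's estimate: truth plus random error plus anchor bias."""
    return TRUE_VALUE + random.gauss(0, NOISE_SD) + shift

n = 10_000
unanchored = [estimate() for _ in range(n)]
high_anchored = [estimate(+ANCHOR_SHIFT) for _ in range(n)]
low_anchored = [estimate(-ANCHOR_SHIFT) for _ in range(n)]

mean = lambda xs: sum(xs) / len(xs)
print(f"unanchored mean:  {mean(unanchored):.1f}")     # ~40: unbiased
print(f"high-anchor mean: {mean(high_anchored):.1f}")  # ~50: shifted up
print(f"low-anchor mean:  {mean(low_anchored):.1f}")   # ~30: shifted down
```

The shape of each distribution stays the same; only its location moves – that moving term is the systematic bias in the model above.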

It is certainly not the case, however, that without anchoring, mean estimates are always accurate. They might be accurate either by dumb luck or because the judges have relevant knowledge. And if they have such knowledge, they will be less vulnerable to the anchor’s biasing effect.

The question of interest is whether, or under what conditions, anchoring reduces accuracy, and whether it might actually increase accuracy. To see the effects of anchoring more clearly, suppose each judge makes two estimates, one after having received a high anchor, and another after having received a low anchor. When all estimates – the number of judges times 2 – are plotted, the distribution will be wider than if each judge makes only one estimate. The consequence is that the average error, that is, the average distance between an estimate and the true value, is larger. What if, however, each judge now uses the average of the two estimates as the final guess? Can the use of two anchors stir the wisdom of the crowd within (Herzog & Hertwig, 2009; Krueger & Chen, 2014)?
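Continuing the sketch with the same assumed numbers, one can compare the average error of the pooled raw estimates (two per judge) with the average error of the within-judge averages:

```python
import random

random.seed(2)
TRUE, NOISE, SHIFT, N = 40.0, 8.0, 10.0, 10_000  # same assumptions as above

def avg_error(estimates):
    """Mean absolute distance from the true value."""
    return sum(abs(e - TRUE) for e in estimates) / len(estimates)

# Each judge gives one estimate after a high anchor, one after a low anchor.
judges = [(TRUE + random.gauss(0, NOISE) + SHIFT,
           TRUE + random.gauss(0, NOISE) - SHIFT)
          for _ in range(N)]

pooled = [e for pair in judges for e in pair]    # all 2N raw estimates
averaged = [(hi + lo) / 2 for hi, lo in judges]  # N within-judge averages

print(f"average error, pooled raw estimates:  {avg_error(pooled):.2f}")
print(f"average error, within-judge averages: {avg_error(averaged):.2f}")
```

Under these deliberately symmetric assumptions, the two biases cancel and the averaging even washes out some random error; whether real judges respond this symmetrically to the two anchors is the empirical question the following paragraphs take up.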

Consider first a case in which nothing happens. Suppose the true value is 50% and the mean of the unanchored estimates is also 50%. The mean estimate following a high anchor of 100% is 60%, and the mean estimate following a low anchor of 0% is 40%. The average of these two means is 50%, and nothing has changed. If each judge is affected the same way by the anchors, the final distribution of averaged anchored estimates is the same as the distribution of unanchored estimates. The average estimate is accurate, and the errors have gotten neither larger nor smaller on average. Using both anchors has not reduced accuracy, although it has not increased it either.

Now consider true values other than 50%. The more extreme a true value is, the more likely it is to be more extreme than the mean of the unanchored estimates. This relative extremity of truth and estimate is critical. The use of two anchors will move the average of the anchored estimates toward 50%. This means that the distance to the true value will increase for extreme true values, and that it will decrease for moderate true values.

Whether double anchoring improves or degrades accuracy depends on the distribution of true values, which is under the experimenter’s control. If one wished to demonstrate the hazards of anchoring – even when both anchors are used – one would select extreme true values. Final judgments, obtained from averaging the estimates in response to both anchors, will be too regressive, that is, too close to 50%. If one wished to demonstrate the benefits of anchoring, one would select mid-range true values. Final judgments will then be more accurate than unanchored estimates because of their regressiveness, that is, because they are closer to 50%.
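This dependence on the chosen true values can be sketched with one more simulation. Here I assume a simple linear-pull model – each anchor drags the base estimate a fixed fraction of the way toward itself, so that averaging over the anchors 0% and 100% amounts to a pull toward 50% – with an illustrative pull weight:

```python
import random

random.seed(3)
W = 0.3      # assumed fraction each anchor pulls the estimate toward itself
NOISE = 8.0  # judgmental noise, as before
N = 10_000

def double_anchored(true_value):
    """Average of the two anchored estimates under a linear-pull model:
    anchors of 0 and 100 average to 50, so the net pull is toward 50."""
    base = true_value + random.gauss(0, NOISE)
    return (1 - W) * base + W * 50.0

for tv in (95.0, 55.0):  # one extreme, one mid-range true value
    err_plain = sum(abs(random.gauss(0, NOISE)) for _ in range(N)) / N
    err_anchored = sum(abs(double_anchored(tv) - tv) for _ in range(N)) / N
    print(f"true value {tv}: unanchored error {err_plain:.1f}, "
          f"double-anchored error {err_anchored:.1f}")
```

With these particular numbers, double anchoring roughly doubles the error when the true value is 95% and trims it when the true value is 55% – exactly the asymmetry described above.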

To avoid biasing the outcome, one might sample true values from a flat distribution, which means that any value between 0% and 100% is equally likely. And this, in all fairness, one should tell the judges. In the extreme scenario of total ignorance, mean estimates will be uncorrelated with true values. In this case, a gain in accuracy thanks to the regressive effect of averaging anchors is as likely as a loss in accuracy. The gains and the losses cancel each other out.

If, however, judges have partial knowledge in the domain of estimation – if they didn’t, what would be the point of the exercise? – then they can observe whether the average of their two post-anchor estimates is greater or smaller than 50%. If this average is very high, say 90%, they may conclude that the effect of the low anchor was too weak, and that they should regress some more. If this average is very moderate, say 60%, they may conclude that the effect of the low anchor was probably too strong, and that it compromised the effect of partial knowledge. If there is partial knowledge, the final estimate is likely on the right side of 50%. If so, an estimate of 60% is probably too regressive.

We suspect that the use of both anchors can improve the accuracy of average estimates when unanchored estimates are extreme; being extreme, these estimates are probably more extreme than the true values. Averaging anchored estimates improves accuracy by reducing extremity. The use of both anchors has a comparatively small effect if unanchored estimates are moderate; being moderate, these estimates might already be too regressive. Averaging anchored estimates reduces accuracy by making final estimates even more regressive. The accuracy gains in the case of extreme unanchored estimates are likely larger than the accuracy losses in the case of moderate unanchored estimates. The degree of regression to the mean is proportionate to the original distance from that mean (Fiedler & Krueger, 2012).
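A one-line derivation under the linear-pull model sketched earlier (my assumption, not a formal claim from the cited work) shows why: if each anchor pulls the base estimate a fraction w toward itself, the average over the anchors 0% and 100% is (1 − w) × base + w × 50, so the shift toward 50% equals w × (base − 50) and is directly proportional to the base estimate’s distance from 50%.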

The lesson of this rather dry exercise is that the damage done by one anchor can be more than compensated for by the benefits brought by the other (see also Krueger, 2012, for the use of anchoring in efforts to reduce the base-rate fallacy). A judge may therefore wish to generate estimates using both anchors and average the results. However, the mind is not pure, a point Kahneman stresses. If people have trouble ignoring a flagrantly false anchor value, they may not be able to ignore their own previous estimates and the earlier anchor when proceeding to the second round of estimation. Order effects may result; if they cannot be controlled in the moment, they can be mitigated over different judgment tasks and judges.

You are now in a position to enjoy playing with anchors and estimates. Here are some prompts.

[1] What percentage of the population of Greece resides in Athens?

[2] What percentage of deaths occur in hospitals?

[3] What percentage of people are happier with life than you are?

[4] What is the probability that Joe Biden will run for president and share the ticket with a person who is not a white male?

Notice the increase in uncertainty over the questions. The answer to [1] you may look up. Answers to [2] may also be found, but they will be estimates rather than true values. With [3] it gets pretty murky, but we may hope that the use of two anchors will humble the most extravagant self-enhancers. [4] has no true answer because it refers to a unitary event. Yet, the use of two anchors might help rein in overconfidence in your political forecasts.

Returning from the comparatively well-behaved world of percentages to countable quantities, we wonder how a researcher might go about selecting anchors. The low anchor may always be zero or something near it, but what about the high anchor? With some pilot testing, we can find the minimum number agreed to be too large. This number, and all numbers larger than it, are mathematically the same in that they are ‘too large.’ Psychologically, however, there is a difference. Some numbers are so large that they not only strike us as incorrect but also as absurd. Before asking us to estimate the population of Angkor Wat, the researcher can provide a low anchor of 0 and a high anchor of either 5 million or 5 billion. Both of these high numbers must be wrong, but only the latter is bizarre. The attentive subject will notice that the choice of the high anchor carries information. Perhaps the experimenters don’t even realize it, but the high anchors they provide will vary with the true value that they know but the subjects need to estimate. If the task is, for example, to estimate the number of monks over the age of 70 at Angkor who are also lactose intolerant, even an anchor of 5 million would seem bizarre. The high anchor for a countable quantity is thus a tell – a problem the percentage task avoids.

Kahneman and colleagues missed an opportunity to give us a heuristic that can help instead of wreaking havoc. To wit, an ignorant judge might do well by dividing the high anchor by at least 2, or better yet, by taking its square root – until, of course, the experimenters get wise to this strategy. At the low end, experimenters who use 0 disable any such strategy.
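As a back-of-the-envelope illustration of the two shrinking strategies, using the anchor values from the Angkor example above:

```python
import math

# Toy comparison of the two "shrink the high anchor" strategies just
# mentioned; the anchor values come from the Angkor example above.
for high_anchor in (5_000_000, 5_000_000_000):
    print(f"high anchor: {high_anchor:>13,}"
          f" | halved: {high_anchor // 2:>13,}"
          f" | square root: {round(math.sqrt(high_anchor)):>7,}")
```

Note how strongly the square root compresses the experimenter’s choice: the two anchors differ by a factor of 1,000, but their square roots differ only by a factor of about 32. Whether so drastic a compression helps, of course, depends on the quantity being estimated.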

References

Fiedler, K., & Krueger, J. I. (2012). More than an artifact: Regression as a theoretical construct. In J. I. Krueger (Ed.), Social judgment and decision-making (pp. 171-189). New York, NY: Psychology Press.

Furnham, A., & Boo, H. C. (2011). A literature review of the anchoring effect. Journal of Socio-Economics, 40, 35-42.

Herzog, S. M., & Hertwig, R. (2009). The wisdom of many in one mind: Improving individual judgments with dialectical bootstrapping. Psychological Science, 20, 231-237.

Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.

Krueger, J. I. (2010). Judgmental drag. Psychology Today Online. https://www.psychologytoday.com/nz/blog/one-among-many/201012/judgmenta…

Krueger, J. I. (2012). Anchoring base rates. Psychology Today Online. https://www.psychologytoday.com/us/blog/one-among-many/201204/anchoring…

Krueger, J. I., & Chen, L. J. (2014). The first cut is the deepest: Effects of social projection and dialectical bootstrapping on judgmental accuracy. Social Cognition, 32, 315-335.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
