


Who Decides: You, the Algorithm, or Both?

Why we should collaborate more deeply with artificial intelligence.

Key points

  • Human bias, sluggishness, and inaccuracy often distort decision making, while AI poses the opposite risks: going too far, too fast, and overreacting.
  • Collaborative decision making by humans and AI could simultaneously look too near and too far, go too fast and too slow, and both underreact and overreact.
  • Studies show that people misattribute features of AI to themselves. AI could thus produce an illusion of control and a false sense of self-efficacy.

Artificial intelligence (AI) is supercharging decision making. Anyone with a smartphone can process vast amounts of information and make fast decisions at the tap of an icon. People can shop, invest, and choose friends, while companies use AI to select markets, set prices, and hire staff. The potential benefits are vast and growing. But what is really happening here, and who is in control? How can we collaborate more effectively with AI in decision making?

A related problem is that decision environments are increasingly rich and dynamic. We are flooded with information and options. There are simply more decisions to be made, which is good news overall. However, it also means that people and organizations become heavily reliant on artificial support. Especially when decisions are routine, or when they are complex and must be made quickly, humans often delegate full responsibility to AI.

The Challenge of Human-Machine Collaboration

Problems therefore emerge as humans and AI collaborate in augmented decision making. The two types of agents are not easy to combine. On the one hand, humans often focus too near, process information slowly, and fail to detect relevant variation. Put simply, humans are relatively myopic, sluggish, and insensitive. On the other hand, AI easily goes too far, too fast, and is often oversensitive. Stated more formally, AI is inherently hyperopic (farsighted, the opposite of myopic), hyperactive, and hypersensitive.

When both types of agents work together in decision making, these risks compound. The combined system might focus near and far, process fast and slow, and react passively and furiously, all at the same time. Decision making then becomes poorly coordinated and potentially conflicted. Ironically, digitalization can result in less effective decision making.

Working with AI. Source: Olga Guryanova / Unsplash

Consider the use of AI in surgery. To be effective, artificial agents must understand and trust surgeons’ perceptions, preferences, and speed of work. Equally, surgeons need to trust the accuracy and reliability of AI. But if one goes too fast and the other too slow, or if they act independently and ignore each other, medical procedures can end in disaster. Both types of agents must work collaboratively to ensure effective decision making in real time.

New Risks for Decision Making

These developments transcend prior thinking about decision making. To date, research has focused primarily on the risks posed by human limitations, especially myopia, bias, and idiosyncratic noise, and on what happens when such distortions infect AI. Certainly, these are major concerns. Once infected in this way, digital systems quickly amplify human limitations.

Moving forward, however, equal attention must be paid to the new risks posed by AI, even after we control for human bias and noise. Digitalized decision making could still reach too far, go too fast, and react too furiously. Granted, it may be free of distorting priors, but it could become overly complex and opaque anyway. As I explain in my recent book Augmented Humanity, people will struggle in these situations and could slide into dependence or self-delusion.

Writing almost 20 years ago, Daniel Wegner identified a similar effect. He explained how deep systems in the brain trigger thought and action before deliberation or intention, yet people still feel a sense of control. He labeled this the illusion of conscious will. The illusion has positive effects as well, including supporting a sense of self and responsibility.

In the same way, AI can produce a digitalized illusion of conscious will. People tap icons and make online choices, and thus experience a sense of intentional control. But in truth, much of the outcome is determined by invisible algorithms. This poses risks for self-understanding and moral responsibility, not to mention decision making itself.

In fact, recent studies show that people do misattribute features of AI to themselves. Could the digitalization of decision making therefore increase the illusion of control and create a false sense of self-efficacy? If so, people will be less genuinely autonomous, despite feeling more empowered. To counteract these risks, people must develop stronger self-regulatory skills and learn to avoid some digital influences while embracing others.

The bottom line: AI adds unprecedented power and reach to decision making, but not without risk. Of course, we are right to worry about the negative impact of bias and idiosyncratic noise. However, in doing so, we must not forfeit control or abandon important commitments, nor drift into conflict with AI. Instead, human and artificial agents must collaborate more deeply and form genuine partnerships in decision making.

References

Bryant, P. T. (2021). Augmented Humanity: Being and Remaining Agentic in a Digitalized World. Palgrave Macmillan.

Kahneman, D., Sibony, O., and Sunstein, C. R. (2021). Noise: A Flaw in Human Judgment. William Collins.

Wegner, D. M. (2002). The Illusion of Conscious Will. MIT Press.
