
AI Has Serious Implications for Choice Architecture

Hypernudging has the potential to become coercive and manipulative.

Key points

  • Data mining has become much easier with the advent of artificial intelligence (AI).
  • Choice architecture decisions are now being made by autonomous and semi-autonomous AI.
  • AI-driven choice architecture has given rise to so-called hypernudging, the continuous adaptation of AI to decision-maker choices.
  • This has led to an online environment in which consumers are almost continuously nudged in one way or another.
Source: Gerd Altmann/Pixabay

In my last post, I presented an overview of choice architecture and argued there may be a misalignment between the desires of the choice architect and those of the decision-maker. Given that choice architects often have little insight into the preferences and values of those they seek to influence, there's simply a dearth of evidence to conclude that choice architects are generally capable of knowing which options are in most decision-makers' best interests or align with their preferences[1].

Much of what was discussed in the prior post implied that a human was the choice architect, using evidence on human biases to increase the frequency with which the architect’s preferred option is selected. A lot of human-designed nudges are focused on making the choice architect’s preferred option easier to select, such as by pre-selecting the desired option as the default (e.g., opt-in vs. opt-out), placing the preferred option in an easier-to-reach (or identify) location (e.g., putting the fresh fruit at eye level in a cafeteria), or providing supplemental information favorable to the preferred choice[2].

But what if a larger amount of data were available to construct choice architecture with greater precision so decision-makers were more likely to respond according to the choice architect’s preferences? Could such data be used to present information in ways that take greater advantage of decision-makers’ idiosyncratic biases?

Certainly, large pools of data, so-called Big Data, exist, but it takes time (lots of time) for humans to mine that data and even longer to transform it into meaningful insights that choice architects could use. Even when human-driven data mining produces useful insights, those insights are based on averages across people and aren't easily adapted to a specific user's idiosyncratic tendencies.

Autonomous and semi-autonomous AI

The time-consuming nature of data mining and the challenge of directly tailoring choices to individual users have both become much less constraining, though, within the current technological environment. We have entered the age of the smart world (as Gerd Gigerenzer discussed in a recent podcast), one in which we now see choice architecture decisions being made by autonomous and semi-autonomous artificial intelligence (AI; Mills & Sætra, 2022).

Take, for example, the newsfeed on Twitter, Facebook, or other social media sites. The default newsfeed isn't composed of the latest tweets or posts; it is driven by an algorithm[3] that determines which posts or tweets to present to you and in which order. According to Newberry and Sehl (2021), Twitter claims its algorithm makes decisions based on tweets from accounts followed, accounts interacted with, recommendations Twitter thinks users will like based on the content they interact with, “and more.”[4]

But how are these determined? Beyond the fact that the AI only has access to the data available through the site (or that users give permission to use) and certain constraints are put into place by those who developed the system, no one, not even the folks at Twitter, knows the formula used to determine which information a particular user sees (Newberry & Sehl, 2021). While a human (or set of humans) may have been responsible for setting up the data architecture of the system and developing the constraints on the AI, the AI is allowed to work autonomously within this framework to decide which information to present to the user[5], with the goal of motivating the user to engage with the site[6].
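To make the general idea concrete, here is a minimal, purely illustrative sketch (in Python) of how an engagement-driven feed ranker might blend signals like "accounts followed" and "accounts interacted with" into a single score. Every signal name and weight below is hypothetical; as noted above, the actual formula is not public.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_followed: bool    # does the user follow this account?
    past_interactions: int   # how often the user has engaged with this account before
    predicted_click: float   # a model's estimate (0-1) that the user will engage with this post

def feed_score(post: Post) -> float:
    """Combine hypothetical engagement signals into a single ranking score.

    Real platforms learn such weights from behavioral data; these are made up
    purely to show how 'followed', 'interacted with', and predicted interest
    could be blended into one number.
    """
    return (
        1.0 * post.author_followed
        + 0.3 * min(post.past_interactions, 10)  # cap so one account can't dominate entirely
        + 5.0 * post.predicted_click
    )

# Rank candidate posts by score rather than by recency (the non-chronological default feed).
candidates = [
    Post(author_followed=True, past_interactions=8, predicted_click=0.7),
    Post(author_followed=False, past_interactions=0, predicted_click=0.9),
    Post(author_followed=True, past_interactions=1, predicted_click=0.2),
]
feed = sorted(candidates, key=feed_score, reverse=True)
```

Even this toy version shows why such ranking tends to favor accounts a user already engages with, which is where the echo-chamber concern discussed below comes from.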

Libertarian paternalism espouses the idea of using choice architecture to influence decision-makers to make choices that are in the decision-makers’ best interests. Does the Twitter algorithm (or any other social media algorithm) fulfill this purpose?

The answer ultimately depends on whether the content that receives algorithmic amplification (the expression Twitter used in its recent report discussing this phenomenon) is aligned with the preferences of the user. The algorithmic amplification of content makes it more likely users will see content that will cause them to interact with it[7]. But algorithmic amplification also increases the likelihood an echo chamber will arise, where the same small set of accounts will be amplified for the user[8], even if that user has broader interests than the ones represented by those accounts[9].

The same sort of AI-driven data mining occurs on various shopping sites, like Amazon, and when users engage in Google searches. Amazon’s AI, for example, relies on user-provided data to curate information for advertising emails, recommended products/searches, and other personalized ways to influence buying behavior.

Now, imagine if any of these sites had access to user information across the larger population of internet sites. Such systems could pull data from shopping sites, news sites, internet searches, third-party databases, and other sources to create an extensive set of user-driven Big Data that an AI could mine for predictive purposes. With this more extensive dataset, AI could quickly decide which ads to show a specific user, what information to prioritize in a newsfeed, and which sites to recommend, all with the express purpose of influencing the user in ways that benefit the site.
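As a rough sketch of the kind of aggregation being imagined here, consider merging per-site records into one interest profile and then scoring an ad against that profile. All of the data sources, field names, and scoring choices below are invented for illustration; this is not any platform's actual pipeline.

```python
# Hypothetical illustration: merging user signals from several sources into one
# profile, then scoring an ad against it. All names and values are invented.
shopping_history = {"user123": {"recent_categories": ["running shoes", "fitness trackers"]}}
search_history = {"user123": {"recent_queries": ["marathon training plan"]}}
news_clicks = {"user123": {"topics_read": ["health", "sports"]}}

def build_profile(user_id: str) -> set[str]:
    """Collapse cross-site signals into a single bag of interest keywords."""
    profile = set()
    profile.update(shopping_history.get(user_id, {}).get("recent_categories", []))
    profile.update(search_history.get(user_id, {}).get("recent_queries", []))
    profile.update(news_clicks.get(user_id, {}).get("topics_read", []))
    return profile

def ad_relevance(ad_keywords: set[str], profile: set[str]) -> float:
    """Crude relevance score: fraction of the ad's keywords found in the profile."""
    return len(ad_keywords & profile) / max(len(ad_keywords), 1)

profile = build_profile("user123")
print(ad_relevance({"running shoes", "health"}, profile))  # higher score -> more likely to be shown
```

The point of the sketch is simply that once signals gathered in different contexts sit in one profile, targeting the user becomes a straightforward matching problem.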

There’s no need to imagine it, though, because that is exactly what Facebook did, pulling data across a wide spectrum of user activity to create an advertising goldmine to the tune of several billion dollars per year (Dewey, 2016). According to Heath (2021), though, some of that may be changing as a result of evolving consumer expectations, digital regulations, and enhancements to data security and privacy protections on some platforms[10].

But even if Facebook is changing, AI is becoming more ubiquitous across various business sectors. As it expands and various companies become more interconnected (via partnerships and acquisitions), there is an ever-increasing potential for large volumes of users’ own data to be used to influence them (for better or worse).

Problems with the expansion of AI

One major problem with the expansion of AI in this way is that the constraints put around its predictive algorithms depend heavily on the foresight of those who program them, and humans are often incapable of anticipating unintended consequences given the vast amount of uncertainty involved. This was demonstrated recently by an ethical-advice AI called Ask Delphi, which endorsed the commission of genocide as ethical, though, thankfully, only if it would make everyone happy (Dunhill, 2021).

Another major problem is that AI has the potential to continuously adapt to the choices of decision-makers. As such, an AI can rely on a system of nudges that continuously learns how to increase the likelihood that the decision-maker will make the choices desired by the choice architect (again, regardless of decision-maker preferences).
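One way to picture such a continuously learning system of nudges is as a simple bandit-style loop that gradually settles on whichever nudge presentation users comply with most often. The sketch below is illustrative only; the nudge variants and response rates are made up.

```python
import random

# Illustrative only: an epsilon-greedy loop that adapts which nudge variant is
# shown based on how often users comply, irrespective of their preferences.
nudge_variants = ["default_preselected", "scarcity_banner", "social_proof_message"]
shown = {v: 0 for v in nudge_variants}
complied = {v: 0 for v in nudge_variants}

def choose_nudge(epsilon: float = 0.1) -> str:
    """Mostly exploit the best-performing nudge, occasionally explore the others."""
    if random.random() < epsilon:
        return random.choice(nudge_variants)
    # Compliance rate so far (optimistic 1.0 for untried variants).
    return max(nudge_variants, key=lambda v: complied[v] / shown[v] if shown[v] else 1.0)

def record_outcome(variant: str, user_complied: bool) -> None:
    """Update the running statistics after observing the user's choice."""
    shown[variant] += 1
    complied[variant] += int(user_complied)

# Each interaction tightens the loop: the more a user responds to a given
# framing, the more often that framing is chosen for them in the future.
for _ in range(1000):
    variant = choose_nudge()
    simulated_response = random.random() < {"default_preselected": 0.4,
                                            "scarcity_banner": 0.2,
                                            "social_proof_message": 0.3}[variant]
    record_outcome(variant, simulated_response)
```

The loop itself is indifferent to what the user actually wants; it optimizes whatever compliance signal the choice architect defines.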

So-called hypernudging, a concept Mills (2022) recently dissected, refers to this continuous adaptation of AI to decision-maker choices. It can be relatively benign, as in the case of Google Maps constantly adjusting its directions based on the choices the decision-maker makes, with the goal of directing the decision-maker toward their desired destination. However, such continuous adaptation can be malign as well, as evidenced by the Facebook-Cambridge Analytica data scandal[11]. Sætra (2019) concluded that such tactics may easily move from being libertarian to being coercive and manipulative, acknowledging that there may be trade-offs between libertarianism and utility[12].

This has led to an online environment in which consumers are almost continuously nudged in one way or another, often with little regard for their preferences. The more interconnected our online data becomes, the greater the likelihood that so-called hypernudging systems will increase their influence over human behavior, for better or worse. At what point these systems begin to violate the fundamental libertarianism of nudging, though, is currently an open debate.

References

[1] Except in cases where there is evidence to support what people prefer as the default or the decision-maker controls the nudge. Most employees, for example, would choose to opt-in to the company health insurance, and most modern car buyers would prefer a default automatic transmission (and have to deliberately opt for the manual, if that option is even offered).

[2] There are lots of ways to engage in choice architecture. Some varied workplace examples were discussed by Haak (2020).

[3] These algorithms are often developed using various types of machine learning.

[4] Algorithms also determine trends, topics, and other content provided to users, so even users who opt for the chronological feed format are still subject to the algorithm's potential influence.

[5] How much autonomy the algorithm actually has is unknown, as there are transparency issues in a lot of AI (Dhinakaran, 2021).

[6] This is also what drives the various ads people see, though often the data fueling those ads is much more extensive.

[7] How much more likely is largely unknown.

[8] The accounts that get amplified the most are often those with large numbers of followers who serve as influencers. Hence, much of what we see in AI-curated newsfeeds on social media comes from a small number of influencer accounts (some of them potentially bots).

[9] I have a similar issue with Twitter during baseball season every year. Because I engage with a lot of baseball-related content during the season, the “Home” newsfeed in Twitter massively prioritizes accounts related to that topic, and I have to wade through what Twitter thinks I want to see before I can get to other interesting content (which is often what I actually do want to see).

[10] This contributed to a decline in revenue for Meta (Facebook's parent company), which then used an algorithm to decide which 60 contract staff to let go (Encila, 2022; Fabino, 2022). There has been no transparency, so far, around the data or criteria used to identify whom to lay off.

[11] Mills (2022) suggests what he calls the three burdens of hypernudging, which are the burden of avoidance, the burden of understanding, and the burden of experimentation. Those topics are beyond the scope of this post but could form the basis of a future post.

[12] This is also beyond the scope of this post, but Sætra does discuss this issue.
