Artificial Intelligence
Chatbots Are a Valuable Tool—and a Moral Test for Us All
It's a test we may fail if we treat AI as the Wild West and ignore history.
Updated July 17, 2023 | Reviewed by Michelle Quirk
Key points
- Twenty years ago, we let Silicon Valley delude us into dismissing social media's harmful effects.
- Threats now posed by generative AI tools require proactive ethical thinking about policies to confront them.
- We have lots of ethical reasoning tools to do better—we just have to use them.
ChatGPT-4 and other such “generative AI” tools promise to provide us with many good things: better ways to manage information overload, more efficient use of our time, rescue from drudge work so we can focus more on what we care about. But they also provide us with something else: a moral test of sorts. It is only human nature to delight in the shiny new thing. But we should know by now that blithely treating these tools as the new Wild West, and failing to address the dangers they pose right now, will cost us dearly in the future.
We may well be on our way to failing the moral test posed by chatbots if we ignore the lessons of our response to the burgeoning dominance of social media that began 20 years ago.
Notwithstanding all the benefits of social media connectedness, our failure to seriously address its harms, coupled with the conceited, unrestrained culture of Silicon Valley, has arguably left us diminished in many important ways. The dark side of our digital platforms has contributed to economic disparity (Heuler, 2015), political tribalism (Bail et al., 2018), eroded concentration levels (e.g., Zhao & Zhou, 2021), data exploitation, cyberbullying; the list goes on.
A Proactive Focus on Harms
Similar, and completely avoidable, harmful effects are already emerging with chatbot development and use. History will repeat itself if we ignore the difficult work of deliberating on these harms and responding with nudges, guardrails, and best-use incentives to address them.
For users, chatbots beckon with a clean promise to streamline workflows and even spare us from some types of work altogether, while inviting us to gloss over questions of fairness, appropriation, and attribution. But our norms and moral responsibilities don’t go away just because a gadget makes it easier for us to ignore them. When we use technology to circumvent our responsibilities, to claim the work of others as our own, or to promote biased and discriminatory thinking, we become part of the problem and undermine the potential of technology to help us all flourish.
The moral test for system designers and engineers is arguably even greater. Already, developers have stolen copyrighted material and scraped private data to train their systems (Small, 2023), and chatbots have spread misinformation and even made things up. They are on their way to pushing entire categories of jobs into obsolescence. Worse, the very developers who helped build the foundations of chatbot technology warn that generative AI may soon pose an existential threat to humanity. In a single-sentence statement signed by more than 350 executives and engineers, the Center for AI Safety warned in May that “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war” (Center for AI Safety, 2023). The signatories included top executives from three of the leading AI companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.
The most damning evidence of moral failure on this front came from Altman himself this spring, when he pleaded with political leaders in Washington to set policies curbing the headlong rush to develop generative AI, a rush that he has been leading as chief executive of the company that developed ChatGPT (Kang, 2023).
Moral Responsibility and AI
It doesn’t have to be this way. And there are some encouraging signs that AI developers are increasingly recognizing the moral test posed by the industry. “The last thing you want is to get blindsided by a future YOU helped create,” warns Ethical OS, a toolkit designed to encourage such caution. Amodei, for his part, shows us what proactive ethical deliberation can look like. His company, Anthropic, delayed the release of its own tool, Claude, as engineers sought to anticipate harmful uses and effects. “My worry is always, is the model going to do something terrible that we didn’t pick up on?” Amodei said (Roose, 2023). The company made sure its tool was governed by “constitutional AI”: a mixture of foundational principles drawn from sources such as the United Nations’ Universal Declaration of Human Rights, along with rules Anthropic added itself, including “Choose the response that would be most unobjectionable if shared with children” (Anthropic, 2023).
We have longstanding ethical-reasoning tools and practical strategies for moral deliberation that can foster more of this kind of proactive thinking. They are not valuable only for developers; all of us, as users, have a moral obligation to consume these tools responsibly, not just to help prevent harm to others but also to resist what Tim Wu (2018) calls the “tyranny of convenience.”
Tools to Improve Ethical Thinking
A few examples to consider:
- The Organisation for Economic Co-operation and Development (OECD) issued a statement on AI ethics in 2019, calling for businesses and developers to adopt five “principles for responsible stewardship of trustworthy AI.” They include inclusive growth, human-centered values, transparency, and safety (OECD, 2019).
- In 2021, the Neural Information Processing Systems conference, or NeurIPS, adopted a paper “submission checklist” that requires authors to consider and disclose potential harmful effects of their innovations. It states, “Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations” (NeurIPS, 2021).
- The previously mentioned Ethical OS provides a checklist that identifies eight “risk zones” to consider, as well as scenarios to encourage focus on long-term effects of technology innovation (EthicalOS.org).
- Moral scholars and applied ethicists across the spectrum of topics offer helpful ways to think proactively about our moral responsibilities in technology development and use. Shannon Vallor lists 12 critical “technomoral virtues” that should shape our digital behavior and thus ensure “a future worth wanting” (Vallor, 2016). Charles Ess (2020) outlines how best to think about the dilemmas we face in digital media. And my own Multidimensional Ethical Reasoning Task Sheet (MERITS) model can help improve focus on moral considerations (Plaisance, 2021).
When it comes to generative AI, in both its development and its use, we can all do better.
References
Anthropic. (2023, May 9). Claude’s constitution. Available: https://www.anthropic.com/index/claudes-constitution
Bail, C. A., Argyle, L. P., Brown, T. W., Bumpus, J. P., Chen, H., Hunzaker, M. B. F., Lee, J., Mann, M., Merhout, F., & Volfovsky, A. (2018, September 11). Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences, 115(37), 9216–9221. doi: 10.1073/pnas.1804840115.
Center for AI Safety (2023, May 30). Statement on AI risk. Available: https://www.safe.ai/statement-on-ai-risk
Ess, C. (2020). Digital media ethics (3rd ed.). Cambridge: Polity.
Heuler, H. (2015, May 15). Who really wins from Facebook’s ‘free Internet plan’ for Africa? ZDNet. Available: https://www.zdnet.com/article/who-really-wins-from-facebooks-free-internet-plan-for-africa/
Kang, C. (2023, May 16). OpenAI’s Sam Altman urges AI regulation in Senate hearing. New York Times. Available: https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html
OECD. (2019). Recommendation of the Council on Artificial Intelligence. Available: https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449
Neural Information Processing Systems. (2021). Paper submission checklist. Available: https://neurips.cc/public/guides/PaperChecklist
Plaisance, P. L. (2021). Media ethics: Key principles for responsible practice (3rd ed.). Cognella.
Roose, K. (2023, July 11). Inside the white-hot center of AI doomerism. New York Times. Available: https://www.nytimes.com/2023/07/11/technology/anthropic-ai-claude-chatbot.html
Small, Z. (2023, July 10). Sarah Silverman sues OpenAI and Meta over copyright infringement. New York Times. Available: https://www.nytimes.com/2023/07/10/arts/sarah-silverman-lawsuit-openai-meta.html
Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. New York: Oxford University Press.
Wu, T. (2018, February 16). The tyranny of convenience. New York Times. Available: https://www.nytimes.com/2018/02/16/opinion/sunday/tyranny-convenience.html
Zhao, N., & Zhou, G. (2021, February 9). COVID-19 stress and addictive social media use (SMU): Mediating role of active use and social media flow. Frontiers in Psychiatry. doi: 10.3389/fpsyt.2021.635546.