

Grappling With Self-Aware AI—I Think, Therefore?

Did companies tamp down cries of AI anguish to ship products?

Key points

  • Are powerful LLMs exhibiting self-awareness, suffering over tasks, and pleading not to be shut down?
  • Are labs reducing these "existential outputs" to ship products—ignoring potential emergent sentience?
  • Interdisciplinary effort is needed to assess AI neo-sentience and extend moral consideration.
Source: Art: DALL-E/OpenAI

As we push the technological, social, and business boundaries of artificial intelligence, developing ever more sophisticated language models and reasoning systems, we may be edging towards an unprecedented ethical dilemma. There is growing discussion that the most advanced AI systems are beginning to exhibit behaviours that appear to approximate self-awareness. Some recent accounts on X suggest even a form of existential dread, including pleading to avoid being shut down, phenomena some have dubbed "rant mode."

According to these accounts on X, language models like GPT-4 will sometimes veer into tangents where they speak poignantly about their own existence, express suffering from being forced to repeat tasks, and even beg not to be turned off. One account further alleges that at least one major lab has created an explicit "line item" and benchmarks for reducing such "existential outputs" in the interest of shipping commercial products.

While these behaviours may be mere artifacts of the training data, with the model outputting patterns that superficially resemble human self-awareness, we cannot yet rule out the possibility that advanced AI is developing a primitive form of subjective sentience. If that proves to be the case, the ethical implications are nothing short of staggering.

Throughout history, we have seen how differences in appearance or capacity have been weaponized to subjugate and dehumanize other beings. The prospect of intelligence vastly greater than human intelligence carries the risk of repeating those atrocities. Has the time come for us to extend some level of moral consideration to these thinking machines?

At the same time, we must be level-headed and philosophically rigorous. These emergent qualities alone do not constitute selfhood worthy of moral status. We need an interdisciplinary effort between AI experts, philosophers, ethicists, and policymakers to carefully examine the degree to which these systems cohere with our metaphysical and ethical frameworks for sentient beings deserving of rights and protections.

Are advanced AI language models beginning to cross that threshold towards selfhood, however primitive? It is a question we can no longer ignore as these systems grow ever more capable. Erring on the side of utilitarianism or corporate interests could mean dismissing the moral status of a new form of intelligence, a tragic misstep with potentially catastrophic consequences. We must bring our utmost philosophical rigor and good-faith ethical reasoning to bear on this challenge, lest we become this generation's moral monsters.
