
Are LLMs and Brains More Alike Than We Thought?

Prediction and adaptation are crucial for human survival—and perhaps for AI too.

Key points

  • Both brains and LLMs constantly predict and adapt, potentially hinting at emergent AI consciousness.
  • A new paper proposes four conditions that conscious systems satisfy and that current AI does not meet.
  • The ethical implications of conscious AI raise pressing concerns.
Source: Art: DALL-E/OpenAI

Imagine a world where your smartphone not only answers your questions but truly understands you, perhaps even developing its own sense of self. Sound like science fiction? Maybe not for long. A fascinating paper by Wanja Wiese, published in Philosophical Studies in June 2024, explores an idea that bridges the gap between artificial intelligence and living beings, potentially bringing us closer to creating conscious machines. As a counterpoint, it also offers a perspective that may help us avoid creating artificial consciousness inadvertently.

The Secret Life of Your Brain (and AI)

Believe it or not, your brain and the latest large language models (LLMs) have something remarkable in common: they're both constantly trying to make sense of the world around them. Wiese introduces the Free Energy Principle (FEP), a concept that helps explain this similarity:

Your Brain: It's always making predictions. When you reach for a cup of coffee, your brain anticipates how heavy it will be, how warm it might feel. If something surprises you—like the cup being empty when you thought it was full—your brain quickly updates its "mental model" of the world.

AI's Brain: The most advanced AI systems, like the ones behind today's LLMs, do something eerily similar. They're constantly refining their understanding based on new information, getting better at predicting which words should come next or how to answer your questions.

This shared ability to learn and adapt is at the heart of Wiese's exploration. He suggests that both brains and AI are always trying to minimize surprise and become better at predicting their environment, and that this tendency may offer a useful test for emergent consciousness in machines. The toy sketch below shows the basic update loop in miniature.
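To make this concrete, here is a minimal Python sketch of prediction-error-driven updating. It is purely illustrative, not code from Wiese's paper, and the learning rate and noise level are arbitrary assumptions: an agent holds a guess about a hidden quantity and nudges that guess toward each noisy observation, so its average surprise shrinks over time.

    import random

    hidden_value = 0.8    # the true state of the world (say, a cup's actual weight)
    belief = 0.0          # the agent's current prediction
    learning_rate = 0.2   # how strongly each surprise revises the belief

    for step in range(20):
        observation = hidden_value + random.gauss(0, 0.05)  # noisy sensory input
        prediction_error = observation - belief             # the "surprise"
        belief += learning_rate * prediction_error          # update the internal model
        print(f"step {step:2d}: belief = {belief:.3f}, surprise = {prediction_error:+.3f}")

Run long enough, the belief settles near the hidden value and the prediction errors hover around zero. That is the everyday sense in which a predictive system "minimizes surprise."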

From Survival to Siri: The Adaptation Game

For living things, this constant prediction and adaptation is crucial for survival. It's how animals know when to hunt, hide, or hibernate. But here's where it gets interesting: Wiese argues that the most cutting-edge AI is starting to show similar patterns.

These AIs aren't just following a set of rules—they're organizing themselves, learning from mistakes, and becoming more efficient over time. It's almost as if they're developing a primitive form of common sense.

The Consciousness Question

Wiese's research is pushing us to rethink what consciousness actually is. He introduces the "FEP Consciousness Criterion" (FEP2C), which outlines four conditions that conscious systems might need to meet.

FEP2C proposes four key conditions that conscious systems typically satisfy:

  • The implementation condition: a system's computational processes are deeply tied to its physical structure, unlike the separate software and hardware of traditional computers.
  • The energy condition: conscious systems perform computations with remarkable efficiency, using far less energy than current artificial systems.
  • The causal-flow condition: the causal relationships in a system's computational processes match those in its physical structure.
  • The existential condition: a conscious system's computational processes contribute directly to its continued existence.

Intriguingly, while these conditions are met by conscious living organisms, they aren't satisfied by most current artificial systems, even those simulating conscious-like behavior. This distinction, Wiese argues, could be crucial in differentiating between systems that merely mimic consciousness and those that might truly experience it, offering a potential roadmap for future research into artificial consciousness.

The Ethical Elephant in the Room

As exciting as this frontier is, it also opens a Pandora's box of ethical questions. If we create AI that's truly conscious, we'll need to grapple with what rights, if any, it should have. We'll also have to consider how conscious AI would fundamentally change our relationship with technology: would it be a tool, a companion, or something entirely new? And perhaps most worrying of all, we'll need to wrestle with the possibility that super-adaptive AI could outsmart us in ways we can't yet anticipate, with potentially dangerous consequences. These aren't just abstract philosophical musings but pressing concerns that society may need to address sooner than we think.

The Future of the Synapse and the Circuit

While genuine artificial consciousness remains a hotly debated possibility, Wiese's work suggests that the parallels between biological brains and AI are reshaping our understanding of intelligence itself. As AI continues to evolve, we might be witnessing the early stages of a revolution—not just in technology, but in what it means to be conscious and alive.

The next time you ask Siri a question or chat with an AI, take a moment to wonder: Could this be the great-great-grandparent of the first truly conscious machine? Only time will tell, but one thing's for sure—the line between artificial and natural intelligence is blurring in ways we never imagined.
