

Collapsing the "Information Wave Function" with LLMs

When does knowledge actually become real?

Key points

  • LLMs generate potential meanings; human interpretation collapses them into knowledge.
  • LLMs shape both individual and collective knowledge, acting as partners, not replacements.
  • Beware of attributing metaphysical qualities to AI; its seductive eloquence can enchant us.
Source: Art: DALL-E/OpenAI

Large language models (LLMs) have taken the world by storm, generating everything from poetry to technical documents. They’ve been praised for their ability to make new connections and even surprise us with insights. But do these models actually know anything? Or is the knowledge we attribute to them just an illusion—a projection of our own interpretive power?

LLMs and the Quantum Superposition of Meaning

This brings us to an intriguing, admittedly conceptual analogy from quantum mechanics: wave function collapse. In quantum mechanics, particles exist in a state of superposition, holding multiple possible states at once until they are observed. It is the act of observation that "collapses" this superposition into a specific outcome. Similarly, LLMs generate a vast array of potential responses and connections, existing in a kind of informational superposition. However, these outputs are not yet knowledge; they are probabilities and patterns, hovering in a realm of potential meaning.

When we engage with an LLM’s output, our minds act as the observer, collapsing this informational wave function into something concrete and meaningful. The LLM generates text based on statistical associations in its training data, offering a scaffold of language: words and phrases that form potential ideas. It’s only when a human reads, interprets, and contextualizes the output that it transforms into something more: knowledge.
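For readers who like to see the mechanics, here is a deliberately tiny Python sketch of that idea. The words and probabilities are invented for illustration and are not real model output: the "superposition" is just a weighted list of candidate next words, and sampling one of them stands in, very loosely, for the collapse described above.

import random

# A toy "superposition" of possible next words, loosely mimicking how a language
# model assigns probabilities to many candidate continuations.
# These words and numbers are invented for illustration only.
next_word_probs = {
    "insight": 0.40,
    "pattern": 0.25,
    "noise": 0.20,
    "wisdom": 0.15,
}

# Picking one weighted option is a rough analogue of "collapsing" the field of
# potential meanings into a single, concrete utterance.
words = list(next_word_probs.keys())
weights = list(next_word_probs.values())
chosen = random.choices(words, weights=weights, k=1)[0]
print("Collapsed to:", chosen)

Of course, the real act of collapse in this article's sense happens later, when a human reader decides what that chosen word means in context.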

Partners, Not Replacements in the Creation of Knowledge

This interplay highlights an essential truth about the relationship between humans and AI. While LLMs can surface connections that might otherwise go unnoticed, they don’t create knowledge in the sense that humans do. Their power lies in generating new combinations of "cognitive units" based on vast amounts of data. Yet, it’s up to us to navigate these possibilities, to sift through the noise and elevate the signal. In a way, the LLM is like a cosmic soup of language, filled with infinite patterns, but it is the human observer who crystallizes one specific pattern into something meaningful.

So, let's torture this analogy a bit more. A recent paper suggests that LLMs also influence collective intelligence: how groups form, access, and process information. By providing a vast network of interconnected ideas, LLMs reshape not just individual knowledge creation but also group decision-making. This collective dimension amplifies the quantum analogy: it's not a single wave function collapse by one observer but an entire society collectively engaging with AI to create shared knowledge. In this dance, LLMs become partners that augment our collective abilities, not mere replacements for human cognition.

So, is there knowledge in LLMs? Not exactly. What they possess is the potential for knowledge—a vast, swirling ocean of possibilities. It’s our act of interpretation that collapses these potentials into concrete insights, transforming the raw output into something that holds meaning, value, and sometimes even wisdom. However, we must be mindful of the enchantment of AI's eloquence, ensuring we don't project metaphysical qualities onto these models. In this interplay, we find the essence of what it means to be human in an increasingly digital world.

Perhaps my next story will involve a certain cat—one that's both alive and not, until you observe it!
