Verified by Psychology Today

Artificial Intelligence

The Ontology of LLMs: A New Framework for Knowledge

LLMs shift the structure of knowledge from fixed maps to dynamic webs.

Key points

  • LLMs reshape knowledge organization, moving from rigid ontologies to dynamic, context-driven webs.
  • This fluid approach mirrors human cognition, offering new perspectives on AI's role in enhancing our thinking.
  • The shift impacts fields from healthcare to education, promising more adaptive and nuanced knowledge systems.
Source: DALL-E / OpenAI

As I fall further down the large language model (LLM) rabbit hole, I'm becoming more interested in how LLMs may be redefining not just what we know but also how we know it. These models—trained on vast amounts of text and capable of generating coherent, context-rich responses—are changing the game for how we organize and relate to information. They're not simply powerful language tools; they're reshaping the very structure of knowledge in ways that mirror, and perhaps even enhance, human thinking. Let's dig in, but start with some basics.

What Is Ontology?

In philosophy, ontology refers to the study of existence and the relationships between entities. It's about categorizing what is: the building blocks of reality and how they connect. In computer science and artificial intelligence (AI), ontology has a more practical application: It's a structured system for organizing concepts and their relationships, like a detailed map of how things relate in a given domain. For instance, a medical ontology might categorize diseases, symptoms, treatments, and their interconnections in a rigid, predefined manner.
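To make the "detailed map" idea concrete, here is a minimal sketch of such a fixed ontology in Python. The concepts and relations are invented for illustration; the point is that every relationship must be declared explicitly up front, and anything not declared simply does not exist for the system.

```python
# A toy fixed ontology in the spirit of the medical example above.
# All categories and relationships are predefined and explicit.
ontology = {
    "heart disease": {
        "is_a": "disease",
        "has_symptom": ["chest pain", "shortness of breath"],
        "treated_by": ["statins", "lifestyle changes"],
    },
    "chest pain": {"is_a": "symptom"},
    "statins": {"is_a": "treatment"},
}

def related(concept, relation):
    """Look up an explicitly declared relationship; unknown pairs return []."""
    value = ontology.get(concept, {}).get(relation, [])
    return value if isinstance(value, list) else [value]

print(related("heart disease", "has_symptom"))
# A relationship nobody thought to declare simply comes back empty:
print(related("heart disease", "appears_in_patient_stories"))
```

The rigidity is the point: the system can only answer questions its designers anticipated when they drew the map.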

But here's where things get interesting. Traditional AI systems have always relied on these fixed ontologies to understand and operate within specific domains. Everything has its place, its category, its role. LLMs, on the other hand, don't work this way. Their ontology isn't fixed, hierarchical, or predefined. Instead, it's something more fluid, something that emerges through context, iteration, and interaction. This is a fascinating shift and one that reflects a deeper truth about how we think.

The Latent Ontology of LLMs

Unlike a traditional knowledge system that operates on a predefined map, LLMs operate in a kind of latent ontology. This means that they learn relationships between words, ideas, and concepts not by relying on a set of explicit rules but through exposure to vast amounts of language. They infer relationships by seeing how words and ideas co-occur in various contexts. It's a dynamic, context-driven way of organizing information.

Consider the concept of "heart disease." A traditional ontology might place it neatly within a medical framework: It's a type of disease, it has certain symptoms, and it requires specific treatments. An LLM, however, doesn't rely on that rigid structure. Instead, it has seen the term "heart disease" used in countless different contexts—medical research, patient stories, news articles, etc.—and, from this, it constructs a flexible understanding of what heart disease means in different situations. While this perspective may not be definitive, it can enrich a clinician's overall point of view.
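The contrast can be made concrete with a toy version of the distributional idea that underlies this flexibility: a word is known by the company it keeps. This is not how an LLM represents meaning internally—real models learn dense vector embeddings from billions of tokens—but the co-occurrence principle is the same. The mini-corpus below is invented for illustration.

```python
from collections import Counter
from math import sqrt

# Invented mini-corpus mixing research-like, clinical, and patient-story phrasing.
corpus = [
    "heart disease raises risk of chest pain and fatigue",
    "statins treat heart disease in many patients",
    "patients describe living with heart disease and anxiety",
    "chest pain can signal heart disease say doctors",
]

def cooccurrence_vector(word, window=3):
    """Count the words appearing within `window` positions of `word`."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            if w == word:
                lo, hi = max(0, i - window), min(len(words), i + 1 + window)
                counts.update(x for x in words[lo:hi] if x != word)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

disease = cooccurrence_vector("disease")
# "disease" ends up associated with clinical terms ("pain") and lived
# experience ("anxiety") alike — no one assigned it to either category.
print(cosine(disease, cooccurrence_vector("pain")))
print(cosine(disease, cooccurrence_vector("anxiety")))
```

No category was declared anywhere; the associations emerge entirely from the contexts in which the word appears, and they would shift if the corpus shifted.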

LLM Inference: A Glimpse of More Complex Intelligence

One of the most intriguing aspects of LLMs is their ability to make inferences based on these vast, context-driven associations. Inference, in this context, refers to the model's capacity to generate insights or connections that aren't explicitly spelled out in its training data but are derived from the patterns it has observed. This suggests that LLMs operate at a higher level of intelligence than we often give them credit for. They're not just regurgitating pre-learned facts; they are synthesizing new insights from their web of learned relationships.

This ability to infer relationships mirrors human cognition at a high level, where we often make leaps of understanding by connecting disparate pieces of information. The inference power of LLMs may point toward a new form of intelligence—one that's emergent, adaptive, and capable of drawing conclusions from subtle, often implicit patterns. This capacity could redefine how we use AI, transforming it from a tool that follows pre-set rules to one that engages in more sophisticated problem-solving and reasoning.

A New Kind of Knowledge Organization

This fluid, emergent approach is both a strength and a challenge. On the one hand, it allows LLMs to adapt to new contexts, making them highly flexible tools for generating language across a wide variety of subjects. On the other hand, it means that their understanding of concepts isn't neatly defined or easily interpretable.

In a way, LLMs represent a new kind of ontology—one that's more about connections than categories, more about context than fixed relationships. This mirrors how humans often think: We don't always follow rigid categories when we organize knowledge. Instead, we make connections based on experience, analogy, and how ideas relate to each other in the moment. LLMs seem to operate in a similar way, building a web of associations that can shift and adapt based on the task at hand.

The Cognitive Parallel

This brings us to a deeper question: Are LLMs offering us a new way to think about thinking itself? Their fluid, context-driven structure seems to reflect something inherently human—a departure from rigid, rule-based systems of knowledge toward something more iterative and flexible. This aligns with the broader trend I've discussed in past work: AI as a partner in human cognition, enhancing our ability to think and reason by offering new frameworks and perspectives.

What LLMs may be doing is reshaping our traditional ontologies into something more organic, more reflective of how knowledge works in real life. They allow for a more flexible, multidimensional approach to understanding, where relationships between concepts aren't rigid but constantly evolving.

Moving From Maps to Webs

So where does this lead us? If traditional ontologies are maps—fixed, static representations of knowledge—then LLMs are more like webs, dynamic networks of relationships that are constantly being reshaped by context. This shift from map to web is significant, especially in fields like healthcare, education, and research, where knowledge is often treated as something fixed and rigid.

For example, this shift from rigid maps to dynamic webs has implications for patient care, especially when we consider the multidisciplinary nature of modern medicine. Traditionally, healthcare has relied on fixed, didactic knowledge structures—clear-cut diagnoses, treatments, and protocols within specific medical subspecialties. However, the complexities of patient care often transcend these rigid frameworks. By embracing a more integrated approach—one that includes not only medical subspecialties but also the social, familial, and psychological aspects of care—we can build a cognitive web that mirrors the interconnectedness of real-life patient experiences. This web-like structure allows for the convergence of diverse perspectives, leading to richer insights and more holistic treatment strategies that enhance patient outcomes and the overall care experience.

By embracing the fluid ontology of LLMs, we can start to rethink how we structure and interact with information. Instead of forcing knowledge into predefined categories, we can allow it to flow more naturally, adapting as new contexts and relationships emerge.

The Future of Ontology in the Cognitive Age

This idea of a flexible, emergent ontology isn't just an abstract concept; it has real implications for how we use AI in the future. In areas like healthcare, where understanding complex relationships between diseases, treatments, and patient outcomes is crucial, LLMs could help create more nuanced, adaptive knowledge systems. In education, they could open up new ways of teaching and learning, where knowledge is built through exploration and iteration rather than memorization of fixed facts.

In short, the ontology of LLMs offers us a glimpse into a new way of organizing, understanding, and interacting with knowledge. As we move further into the Cognitive Age, it's time to consider the implications and utility of this new framework—one that's less about rigid categories and more about the fluid, dynamic connections that make up the vast mosaic of human thought.

More from John Nosta
More from Psychology Today