Implicit vs. Explicit Reasoning in Healthcare
Clinicians, AI, and Serena Williams.
Posted May 7, 2024 Reviewed by Gary Drevitch
Key points
- Many in the healthcare system rely on explicit reasoning: they must be able to show how and why a decision was reached.
- Existing theories of cognition may prove useful in guiding LLMs toward a more unified system of reasoning.
Serena Williams stands on the baseline of the court, waiting for her opponent to send a wicked-fast ball over the net. The ball, once served, rockets over at 100 mph, tracing a parabolic arc through the air. In a reaction time of around 200 milliseconds, Williams computes the current trajectory of the ball. She uses this information to predict its future location, all while factoring in spin, wind resistance, and the complex modeling of how the ball will bounce depending on whether the court’s surface is grass, clay, or a hard court. These complex ballistic calculations are performed at unfathomable speed, triggering muscle actions that send Williams on the perfect interception course. Whack. The ball is returned.
Did Serena perform a series of complex calculations? Yes. Does she hold a Ph.D. in computational physics or mechanical engineering? No. Is she aware of the complex mathematical feat she has just flawlessly pulled off? You’d have to ask her.
This is the distinction between implicit reasoning and explicit reasoning.
Implicit reasoning is intuitive, automatic, and often unconscious, much like Williams’s instantaneous reactions. By contrast, explicit reasoning is formal, codified, and conscious, akin to a physicist calculating ball trajectories on paper. When a physicist writes out pages of equations to describe the movement of a tennis ball, that's explicit. When Serena Williams tracks a ball in real time and flawlessly executes a return shot, that's implicit reasoning carrying out those same ballistic calculations. That's not to say either is better or worse, but they are certainly different.
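To make the contrast concrete, here is the kind of equation the physicist's explicit version might start from, a simplified textbook projectile formula that ignores spin and air resistance (both of which a full treatment would have to include):

$$x(t) = v_0 \cos(\theta)\, t, \qquad y(t) = y_0 + v_0 \sin(\theta)\, t - \tfrac{1}{2} g t^2$$

Here $v_0$ is the serve speed (about 45 m/s for a 100-mph serve), $\theta$ is the launch angle, and $g \approx 9.8\ \text{m/s}^2$ is gravity. Williams solves the equivalent problem without ever writing a symbol down.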
Healthcare and its demand for explicit reasoning
Most of us can comfortably navigate the world, making optimal decisions in chaotic environments without a clear view of how we integrate information and use it to form predictions. A clinician in a healthcare environment, however, cannot afford such ambiguity. Reliance on explicit reasoning allows clinicians to communicate the mechanisms of their decision-making reliably to others. When operating as part of a care team, it is imperative that clinical decisions are well-documented so that efforts can be coordinated and accountability maintained. In healthcare, being correct isn't enough; we need to be able to show our working.
Are large language models like ChatGPT examples of explicit reasoning?
Not currently. And at this moment, it is hard to see how explicit reasoning can become an integral component of these AI models. ChatGPT is a large language model (LLM) that, due to its vast size and complex network architecture, performs high-level language tasks with seeming ease. However, the deep layers of its neural network make it nearly impossible to fully understand how it combines information to form knowledge representations. This opacity means that while LLMs like ChatGPT perform very well, they do so on a basis more akin to implicit than explicit reasoning.
To put ChatGPT's size into perspective, consider that the 2012 winner of the ImageNet Large Scale Visual Recognition Challenge was a neural network (AlexNet) with 60 million parameters. In stark contrast, ChatGPT is reported to be powered by a network with around 1.76 trillion parameters, roughly 30,000 times larger. When it comes to AI, size clearly does matter.
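The back-of-the-envelope arithmetic behind that comparison is simple enough:

$$\frac{1.76 \times 10^{12}\ \text{parameters}}{6.0 \times 10^{7}\ \text{parameters}} \approx 29{,}000$$

so "roughly 30,000 times larger" is only a slight rounding up.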
What does this mean for LLMs in healthcare?
While implicit reasoning can be very effective at performing complex tasks, it is poorly suited to healthcare environments where explicit reasoning is crucial. It follows that LLMs, in their current form, are not a good fit for clinical applications. This doesn't mean healthcare will be left out of the next phase of technological innovation, but rather that the AI community must build explicit reasoning systems to complement implicit-reasoning LLMs, particularly in fields like mental healthcare, where such developments are already underway [1,2].
Neuroscience might hold the key
The field of artificial intelligence strives to replicate elements of natural intelligence in computers, often drawing inspiration from complex examples in nature, like the human brain. Pre-existing theories of cognition, like Daniel Kahneman's fast (implicit) and slow (explicit) thinking [3], may therefore provide valuable guidance. These cognitive systems work in concert, enabling seamless adaptation in diverse environments, from the tennis court to clinical settings. By leveraging these principles, neuroscience could guide AI toward a unified system that mirrors our own adaptable reasoning, a crucial development for healthcare innovation. Such a system would blend the instinctive prowess of Serena Williams with the analytical depth of top clinicians.
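It is far too early to say what such a system would look like, but as a purely illustrative sketch (every function name and rule below is hypothetical, not a description of any existing product), one can imagine an implicit model drafting a suggestion and an explicit, rule-based layer checking it and writing down its reasoning:

```python
# A purely illustrative "fast and slow" pipeline; every function and rule here
# is hypothetical and stands in for components that do not yet exist.

def implicit_suggest(note: str) -> str:
    """System 1 (implicit): an LLM-style model pattern-matches a draft suggestion."""
    # In a real system this would call a large language model; here it is a stub.
    return "flag: possible anticoagulant interaction"

def explicit_review(suggestion: str, note: str) -> list[str]:
    """System 2 (explicit): codified rules whose reasoning can be shown and audited."""
    trail = [f"Draft suggestion: {suggestion}"]
    # An example of an explicit, documentable rule (illustrative only).
    if "anticoagulant" in note.lower():
        trail.append("Rule A: the note documents anticoagulant use, so the flag is grounded.")
    else:
        trail.append("Rule A: no documented anticoagulant; escalate to a clinician.")
    return trail

note = "Patient is prescribed two anticoagulants; no known allergies."
for step in explicit_review(implicit_suggest(note), note):
    print(step)  # the printed audit trail is the "show your working" part
```

The point of the second step is not raw accuracy but accountability: it produces the written working that care teams can coordinate around and inspect.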
And wouldn’t that be nice.