
The Real Problem of Consciousness

Anil Seth poses excellent questions about consciousness but has weak answers.

Key points

  • Seth provides an excellent characterization of the real problem of consciousness as finding mechanisms that explain conscious experiences.
  • The mechanisms Seth advocates perform predictive processing using Bayesian inference, but brains work differently.
  • Parallel constraint satisfaction is an alternative mechanism that is much more biologically plausible for all mental functions.

The hottest new book on consciousness is Anil Seth’s Being You. With clarity and verve, he promises a "new science of consciousness" based on advances in neuroscience. Seth provides an excellent formulation of the problem of explaining consciousness but an inadequate solution based on predictive processing.

Some philosophers have insisted that the hard problem of consciousness is to explain why there is something that it's like to be conscious. This formulation sets up science and philosophy to fail because of the utter vagueness of "what it's like." Seth has a much better formulation (p. 279):

This is the essence of the real problem approach to consciousness. Accept that consciousness exists, and then ask how the various phenomenological properties of consciousness—which is to say how conscious experiences are structured, what form they take—relate to properties of brains, brains that are embodied in bodies and embedded in worlds. Answers to these questions can begin by identifying correlations between this-or-that pattern of brain activity and this-or-that type of conscious experience, but they need not and should not end there. The challenge is to build increasingly sturdy explanatory bridges between mechanisms and phenomenology, so the relations we draw are not arbitrary but make sense. What does "make sense" mean in this context? Again: explain, predict, and control.

Finding Mechanisms to Explain the Full Range of Conscious Experiences

The real problem of consciousness, then, is to find mechanisms that explain the full range of conscious experiences, including the following:

  • External sensations such as seeing, hearing, smelling, tasting, and touching;
  • Internal sensations such as pain, heat, cold, proprioception, balance, full stomach, and needing to go to the bathroom;
  • Emotions such as happiness, sadness, fear, anger, disgust, shame, and envy;
  • Thoughts such as my current awareness that I'm writing a blog post about consciousness.

This variety recognizes that there is not just something that it's like to be conscious but rather many things that it's like to have different forms of consciousness. A good theory of consciousness should specify mechanisms that apply to all of them, meshing with neural theories of sensation, emotion, and thought.

Bayesian Inference

Unfortunately, the mechanisms that Seth appeals to for these explanations are not up to the job. He buys into the currently popular view that the brain is primarily a predictive processor that uses probability theory based on Bayesian inference to continuously make predictions about what to expect. Seth thinks that the different kinds of conscious experience are just these predictions. The three main problems with this view are that (1) the brain is much more than a prediction engine, (2) evidence is lacking that the brain uses Bayesian inference, and (3) how Bayesian predictions produce a full range of conscious experiences remains unspecified.
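For readers unfamiliar with Bayesian inference, here is a toy sketch (not from Seth's book; the hypotheses and numbers are illustrative) of how a prior expectation and a sensory likelihood combine into a posterior via Bayes's theorem:

```python
# Toy illustration of Bayesian perceptual inference of the kind predictive
# processing theories posit: a prior expectation about a cause ("apple" vs.
# "pear") combines with the likelihood of a sensory signal ("red") to yield
# a posterior. The hypotheses and probabilities are illustrative.

def posterior(prior: dict, likelihood: dict, observation: str) -> dict:
    """P(hypothesis | observation) is proportional to
    P(observation | hypothesis) * P(hypothesis)."""
    unnorm = {h: likelihood[h][observation] * p for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: u / z for h, u in unnorm.items()}

prior = {"apple": 0.5, "pear": 0.5}
likelihood = {"apple": {"red": 0.8, "green": 0.2},
              "pear": {"red": 0.2, "green": 0.8}}

print(posterior(prior, likelihood, "red"))  # {'apple': 0.8, 'pear': 0.2}
```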

Prediction about what to expect in the future is one important function of the brain, but it is far from the only function. At least as important, brains also engage in pattern recognition, explanation, evaluation, memory, and communication. The brain uses pattern recognition to identify important objects in the environment—for example, an apple whose shape and color match a template stored in memory.

Brains also generate explanations of what has already happened—for example, why someone put an apple on the table. The process of evaluation is important for assessing the worth of what is perceived—for example, the value of an apple for alleviating hunger—and for deciding what information is important enough to store in memory. Brains also serve to communicate important information to other brains—for example, that the apple can be shared. Pattern recognition, explanation, evaluation, memory, and communication all require inference, but they do not reduce to predictions about the future.

Moreover, there is no evidence that brains accomplish prediction, pattern recognition, explanation, evaluation, memory, and communication using probabilities and Bayesian inference. Probability theory was only invented in the 17th century, and sophisticated ideas about the use of Bayes's theorem were only developed in the 20th. Even today, Bayesian inference has serious computational problems because the time required to compute with probabilities grows exponentially with the size of problems to be solved. At best we can say that the brain operates as if it were an approximation to a Bayesian engine, but "as if" is not a mechanism, so it does not solve the real problem of consciousness.
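The combinatorial point can be made concrete: a full joint distribution over n binary variables has 2^n entries, and brute-force exact inference must sum over all of them. A toy sketch (the variables are illustrative):

```python
# Toy illustration of why exact probabilistic inference scales badly:
# a full joint distribution over n binary variables needs 2**n entries,
# and exact marginalization sums over every one of them.

from itertools import product

def joint_table_size(n_vars: int) -> int:
    """Number of entries in a full joint distribution over binary variables."""
    return 2 ** n_vars

def marginal(joint: dict, var_index: int, value: int) -> float:
    """Exact marginal P(X_i = value) by brute-force summation."""
    return sum(p for assignment, p in joint.items()
               if assignment[var_index] == value)

# A uniform joint over 3 binary variables already has 8 entries.
n = 3
joint = {a: 1 / 2 ** n for a in product([0, 1], repeat=n)}

print(joint_table_size(10))   # 1024
print(joint_table_size(100))  # astronomically many entries
print(marginal(joint, 0, 1))  # 0.5
```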

Constraint Satisfaction

Fortunately, we have a much more satisfactory general account of neural computation, which views the brain as a constraint satisfaction engine rather than a probability engine (McClelland et al., 2014; Thagard, 2019). For example, perception of an apple satisfies positive constraints such as that apples have particular colors and shapes, as well as negative constraints such as that apples are not pears. These constraints are easily captured in real neural networks by means of excitatory and inhibitory links between neurons and neural groups. Constraint satisfaction also accounts for explanation, evaluation, communication, and even prediction without assuming that the brain computes with probabilities.
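A rough sketch of how such a network operates (the weights, unit names, and update rule are illustrative assumptions, not the actual models of McClelland et al. or Thagard):

```python
# Minimal sketch of parallel constraint satisfaction (illustrative only).
# Units stand for hypotheses and evidence; positive constraints become
# excitatory links, negative constraints inhibitory links. Activations
# update in parallel until the network settles.

def settle(weights, activations, clamped=(), steps=200, decay=0.05):
    """Synchronously update activations; clamped units (evidence) stay fixed."""
    for _ in range(steps):
        new = {}
        for u in activations:
            if u in clamped:
                new[u] = activations[u]
                continue
            net = sum(w * activations[v]
                      for (tgt, v), w in weights.items() if tgt == u)
            a = activations[u]
            if net > 0:
                a += net * (1.0 - a) - decay * a   # excitation pushes toward 1
            else:
                a += net * (a + 1.0) - decay * a   # inhibition pushes toward -1
            new[u] = max(-1.0, min(1.0, a))
        activations = new
    return activations

# Red and round evidence both support "apple"; round also supports "pear";
# the competing hypotheses inhibit each other (a negative constraint).
weights = {("apple", "red"): 0.4, ("apple", "round"): 0.4,
           ("pear", "round"): 0.4,
           ("apple", "pear"): -0.6, ("pear", "apple"): -0.6}
activations = {"apple": 0.0, "pear": 0.0, "red": 1.0, "round": 1.0}

result = settle(weights, activations, clamped=("red", "round"))
print(result["apple"] > result["pear"])  # True: the network settles on "apple"
```

The excitatory links raise "apple" toward its ceiling while the inhibitory link drives the competing "pear" hypothesis down, with no probabilities computed anywhere.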

The third major difficulty with Seth's approach to the real problem of consciousness is that it does not explain any particular kind of qualitative experience. For example, he proposes that emotions are the result of predictive processing concerning internal sensations coming from the body. He claims (p. 188) that "affective experiences are not merely shaped by interoceptive predictions but constituted by them." But allusion to such predictions tells us nothing about the differences among experiences that involve happiness, sadness, fear, anger, or disgust. Alternatively, the semantic pointer theory of emotions explains different emotional experiences as resulting from constraint satisfaction performed by neural processes. These processes combine representations of a situation such as eating an apple, physiological changes resulting from eating the apple, and cognitive appraisals of the goal relevance of eating the apple (Thagard, 2019; Thagard, Larocque, & Kajić, 2021). Different physiology and different appraisals yield different experiences.
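Semantic pointers combine such components using vector binding operations like circular convolution. A toy sketch of binding and approximate unbinding (the vector names and dimension are illustrative assumptions, not the theory's actual model):

```python
# Toy sketch of the vector binding behind semantic pointers (holographic
# reduced representations): circular convolution binds two vectors, and
# convolving with an approximate inverse recovers a bound component.
# Vector names and the dimension are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def unit(d):
    """Random unit vector of dimension d."""
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution, computed via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    """Approximate unbinding: convolve with the involution of a."""
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(c, a_inv)

d = 512
situation, physiology, appraisal = unit(d), unit(d), unit(d)

# An emotion pointer combining situation with bodily change and appraisal.
emotion = bind(situation, physiology) + bind(situation, appraisal)

# Unbinding with "situation" yields a vector far more similar to the bound
# physiology component than to an unrelated vector.
recovered = unbind(emotion, situation)
cos = lambda x, y: x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(cos(recovered, physiology) > cos(recovered, unit(d)))  # True
```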

Although I believe Seth is wrong about the neural mechanisms that he proposes to connect brains with consciousness, I applaud his formulation of the problem and his many interesting discussions such as advances in measuring consciousness. He errs in calling perceptions “controlled hallucinations,” but that will require another post.

References

McClelland, J. L., Mirman, D., Bolger, D. J., & Khaitan, P. (2014). Interactive activation and mutual constraint satisfaction in perception and cognition. Cognitive Science, 38(6), 1139–1189.

Seth, A. (2021). Being You: A New Science of Consciousness. New York: Dutton.

Thagard, P. (2019). Brain-Mind: From Neurons to Consciousness and Creativity. New York: Oxford University Press.

Thagard, P., Larocque, L., & Kajić, I. (2021). Emotional change: Neural mechanisms based on semantic pointers. Emotion. Advance online publication. https://doi.org/10.1037/emo0000981
