Thinking about prolonged disorders of consciousness
Consciousness implies awareness: subjective, phenomenal experience of internal and external worlds. Consciousness also implies a sense of self, feelings, choice, control of voluntary behaviour, memory, thought, language, and (e.g. when we close our eyes or meditate) internally-generated images and geometric patterns. But what consciousness actually is remains unknown. Our views of reality, of the universe, of ourselves depend on consciousness. Consciousness defines our existence.
“How can you tell if my daughter is conscious?” When I give a second opinion on someone with a prolonged disorder of consciousness as part of the best interests process, I always anticipate this question and answer it, even if it is not asked. “But what is consciousness?” is a question I am rarely asked, even though it is logically impossible to answer the first question without an answer to the second. On June 16th 2022, I attended an evening lecture by Christof Koch at the Blavatnik School of Government in Oxford. Now, after giving opinions for 30 years, I can answer that second question. As best I can, I will explain the answer I took from the talk.
The Oxford English Dictionary gives two definitions: “the state of being aware of and responsive to one’s surroundings” and “a person’s awareness or perception of something”. It adds, seemingly as an explanation, “the fact of awareness by the mind of itself and the world”, together with a further illustrative quotation: “consciousness emerges from the operations of the brain.”
Clinicians must be more practical, whatever their philosophical and semantic uncertainties. We measure unconsciousness using a standardised assessment, the Glasgow Coma Scale. This is a well-validated, reliable measure of coma, the state of apparently lacking any consciousness.
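For orientation, the scale’s arithmetic is simple: three component sub-scores (eye opening, verbal response, motor response) are summed to a total between 3 and 15. The sketch below is my own illustration of that arithmetic; the use of a total of 8 or below as a working definition of coma is a common clinical convention rather than part of the scale itself.

```python
def gcs_total(eye: int, verbal: int, motor: int) -> int:
    """Total Glasgow Coma Scale score from its three component sub-scores.

    Component ranges: eye opening 1-4, verbal response 1-5, motor response 1-6,
    giving a total between 3 (deep coma) and 15 (fully alert).
    """
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS component out of range")
    return eye + verbal + motor

# A total of 8 or less is widely used as a working threshold for coma.
print(gcs_total(4, 5, 6))  # 15: fully alert
print(gcs_total(1, 1, 1))  # 3: no eye opening, no verbal or motor response
```

Note that the total alone loses information: a score of, say, 9 can arise from several different combinations of sub-scores, which is why clinicians usually record the three components as well as the sum.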
I add the word “apparently” for three reasons. First, in the absence of any agreed definition or positive measure of the state of consciousness, we cannot know. Second, it is well reported that people under anaesthesia who are considered to lack consciousness can later report accurately what happened. Third, and conversely, a person who appears conscious may not be able to report their consciousness. For example, drugs such as midazolam may leave someone aware and able to collaborate in a clinical procedure, yet afterwards they retain no memory of the experience.
There are further linguistic complications we need to avoid. We often use the phrase “conscious of” to mean “aware of”: “I am conscious that you do not agree with me, but …”. We may also use the phrase for organisations, as in “NICE is conscious that some clinicians do not agree”, without implying that the organisation has consciousness as customarily understood.
In summary, consciousness is “a hard problem” and a slippery concept. We must always be clear about the exact construct we are referring to, both within our minds and when speaking to others.
Some explanations of consciousness
Many people, from the Greek philosophers onwards, have tried to understand what consciousness (sometimes termed ‘the mind’) is, how it arises, and how it influences behaviour. Some theories suggest that consciousness does not affect behaviour at all.
DM Hutchinson summarised his 200-page book on Plotinus thus:
The key feature of his theory is that it involves multiple layers of experience: different layers of consciousness occur in different levels of self. This layering of higher modes of consciousness on lower ones provides human beings with a rich experiential world, and enables human beings to draw on their own experience to investigate their true self and the nature of reality. This involves a robust notion of subjectivity.
Plotinus assumed that consciousness existed, and he was interested in its characteristics that, in his view, linked the physical processes of thoughts to the mind. He considered that, through consciousness, thoughts:
- become ‘transparent to the mind’, meaning that all ideas are transmitted to the mind. He specifically believed that every thought entered consciousness, and every thought was represented accurately.
- are reflected on, requiring the conscious person to be self-aware.
- are intentional, meaning something.
In other words, he posited that humans have a mind somewhere separate from the body and have thoughts located within the physical matter of the body and that consciousness connects the two.
There are several theories based on the current understanding of neuroscience and physics.
Hameroff and Penrose put forward the “Orchestrated Objective Reduction” (Orch OR) theory. It draws on mathematics, quantum theory, and biology to suggest a mechanism through which consciousness can arise. They point out that single-celled organisms can navigate mazes and solve problems without any neuronal synapses.
They hypothesise that the physical structure of intracellular organelles such as microtubules may allow quantum computing. This would give sufficient information-processing power to enable features such as binding together all aspects of someone’s experience into a single incident of consciousness and overcoming the limitations on free will imposed by any algorithmic approach to information processing, however complex the algorithm is.
Orch OR is a theory trying to explain how consciousness might arise.
Another theory suggests that consciousness is an illusion, implying that no free will exists. This is based on the observation that changes in cerebral function appear before a person reports awareness of a decision or action. In other words, there is evidence that someone will act in a specific way before the person states that they will act that way. This theory posits that consciousness is a reflection of what has recently already happened within the brain.
This theory addresses the problem of free will and of making ‘conscious choices’ by suggesting that there is no causal link from a thought to an action. On this view, the action causes the thought, not vice versa.
The Integrated Information Theory
Christof Koch introduced me, and the rest of his audience, to the Integrated Information Theory, which takes a theoretical, primarily mathematical approach to investigating consciousness. Giulio Tononi and Christof Koch wrote that to increase our understanding of consciousness, “we need not only more data but also a theory of consciousness—one that says what experience is and what type of physical systems can have it.” They also ask, “is consciousness—subjective experience—also there, not only in other people’s heads but also in the head of animals? And perhaps everywhere, pervading the cosmos?” They suggest that the Integrated Information Theory may provide a valuable theory to answer these questions.
The remainder of this section is based on four papers:
- Tononi et al., Integrated information theory: from consciousness to its physical substrate. Nature Reviews Neuroscience, 2016
- Koch et al., Neural correlates of consciousness: progress and problems. Nature Reviews Neuroscience, 2016
- Tononi and Koch, Consciousness: here, there and everywhere? Philosophical Transactions of the Royal Society B: Biological Sciences, 2015
- Oizumi et al., From the phenomenology to the mechanisms of consciousness: Integrated Information Theory 3.0. PLOS Computational Biology, 2014
The starting point is that each of us is certain of our own experience, but no more. We conjecture about consciousness in others, the reality of a car we can see, and everything else. They suggest five axioms.
A set of axioms about something fulfils the following criteria:
- Each axiom must apply to all instances; it is essential
- The collection of axioms covers all essential properties common to all cases; it is complete
- No axiom contradicts any other axiom; the set is consistent
- No axiom can be derived or deduced from the others; all axioms in the set are independent
The five axioms of IIT
The first axiom is that consciousness exists (intrinsic existence). In other words, humans have experiences that they are intrinsically aware of; there is no need for any third party or observer. Descartes famously articulated this axiom as “Je pense, donc je suis”.
The second axiom is that consciousness has structure (composition). The experience has components. For example, the experience of a sunset might include the colour of the sun and sky, the reflection of the sun rippling in the sea, the cry of a lone seagull, the smell of fish and chips, and a concern about the long journey home with three fractious children.
Thirdly, each experience is specific and unique and can be differentiated from all others (information). For example, other sunsets might have been in different places, lacked the seagull, or had different associated thoughts. No two experiences are the same.
Next, each conscious experience is unified (integration). The experienced sunset cannot be divided into parts. I cannot experience, at the same time, the sight of the sun on the sea with a separate simultaneous experience of the smell of fish and chips and a seagull call. The experience is completely integrated.
Last, a conscious experience is definite (exclusion); you know what is in it, how quickly the time went, how long it lasted, and so on. Other phenomena were likely available, such as the people playing on the beach, but they were no part of your experience.
For each of these axioms, any theory of consciousness must be able to suggest a neural structure that can account for the phenomenon; it should identify the physical substrate for consciousness.
Consciousness in IIT
I cannot evaluate the mathematics, nor can I explain it in more detail. If you are interested, please read the original papers.
I will now discuss some implications, mainly clinical implications.
Conscious experience – a spectrum
The theory suggests that conscious experience is no more than the most fully integrated cluster of information from the perceptual mechanisms available to the organism or, in principle, to a machine. An organism with a limited repertoire of perceptual mechanisms will have less information in its most information-rich integrated cluster. Still, there will be one cluster with more cause-effect power than any other; IIT quantifies this integrated cause-effect power with a measure called Φ (phi).
Therefore, this theory predicts that conscious experience will be on a spectrum, with some organisms having more and others less. Furthermore, the maximal causal power in an individual will also vary over time; being asleep or comatose demonstrates this.
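The idea that integration, and hence experience, comes in degrees can be illustrated with a deliberately crude toy measure. The sketch below is my own illustration, not IIT’s actual Φ calculation (which is defined over cause-effect structures and is far more involved); it uses the mutual information between two binary units as a stand-in for “information the whole carries beyond its parts”, and shows how coupled and independent systems fall at different points on a spectrum.

```python
import math
from itertools import product

def entropy(dist):
    """Shannon entropy (bits) of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, index):
    """Marginal distribution of one unit from a joint distribution over state tuples."""
    out = {}
    for states, p in joint.items():
        out[states[index]] = out.get(states[index], 0.0) + p
    return out

def integration(joint):
    """Mutual information between the two units: H(A) + H(B) - H(A,B).
    Zero when the units are independent; positive when the whole carries
    information beyond the sum of its parts."""
    return entropy(marginal(joint, 0)) + entropy(marginal(joint, 1)) - entropy(joint)

# Two perfectly coupled binary units: always in the same state.
coupled = {(0, 0): 0.5, (1, 1): 0.5}

# Two independent binary units: every joint state equally likely.
independent = {s: 0.25 for s in product((0, 1), repeat=2)}

print(integration(coupled))      # 1.0 bit: the whole exceeds its parts
print(integration(independent))  # 0.0 bits: the whole is just its parts
```

On this crude measure, systems can be ranked: more tightly integrated systems score higher, independent ones score zero, which is the spirit (though not the substance) of the spectrum the theory predicts.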
Clinically, this means it is inappropriate to consider a person as simply either conscious or not, whether at a specific moment or more generally over time. Simple reflection on one’s own experience confirms this. For example, I have driven home on a familiar route with no conscious recall of anything on the journey. Most people report being on autopilot during routine, unstimulating activities.
This theory is consistent with observations made on people with a prolonged disorder of consciousness where variability is often apparent, even with brief (1-5 second) periods of apparently full awareness being reported.
The theory is founded on the complexity and analysis of information within a system. Animals have nervous systems that analyse sensory input to form, for example, visual images allowing them to recognise food and danger. They will, therefore, inevitably have a degree of conscious experience.
The distinction between human and animal consciousness arises from the extent to which humans may achieve a higher degree of integration across a broader range of perceptions. Other human attributes will also affect the content and scope of our consciousness. For example, we have a well-developed ability to use language, allowing us to develop more complex constructs and share and learn. We may have better mechanisms for recording and analysing time and better mechanisms for memory and predicting the future.
Consequently, humans may be able to achieve integration with a greater causal power, but we cannot yet know whether this is true.
The theory does not prove that machines can or cannot be conscious. However, it makes machine consciousness less likely and emphasises the importance of the mechanism of integration. The authors show that complex algorithms may produce behaviours that look complex but are entirely predictable. In contrast, a much less extensive mechanism may still form more abstracted concepts from sensory input. This is illustrated in figure 21.
The authors of this model of consciousness (IIT 3.0) also show that the analytic mechanism can generate a self-referential and holistic concept within itself. This experience is independent of other things.
They state, in support of this contention, “And indeed, once the architecture of the brain has been built and refined, having an experience with its full complement of intrinsic meaning – does not require the environment at all, as demonstrated every night by the dreams that occur when we are asleep and disconnected from the world.” [Page 23]
I cannot evaluate the mathematics supporting the hypothesis and must trust the authors and other mathematicians who have no doubt read and considered the content. Assuming the theory’s validity, I draw the following clinical conclusion about assessing and managing people with a prolonged disorder of consciousness.
At one time, people with a prolonged disorder of consciousness were classified as being in, for example, the ‘minimally conscious state’ with an implication that there was no variation. One single observation of a more complex behaviour would immediately re-classify them. This theory makes it evident that a person’s degree of consciousness (self-awareness) will fluctuate, potentially over a few seconds. This is what we all experience.
Therefore, when making decisions about a patient with a prolonged disorder of consciousness, whose conscious experience may be at a low level or of limited duration, one should consider their possible experiences and their relative duration as far as possible. In other words, take a holistic view of the person’s life, now and in the future, rather than being concerned with categorising the person into any state or group.