Hallucination or Confabulation? Neuroanatomy as metaphor in Large Language Models

Large Language Models (LLMs) and their capabilities have become an increasingly prominent aspect of our workflows and our lives.

LLM outputs vary significantly due to the probabilistic nature of the transformer architecture [6], and this same lack of deterministic, rules-based output likely accounts for the tendency of LLMs to produce non-factual or "confabulated" narrative details. For ease of reference, we provide a summary of the relevant points of comparison between these terms in Table 1.
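The non-determinism described above arises at the decoding step: a transformer emits a probability distribution over candidate next tokens, and sampling from that distribution can yield different outputs on different runs. The following is a minimal sketch of temperature-scaled sampling; the token list and logit values are hypothetical illustrations, not taken from any real model.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution; higher
    temperature flattens the distribution, increasing variability."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature=1.0):
    """Draw one token from the temperature-scaled distribution,
    mimicking the stochastic decoding step of an LLM."""
    probs = softmax(logits, temperature)
    return random.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-token candidates and logits for illustration only.
tokens = ["Paris", "Lyon", "France", "the"]
logits = [5.0, 2.0, 1.5, 1.0]

# Greedy (temperature -> 0) decoding always picks the argmax token;
# sampled decoding sometimes emits lower-probability alternatives.
greedy = tokens[logits.index(max(logits))]
samples = [sample_token(tokens, logits, temperature=1.5) for _ in range(10)]
```

Repeated runs of the sampled decode need not agree with one another, which is the mechanical root of the output variability noted in the text.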
In using language derived from human neurocognitive processes, we recognize the risk of seeming to advance an anthropomorphic view of LLMs [7]. This is not our intention and does not accurately reflect our thinking. Rather than anthropomorphizing LLMs as displaying human traits and behaviours, thereby implying a capacity for empathic connection, a motivation for power, and other similarly far-fetched ideas, we believe that using linguistic terms more faithful to the underlying technology provides the clarity needed to build trust and drive adoption. When metaphoric language is used with precision, functional aspects of LLMs can be subjected to analogical reasoning, yielding plausible roadmaps for the development of more powerful, more advanced, and more helpful models.
As noted above, confabulation is often associated with clinical conditions involving right-hemispheric deficits. The right hemisphere of the brain is thought to be responsible for a wide range of cognitive functions, including processing non-verbal cues, understanding the emotional states of others, and appreciating the nuances of music [8]. When this hemisphere is damaged, the left hemisphere predominates, but does so in a more literal and simplistic way. This can lead to a focus on detail, a preference for sequencing and ordering, and an overly optimistic and often unrealistic assessment of the self [8].
We suggest that the analogy between the left hemisphere's orientation to the world and current LLMs is instructive. With the availability of massive computational power, LLMs vastly outperform the human brain's ability to absorb and retain large amounts of information and can produce outputs on a scale that no individual human could. Yet LLMs, like the unmitigated, confabulating left hemisphere, may confidently produce false information. We propose that the "human-in-the-loop" approach [9] to the responsible use of AI in the medical context may be seen as a reintroduction of the contextualising and sense-making functions of the right hemisphere, in the form of direct human oversight. While the landscape changes rapidly, at this time humans remain better suited to real-world decision-making under conditions of uncertainty, and for now we alone possess the capacity for empathy, embodiment, and the complex value-based prioritisation required to make judgements in medical care. In the collaboration between humans and AI technology, we witness a synergistic relationship reminiscent of the brain's right and left hemispheres. Just as these brain regions possess unique strengths that complement each other, humans and AI each contribute capabilities that may compensate for the other's limitations. This partnership bears the promise of reshaping industries and solving unimaginably complex problems.
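The human-in-the-loop pattern described above can be sketched as a simple gating workflow: model outputs are never released directly, but are routed through a human reviewer, with low-confidence drafts returned for revision. This is a minimal illustration under our own assumptions; the `Draft` structure, the confidence threshold, and the stub reviewer are all hypothetical, not part of any cited framework.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Draft:
    claim: str
    confidence: float  # model's self-reported confidence, 0..1 (illustrative)

def human_in_the_loop(
    drafts: List[Draft],
    reviewer: Callable[[Draft], bool],
    threshold: float = 0.9,
) -> Tuple[List[Draft], List[Draft]]:
    """Release a draft only if it clears a confidence threshold AND a
    human reviewer approves it; everything else is returned for revision
    rather than published. The reviewer supplies the contextualising,
    sense-making oversight the text analogises to the right hemisphere."""
    approved, returned = [], []
    for d in drafts:
        if d.confidence >= threshold and reviewer(d):
            approved.append(d)
        else:
            returned.append(d)
    return approved, returned

# Stub reviewer standing in for a clinician's judgement (hypothetical).
drafts = [
    Draft("Dose: 500 mg twice daily", 0.95),
    Draft("Patient history includes a 2019 appendectomy", 0.60),
]
approved, returned = human_in_the_loop(drafts, reviewer=lambda d: True)
```

The design point is that even a fully approving reviewer cannot release the low-confidence draft: the machine's fluent output is always subordinate to human sign-off.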
Over the longer term, will we see the development of artificial intelligence analogues to right-hemispheric ways of thinking? We think the answer is, in many ways, yes, though perhaps not to the dystopian extent that some may fear. Artificial correlates of empathy may yet be years away, but we see multiple specialised LLMs interacting in a structured way, much like the gated interconnectivity of our neuroanatomy, on the very near-term horizon [9]. There is collective wisdom preserved in evolution's partitioning of selectively interconnected brain structures, which we believe provides a map for our approach to the development of advanced AI systems. The use of inaccurate language often leads to pervasive misunderstandings that become increasingly difficult to correct over time. Particularly in the medical context, where the adoption of new technologies can have both immediate and long-term implications for the health and well-being of the population, it is important that we choose our words, and thus our metaphors, carefully. We propose using the term "confabulation" not merely to correct a misnomer, but because the neuroanatomical analogy it implies unlocks new ways of understanding and suggests exciting new paths for technological advancement.
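The "gated interconnectivity" of specialised models mentioned above can be pictured as a routing layer that passes each query to at most one specialist and falls back to a generalist otherwise. The sketch below uses stub functions in place of real models, and keyword gating in place of a learned router; every name here is a hypothetical placeholder for illustration.

```python
# Stub specialists standing in for separately trained, domain-specific
# LLMs (hypothetical; a real system would call actual models).
def radiology_model(query: str) -> str:
    return f"[radiology] {query}"

def pharmacology_model(query: str) -> str:
    return f"[pharmacology] {query}"

def general_model(query: str) -> str:
    return f"[general] {query}"

# A crude keyword gate; a production router might instead be a learned
# classifier deciding which specialist receives the query.
SPECIALISTS = {
    "scan": radiology_model,
    "x-ray": radiology_model,
    "dose": pharmacology_model,
    "drug": pharmacology_model,
}

def route(query: str) -> str:
    """Gate each query to at most one specialist; unmatched queries fall
    back to a generalist, loosely mirroring the selective, partitioned
    interconnectivity the text draws from neuroanatomy."""
    for keyword, model in SPECIALISTS.items():
        if keyword in query.lower():
            return model(query)
    return general_model(query)
```

The partitioning is the point: each specialist sees only the queries gated to it, rather than every model seeing everything.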