From artificial to organic: Rethinking the roots of intelligence for digital health

  • Prajwal Ghimire,

    Roles Conceptualization, Investigation, Methodology, Resources, Writing – original draft, Writing – review & editing

    prajwal.1.ghimire@kcl.ac.uk

    Affiliations School of Biomedical Engineering & Imaging Sciences, King’s College London, London, United Kingdom, Department of Neurosurgery, King’s College Hospital NHS Foundation Trust, London, United Kingdom

  • Keyoumars Ashkan

    Roles Conceptualization, Project administration, Writing – review & editing

    Affiliations School of Biomedical Engineering & Imaging Sciences, King’s College London, London, United Kingdom, Department of Neurosurgery, King’s College Hospital NHS Foundation Trust, London, United Kingdom, Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, United Kingdom

Abstract

The term “artificial” implies an inherent dichotomy from the natural or organic. However, AI, as we know it, is a product of organic ingenuity—designed, implemented, and iteratively improved by human cognition. The very principles that underpin AI systems, from neural networks to decision-making algorithms, are inspired by the organic intelligence embedded in human neurobiology and evolutionary processes. The path from “organic” to “artificial” intelligence in digital health is neither mystical nor merely a matter of parameter count—it is fundamentally about organization and adaptation. Thus, the boundaries between “artificial” and “organic” are far less distinct than the nomenclature suggests.

Introduction

The mid-20th century was a formative era for the study of machine intelligence. In 1950, the British mathematician Alan Turing proposed a thought experiment—later known as the Turing Test—to probe a fundamental question: could a machine ever think? Turing argued that if a computer could execute a conversation so seamlessly that a human judge could not distinguish it from a real person, then, for all practical purposes, the machine was “thinking” [1]. His idea gave early researchers a criterion for comparing artificial behavior to human cognition, even if no one believed it to be a perfect or final measure.

Just a few years later, in 1956, the Dartmouth Summer Research Project on Artificial Intelligence brought together a small group of visionary scientists [2]. They gave the new field its name, Artificial Intelligence (AI), and set forth the bold goal of replicating or exceeding human cognitive capabilities in non-biological substrates. These early pioneers approached their work as a grand quest to construct minds out of silicon and algorithms, rather than flesh and neurons. If Turing’s thought experiment was a philosophical spark, the Dartmouth gathering ignited an entire academic discipline.

From these beginnings, the notion took hold that machine intelligence might evolve into a distinct and separate entity, growing ever more sophisticated until it approached or even surpassed human intellect [3].

Roots: Human inputs and patterns of thought

To date, we continue to use Turing’s framework and the Dartmouth-inspired term “Artificial Intelligence”. Yet as AI technology has advanced, and particularly as data-driven machine learning systems have come to dominate the field, our understanding of what makes these systems “intelligent” has shifted. Instead of observing entirely new forms of reasoning emerging from isolated digital minds, we see something more nuanced: these systems are deeply and inescapably rooted in human inputs, human culture, and human patterns of thought [4].

For all the complexity of modern machine learning, the fact remains that today’s AI models learn from data we provide. Whether they are identifying objects in images, translating languages, recognizing speech, or engaging in human-like conversation, their abilities flow from patterns observed in massive, human-curated datasets [5]. The clever turns of phrase in a language model’s output are echoes of human writing. The refined decision-making of a recommendation system arises from signals in human behavior. Even the architectures of neural networks are designed, tuned, and improved by people drawing inspiration from biological brains and mathematical insights [6].

Terminology: Artificial, organic, and intelligence

This interconnectedness underscores a crucial point: what we call “artificial” intelligence is not, in reality, conjured out of a void. Instead, it is a distillation of our collective intelligence, channeled and rearranged by algorithms. The very term “artificial” might suggest that these systems are something other than human in their origins, but this framing can be misleading. Just as languages evolve through communities of speakers, and just as cultural knowledge passes through generations of human minds, the “intelligence” in AI emerges from the human intellectual ecosystem it was trained on. Machines do not intrinsically know how to parse a sentence or evaluate the correctness of a fact—these capabilities only arise from exposure to our texts, images, and examples.

This realization changes how we understand Turing’s challenge and the Dartmouth vision. When an AI passes a Turing-like test of conversational skill, it is not proving that it possesses some newly minted, inorganic mind. Rather, it is demonstrating how adeptly it can replicate human conversational patterns. If we cannot tell whether the speaker behind the screen is human or machine, it is because the machine is reflecting, in refined statistical form, the human-made patterns that taught it how to speak in the first place. The “intelligence” we see is, at its core, a reflection of the organic intellect that produced the input data [7].

Acknowledging this human backbone to machine intelligence also carries implications for ethics, accountability, and design. If what we label as “artificial” is in fact “organically” derived—from human knowledge, human choices, and human biases—then we remain responsible for the outcomes. If a biased dataset leads to a biased model, then the root cause is not some alien mind but our own flawed inputs [8]. Understanding AI as organic in its essence encourages us to scrutinize the data we feed it and the purposes we set. It reminds us that the machine’s “values” are, in truth, our values writ large and automated at scale.

Thus, we define organic intelligence not by the material of its substrate but by the organization of its dynamics: systems that exhibit self-organization, adaptive plasticity, and hierarchical feedback control. In this view, the so-called artificial architectures of deep learning are themselves extensions of organic principles, materialized through inorganic means.

Organically-rooted constructs

This perspective does not diminish the genuine achievements of AI research. On the contrary, it highlights an extraordinary human accomplishment: extending intelligence, previously confined to the realm of organic matter, so that it can now also be delivered by inorganic matter. We have thus created tools that can amplify, reorganize, and reflect our collective intellect in powerful new ways. These systems have the capacity to make certain tasks easier, to unearth patterns we might have missed, and to serve as creative partners in fields from science to the arts. By seeing them for what they are, organically-rooted, human-guided constructs, we can better integrate them into society, ensuring they complement rather than distort our priorities [9].

Concept of artificial general intelligence, superintelligence, and digital health

Recent progress has provided a base for the development of artificial general intelligence (AGI) and, ultimately, super general intelligence (SGI) [10–14]. This has been possible due to enhancements in hardware capabilities. The core idea is still to mimic human brain networks in order to achieve skills such as multitasking, reasoning, and establishing causal relationships with multiple predictions, although not without the inherent bias that comes from training performed by human users [10–14]. These models are likely to be useful for digitizing hospital-related tasks, enhancing the experience of digital health. This will be possible only if speed is balanced with accountability, explainability is treated as part of safety, and human-level breadth across tasks is targeted under resource constraints. In clinical settings, these aspects translate into uncertainty-aware objectives, rehearsed rollback protocols, and escalation pathways [10,14].
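
As a minimal sketch of what an escalation pathway might look like in code (the entropy threshold, triage labels, and routing function below are illustrative assumptions, not a system described in the cited literature), a classifier’s predictive entropy can gate whether a case is reported automatically or escalated to a clinician:

    import numpy as np

    def predictive_entropy(probs):
        """Shannon entropy (natural log) of a class-probability vector."""
        probs = np.clip(probs, 1e-12, 1.0)
        return float(-np.sum(probs * np.log(probs)))

    def route_case(probs, entropy_threshold=0.5):
        """Hypothetical routing rule: confident predictions are auto-reported,
        uncertain ones are escalated to a clinician for review."""
        if predictive_entropy(probs) > entropy_threshold:
            return "escalate-to-clinician"
        return "auto-report"

    # A confident versus an ambiguous prediction over three triage classes
    print(route_case(np.array([0.95, 0.03, 0.02])))  # auto-report
    print(route_case(np.array([0.40, 0.35, 0.25])))  # escalate-to-clinician

The design choice here is that uncertainty, rather than a fixed class label, determines whether the inorganic system acts alone or hands control back to organic judgment.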

Some examples of AI applications in healthcare with organic roots would be sparse and modular architectures for radiology triage and neuro-oncology stratification, which align with small-world efficiency principles [11,12,15]. Other examples include continual and domain-adaptive learning models with homeostatic calibration for cross-site generalization [4,5], along with hybrid neuro-symbolic and memory-augmented networks that integrate reasoning and perception for longitudinal patient monitoring [16]. The main technical barriers to applying these models would be data quality and harmonization across institutions, generalization and calibration under scanner and protocol drift, compute and energy constraints for edge deployment, and governance of adaptive models [17].
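
One hedged reading of “homeostatic calibration” in practice is post-hoc temperature scaling: refitting a single temperature on held-out logits from a new site so that confidence stays matched to accuracy. The sketch below uses simulated logits and is a stand-in illustration, not the specific method of the cited works:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def softmax(logits, T=1.0):
        z = logits / T
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def fit_temperature(logits, labels):
        """Find the temperature minimizing negative log-likelihood on held-out data."""
        def nll(T):
            p = softmax(logits, T)
            return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
        return minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x

    # Hypothetical validation logits and labels from a newly onboarded imaging site
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(200, 3)) * 3.0   # deliberately overconfident raw logits
    labels = rng.integers(0, 3, size=200)
    T = fit_temperature(logits, labels)
    calibrated = softmax(logits, T)            # recalibrated probabilities for the new site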

The potential milestones that will be required before AGI and SGI can be considered for healthcare contexts could be robust narrow AI (modular clinical models with drift detection, with safeguards including dataset harmonization and calibration metrics); cross-task generalization (unified triage, segmentation, and report generation, with safeguards including explainability and abstention frameworks); tool-use and reasoning (integration with EHRs and external databases, with safeguards including continuous auditing and model cards); and autonomy under oversight (context-aware multi-agent reasoning, with safeguards including uncertainty over preferences, corrigibility, and off-switch verification) [18–22]. These milestones provide a bridge from current digital health AI to the AGI/SGI discourse.
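
As one concrete, hedged illustration of the drift-detection safeguard (the feature, the significance level, and the simulated data are assumptions; a real deployment would monitor many features and correct for multiple testing), a two-sample Kolmogorov–Smirnov test can flag when an incoming batch departs from the reference distribution:

    import numpy as np
    from scipy.stats import ks_2samp

    def detect_drift(reference, incoming, alpha=0.01):
        """Flag distribution drift between a reference cohort and an incoming batch."""
        stat, p_value = ks_2samp(reference, incoming)
        return {"statistic": stat, "p_value": p_value, "drift": p_value < alpha}

    rng = np.random.default_rng(42)
    reference_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # e.g., historical lesion volumes
    incoming_feature = rng.normal(loc=0.4, scale=1.2, size=300)    # batch after a scanner or protocol change

    print(detect_drift(reference_feature, incoming_feature))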

Bias mitigation can be achieved within algorithmic design through re-weighting and counterfactual data augmentation during training, combined with structural plasticity that can down-weight spurious or site-specific connections over time [23–25]. Accountability, on the other hand, can be achieved by embedding governance hooks, such as logging of rewiring events, explanation stability, or abstention triggers, directly into the model architecture, ensuring traceable model evolution [26]. In this way, bias mitigation and accountability are engineered into the system’s logic rather than deferred to the clinical workflow.
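
A minimal sketch of these two ideas together (the subgroup labels, weighting scheme, and logged event schema are hypothetical, chosen only to illustrate the principle): training examples from under-represented subgroups receive larger loss weights, and every abstention is written to an audit log so that model behavior remains traceable:

    import json
    import logging
    from collections import Counter

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("governance")

    def inverse_frequency_weights(group_labels):
        """Give each sample a loss weight inversely proportional to its subgroup frequency."""
        counts = Counter(group_labels)
        n, k = len(group_labels), len(counts)
        return [n / (k * counts[g]) for g in group_labels]

    def log_abstention(case_id, entropy, threshold):
        """Governance hook: record every abstention event for later audit."""
        audit_log.info(json.dumps({"event": "abstention", "case_id": case_id,
                                   "entropy": round(entropy, 3), "threshold": threshold}))

    # Hypothetical subgroup labels (e.g., acquisition site) for one training batch
    weights = inverse_frequency_weights(["site_A"] * 80 + ["site_B"] * 20)
    log_abstention(case_id="case-0173", entropy=1.08, threshold=0.5)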

Implications of shift in terminology

The shift in terminology from real/artificial to organic/inorganic has implications for research priorities, benchmark design, and interdisciplinary collaboration [12]. It can thus lead to a paradigm change in research focus from model scale to organizational efficiency, and to dynamic benchmarks that test adaptability and calibration under distribution shift rather than static, accuracy-only leaderboards. Furthermore, the organic/inorganic lens invites joint design between neuroscientists, clinicians, and AI engineers—treating intelligence as a continuum of structure and adaptation rather than a categorical divide [12,13,27–29]. These shifts will reframe AI research as an integration science, aligning computational modeling with biological organization and cognitive safety.
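
As a hedged sketch of what such a dynamic benchmark might report alongside accuracy (the binning scheme and the simulated “shifted” split are illustrative assumptions), expected calibration error compares predicted confidence with observed accuracy on a test set drawn after a distribution shift:

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        """Average |accuracy - confidence| over equally spaced confidence bins."""
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.any():
                gap = abs(correct[mask].mean() - confidences[mask].mean())
                ece += mask.mean() * gap
        return float(ece)

    # Hypothetical predictions on a shifted (new-site) test split
    rng = np.random.default_rng(7)
    conf_shifted = rng.uniform(0.5, 1.0, size=1000)
    correct_shifted = (rng.uniform(size=1000) < conf_shifted * 0.8).astype(float)  # overconfident model
    print(expected_calibration_error(conf_shifted, correct_shifted))

A leaderboard built on such metrics would reward models that stay calibrated as conditions change, which is closer to the adaptive, self-regulating behavior the organic framing emphasizes.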

Conclusion

In the decades since Turing’s and the Dartmouth pioneers’ era, we have advanced toward systems that can meet and sometimes surpass the benchmarks of that era’s imagination. But we have also learned that “artificial” intelligence cannot be neatly separated from the human context that birthed it [30]. The name may endure out of historical convenience, but as we chart the future of AI towards superintelligence, it may be more accurate to think of these technologies as inorganic channels for organic wisdom, extended and transformed through computational means. The time, perhaps, is right to rethink the name: to move away from real versus artificial intelligence and towards organic versus inorganic intelligence as we bring digital health into our hospitals. After all, intelligence, organic or inorganic, is defined by how systems organize and adapt information.

References

  1. Turing AM. Computing machinery and intelligence. Mind. 1950;59(236):433–60.
  2. McCarthy J, Minsky ML, Rochester N, Shannon CE. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine. 2006;27(4):12.
  3. Russell S, Norvig P. Artificial intelligence: a modern approach. 4th ed. Pearson; 2021.
  4. Mitchell M. Artificial intelligence: a guide for thinking humans. Penguin; 2019.
  5. Halevy A, Norvig P, Pereira F. The unreasonable effectiveness of data. IEEE Intell Syst. 2009;24(2):8–12.
  6. Goodfellow I, Bengio Y, Courville A. Deep learning. MIT Press; 2016.
  7. Floridi L, Chiriatti M. GPT-3: its nature, scope, limits, and consequences. Mind Mach. 2020;30:681–94.
  8. Caliskan A, Bryson JJ, Narayanan A. Semantics derived automatically from language corpora contain human-like biases. Science. 2017;356(6334):183–6. pmid:28408601
  9. Crawford K. Atlas of AI: power, politics, and the planetary costs of artificial intelligence. Yale University Press; 2021.
  10. Legg S, Hutter M. Universal intelligence: a definition of machine intelligence. arXiv; 2007. Available from: https://arxiv.org/abs/0712.3329
  11. Yuan Y, Liu J, Zhao P, Xing F, Huo H, Fang T. Structural insights into the dynamic evolution of neuronal networks as synaptic density decreases. Front Neurosci. 2019;13:892. pmid:31507365
  12. Yuan Y, Chen X, Liu J. Editorial: Brain-inspired intelligence: the deep integration of brain science and artificial intelligence. Front Comput Neurosci. 2025;19:1553207. pmid:40104427
  13. Dehghani N, Levin M. Bio-inspired AI: integrating biological complexity into artificial intelligence. arXiv; 2024.
  14. Hassabis D, Kumaran D, Summerfield C, Botvinick M. Neuroscience-inspired artificial intelligence. Neuron. 2017;95(2):245–58. pmid:28728020
  15. Latora V, Nicosia V, Russo G. Complex networks: principles, methods and applications. Cambridge University Press; 2017.
  16. Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K, et al. A guide to deep learning in healthcare. Nat Med. 2019;25(1):24–9. pmid:30617335
  17. Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. 2022;28(1):31–8. pmid:35058619
  18. Thórisson KR, Isaev P, Sheikhlar A. Artificial general intelligence (LNCS, 14951). Springer; 2024.
  19. Russell S. Artificial intelligence and the problem of control. In: Werthner H, Prem E, Lee EA, Ghezzi C, editors. Perspectives on digital humanism. Cham: Springer; 2022.
  20. Hadfield-Menell D, Dragan A, Abbeel P, Russell S. The off-switch game. In: IJCAI, 32. 2017. p. 220–7.
  21. Bostrom N. Superintelligence: paths, dangers, strategies. Oxford University Press; 2016.
  22. Topol EJ. Deep medicine: how artificial intelligence can make healthcare human again. Basic Books; 2019.
  23. Ilievski F, Hammer B, van Harmelen F, Paassen B, Saralajew S, Schmid U, et al. Aligning generalization between humans and machines. Nat Mach Intell. 2025;7(9):1378–89.
  24. Górriz JM, Álvarez-Illán I, Álvarez-Marquina A, Arco JE, Atzmueller M, Ballarini F, et al. Computational approaches to explainable artificial intelligence: advances in theory, applications and trends. Information Fusion. 2023;100:101945.
  25. Mehrabi N, Morstatter F, Saxena N, Lerman K, Galstyan A. A survey on bias and fairness in machine learning. ACM Comput Surv. 2021;54(6):1–35.
  26. Varshney KR, Alemzadeh H. On the safety of machine learning: cyber-physical systems, decision sciences, and data products. Big Data. 2017;5(3):246–55. pmid:28933947
  27. Marcus G. The next decade in AI: four steps towards robust artificial intelligence. arXiv; 2020. arXiv:2002.06177.
  28. Bengio Y, Lecun Y, Hinton G. Deep learning for AI. Commun ACM. 2021;64(7):58–65.
  29. Doshi-Velez F, Kim B. Towards a rigorous science of interpretable machine learning. arXiv; 2017. arXiv:1702.08608.
  30. Boden MA. AI: its nature and future. Oxford University Press; 2016.