
Hierarchy and hope: Exploring AI’s role in medicine through a thematic analysis of online discourse

  • Johan Pushani,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Department of Paediatrics, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada

  • Sherwin Rajkumar,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Department of Paediatrics, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada

  • Alishya Burrell,

    Roles Investigation, Methodology, Supervision, Writing – review & editing

    Affiliation Department of Medicine, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada

  • Erin Peebles,

    Roles Investigation, Methodology, Supervision, Validation, Writing – review & editing

    Affiliation Department of Paediatrics, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada

  • Amrit Kirpalani

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Supervision, Visualization, Writing – original draft, Writing – review & editing

    amrit.kirpalani@lhsc.on.ca

    Affiliations Department of Paediatrics, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada, Division of Nephrology, Children’s Hospital, London Health Sciences Centre, London, Ontario, Canada

Abstract

The healthcare community remains divided on the benefits of artificial intelligence (AI) in medicine. In this qualitative study, we sought to better understand the perceived opportunities and threats of AI among premedical students, medical students, and physicians. We conducted a thematic analysis of discussions on Reddit, a social platform where candid opinions are often shared. Posts from the r/premed, r/medicalschool, and r/medicine subreddits were searched using the terms “AI”, “chatGPT”, “openAI”, and “artificial intelligence”. We analyzed 2403 comments across 47 threads from December 2022 to August 2023. A coding scheme was developed manually following Braun and Clarke’s (2006) framework, and common themes were extracted. The main themes identified centered on AI enhancement versus replacement. Careers perceived to be lower in the medical social hierarchy were considered most at risk of replacement: AI was thought to first replace non-medical jobs, followed by mid-levels, then primary care and diagnostic specialties, with specialists and surgeons affected last. Some contributors emphasized that AI could never replace a physician’s compassion and nuanced clinical judgment; others viewed AI as a tool to enhance efficiency, particularly in tasks such as studying, note writing, screening, and triage. Although verifying the credentials of commenters on online forums poses a challenge, platforms like Reddit offer a valuable opportunity to understand nuanced attitudes and perceptions regarding AI in medicine. While AI was generally well-received, we identified a key finding: a socially hierarchical, biased form of thinking among healthcare professionals. The perpetuation of this mindset may contribute to role devaluation, mistrust, and collaboration challenges within healthcare teams–ultimately impacting patient care. To fully leverage AI’s potential in medicine, it is critical to acknowledge and address potentially biased perceptions within the healthcare community.

Author summary

Artificial intelligence (AI) tools, like ChatGPT, are rapidly becoming part of healthcare, yet there is still uncertainty about whether AI will primarily support clinicians or replace them. In this study, we analyzed public online discussions to better understand how premedical students, medical students, and physicians talk about AI in medicine. Using thematic analysis, we reviewed 2,403 comments across 47 Reddit threads from December 2022 to August 2023. We found two dominant themes. First, many users viewed AI as an enhancement tool, supporting studying, writing, clinical documentation, and early screening or triage, while emphasizing that AI can be inaccurate and requires human oversight. Second, others emphasized AI as a potential replacement force, often predicting a “hierarchy” of job risk in which roles perceived as lower in the medical social structure were viewed as more vulnerable to automation than specialist physicians. These findings suggest that opinions about AI are shaped not only by technology but also by professional identity and hierarchy, factors that may influence collaboration and the equitable implementation of AI in healthcare.

Introduction

The integration of artificial intelligence (AI) in healthcare marks a pivotal juncture, with AI’s potential to revolutionize diagnostics, treatment, and patient care at the forefront of contemporary medical discourse. Yet, amidst this technological advance, the medical community’s reception remains mixed–characterized by both optimism for AI’s transformative capabilities and apprehension regarding its impact on professional roles and patient care [1–3].

The need to investigate these perceptions is underscored by the rapid evolution of AI technologies and their implementation in healthcare environments. As innovation outpaces policy and practice, aligning technological advancement with healthcare delivery and professional norms becomes increasingly urgent [4]. To support this alignment, a deeper understanding of healthcare professionals’ perceptions of AI is needed.

While previous survey-based studies have explored medical professionals’ perceptions of AI, spontaneous and candid perspectives remain underexamined. Our qualitative study analyzed Reddit discussions to capture unfiltered views on the opportunities and challenges AI is perceived to bring to the medical field. By incorporating the perspectives of premedical students, medical students, and physicians, we provide a broader snapshot of how AI is perceived across the current and future medical workforce.

Although the credentials of online commenters cannot be verified and these comments do not represent the medical field as a whole, open discussions on platforms like Reddit offer valuable insight into nuanced and candid opinions surrounding AI in medicine. These conversations can reveal not only individual opinions but also implicit attitudes and underlying structural themes–such as role prestige and the social hierarchy within medicine–that shape how different groups discuss and engage with AI. This work complements traditional survey-based studies and can help guide the ethical development of AI solutions while safeguarding the integrity of medical practice and patient care amidst this digital transformation [5].

Methods

In this qualitative study, we analyzed discussions on the topic of AI in medicine on the social media site Reddit. Our primary goal was to explore and understand perceptions around AI in medicine amongst pre-medical students, medical students, and physicians.

Platform

Reddit is a popular social news aggregation and discussion platform that is widely used by students and physicians. Users can ask questions, disseminate information, and publicly share experiences. Submitted content, organized into threads, is displayed in topical categories, or subreddits, for all users to interact with. As of October 2023, there were over 100 thousand active communities and 16 billion posts and comments [6]. The specific subreddits analyzed in this paper include r/premed with 388 thousand members, r/medicalschool with 715 thousand members, and r/medicine with 461 thousand members.

While Reddit users typically post under pseudonyms, making individual identities and credentials unverifiable, the platform remains a credible data source for qualitative research. Although anonymity limits demographic precision, it also facilitates spontaneous, unfiltered discourse—offering insight into genuine attitudes and experiences, particularly within Reddit’s topic-specific communities organized around shared interests [7]. Anonymity further reduces social desirability bias, enabling users to express perspectives they might withhold in identifiable settings. This allows researchers to capture authentic concerns rather than performative responses, and encourages discussion of sensitive or controversial professional topics that are rarely addressed in formal surveys. Subreddit-specific moderation policies, community voting systems, and participation norms further support content quality and relevance [8]. Reddit has also been increasingly adopted in academic studies exploring health-related topics due to its large, active user base and transparent, publicly accessible data [9]. These characteristics make it a valuable platform for thematic and exploratory qualitative analysis.

Theoretical framework

Thematic analysis offers a flexible and rigorous approach for examining qualitative data, particularly suited for analyzing the perceptions and attitudes of diverse groups regarding artificial intelligence (AI) in healthcare [10]. This method allows for the identification, analysis, and reporting of patterns within data, providing a comprehensive and detailed understanding of the discussions around AI in medicine.

For this study, we adopted a contextualist approach, which acknowledges the ways individuals create meaning from their experiences within broader social contexts [10]. This perspective is essential for capturing the complex views on AI among different groups within the medical community.

Data collection

We searched the r/premed, r/medicalschool, and r/medicine subreddits for all discussion threads created between December 2022 and August 2023, using the terms “AI”, “chatGPT”, “openAI”, and “artificial intelligence”. The content of each thread was reviewed by two researchers (JP and SR) and manually coded to enhance contextual understanding and researcher reflexivity. The initial phase involved immersing ourselves in the data through repeated readings of Reddit threads to fully understand the content. Threads relating to the search terms but not providing opinions or stances were excluded from coding, along with threads labelled as “Meme” and “Shitpost”.
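The screening step described above can be sketched as a simple inclusion filter. This is an illustrative sketch only: the study's screening was performed manually by two researchers, and the thread records, field names, and `is_eligible` helper below are hypothetical.

```python
import re

# Hypothetical illustration of the thread-screening criteria; the actual
# study screened threads manually. Records and field names are invented.
SEARCH_TERMS = ["ai", "chatgpt", "openai", "artificial intelligence"]
EXCLUDED_FLAIRS = {"Meme", "Shitpost"}  # flairs excluded from coding

def is_eligible(thread):
    """A thread is eligible if it matches a search term and lacks an excluded flair."""
    if thread.get("flair") in EXCLUDED_FLAIRS:
        return False
    text = (thread["title"] + " " + thread.get("body", "")).lower()
    # Match terms as whole words so "ai" does not hit words like "said"
    return any(re.search(r"\b" + re.escape(t) + r"\b", text) for t in SEARCH_TERMS)

threads = [
    {"title": "ChatGPT wrote my personal statement", "flair": None},
    {"title": "Will AI replace radiologists?", "flair": None},
    {"title": "AI takes over the wards", "flair": "Meme"},
    {"title": "Match day megathread", "flair": None},
]

included = [t for t in threads if is_eligible(t)]  # first two threads survive
```

Note that a further manual step, excluding threads that matched the search terms but offered no opinion or stance, cannot be captured by keyword filtering alone.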

Inductive coding was performed on each data set by two researchers (JP and SR), who independently reviewed threads and generated a set of preliminary codes. The code sets from each thread were compared between researchers and assessed for similarities and differences. When a thread was coded differently, the researchers discussed their reasons for selecting a given code; one of the two codes was then selected, or an overarching code was used as a middle ground. After establishing consistency in their coding, the researchers coded the remaining threads together. The coding process was cyclical, in that researchers returned to the data after an initial pass at coding, as well as dynamic, in that codes were refined and grouped to highlight similarities and minimize redundancy.

A thematic analysis was initially performed by three researchers (JP, SR, AK) during a group discussion and meeting. Themes were identified by grouping related codes and examining overarching patterns and their interactions [10]. We then used triangulation by involving two additional researchers (AB, ERP), who were not part of the original data collection, to provide additional perspective and refine the analysis. Thematic saturation was established by iteratively analyzing data until no new themes or insights emerged, demonstrating redundancy in findings. This process ensured a thorough representation of the phenomenon under study. Despite the absence of inter-rater reliability statistics, this limitation was mitigated through triangulation and reflexivity.

Reflexivity

The researchers in this study had an interest in medical technologies such as AI and share a common belief that AI will make an impact in the medical field, albeit with no stance on the directionality or extent of its implication. We reflected individually and as a group on our potential biases and their impact on our data analysis. Naturally, as medical students and physicians, our role and socialization within the medical education system may have shaped our interpretations. We acknowledge that our experiences may impact the interpretation of the data.

Ethical considerations

As per university protocol, this study did not require ethics board approval, as it was purely a thematic analysis of discourse from a publicly accessible website. No contact was made with users, and there was no risk or harm to any individual. Our team functioned exclusively as observers.

Results

We analyzed a total of 2403 comments across 47 threads from December 2022 to August 2023 (Table 1). Seventeen threads containing 832 comments were coded from r/medicalschool. Eighteen threads containing 1319 comments were coded from r/medicine. Twelve threads containing 252 comments were coded from r/premed.
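As a simple arithmetic check (an illustrative sketch, not part of the study's methods), the per-subreddit figures reconcile with the reported totals:

```python
# Thread and comment counts per subreddit, as reported in Table 1
counts = {
    "r/medicalschool": {"threads": 17, "comments": 832},
    "r/medicine": {"threads": 18, "comments": 1319},
    "r/premed": {"threads": 12, "comments": 252},
}

total_threads = sum(v["threads"] for v in counts.values())    # 47 threads
total_comments = sum(v["comments"] for v in counts.values())  # 2403 comments
```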

Table 1. AI-related search terms and number of relevant threads analyzed from medical subreddits.

https://doi.org/10.1371/journal.pdig.0001212.t001

Our analysis revealed two principal themes: AI Enhancement, highlighting AI’s capacity to bolster medical practices through educational support, writing assistance, and operational efficiencies; and AI Replacement, exploring the contentious debate over AI’s potential to supplant human jobs within medicine (Fig 1). Through the lens of our thematic analysis, we uncovered the multifaceted views of the medical community, reflecting a balance of optimism and concern regarding AI’s evolving role in healthcare.

While much of the discussion around generative AI used ChatGPT as an example, the discourse largely treated this tool as a jumping-off point for broader discussion of AI more generally.

Theme 1: AI enhancement

The first major theme was AI enhancement, in which many users framed AI not merely as a facilitative tool but as a pivotal element in advancing medical education, writing, and predictive healthcare management. Discussions centered on AI’s multifaceted benefits while acknowledging its limitations and scrutinizing the implications of these technological interactions in healthcare.

Educational aid.

The role of AI as a dynamic educational resource was evident, with students leveraging its capabilities to generate study materials, including licensing exam-style questions and creative mnemonics. This innovative use of AI facilitates a deeper understanding and retention of information, as one participant noted, “I asked it to generate NBME [National Board of Medical Examiners]-style questions... It’s useful as a quick reference and especially useful as a way to re-organize information... generate mnemonics etc.” (Thread 8, Table 2). The sentiment was echoed by another, who found AI “beneficial for summarising or directing your study... asking for mnemonics or poems or references to study from” (Thread 8, Table 2). These insights underscore AI’s potential to simplify complex concepts, making educational content more accessible, as illustrated by the comment, “I use it to help me study ‘explain X and Y like I’m 5 with examples’... It will only get smarter” (Thread 25, Table 2).

Table 2. Summary of Reddit threads identified by search terms and their corresponding number of comments.

https://doi.org/10.1371/journal.pdig.0001212.t002

However, concerns were raised about AI’s reliability, with numerous comments highlighting the tendency of ChatGPT to provide incorrect or misleading information, or as one user deemed “something between incorrect answers presented with made-up citations… to answers that are just slightly off” (Thread 6, Table 2). The collective viewpoint suggests a cautious approach: “it can be wrong at times, so you gotta use your best judgement” (Thread 8, Table 2), emphasizing the need for critical engagement with AI-generated content.

Personal assistant tool.

The utility of AI as a writing tool was another main point of discussion. Users overwhelmingly found ChatGPT helpful when writing essays and literary works, and noted that this would be particularly valuable in medicine, where clarity and precision of communication are paramount. Users appreciated AI for “having writer’s block? Let chatgpt [sic] give you a starting point... Too tired to edit an essay? Ask chatgpt [sic] to find grammar and spelling mistakes” (Thread 25, Table 2).

Appreciation was also given to the potential for AI to reduce the time spent on administrative tasks that were felt to be mundane or uninteresting:

“My attending wrote some insurance appeal letters with it after we were joking about chatGPT [sic] use. He was impressed with the result.”

Yet, the absence of a personal touch in AI-generated content was noted, highlighting a trade-off between efficiency and the nuanced expression of human emotions and stylistic preferences. Many users agreed that generative AI “removes some of the pathos and stylistic choices” (Thread 22, Table 2) or that “the main lacking trait was a personal touch or personal voice” (Thread 26, Table 2); however, many were quick to emphasize that “you can always edit the draft chatgpt [sic] makes” (Thread 22, Table 2), suggesting this may not be as significant a limitation.

Clinical efficiency tool.

Another area in which AI was seen as helpful was triage and prediction. Some users believed AI could leverage these capacities to increase efficiency and better evaluate outcomes and potential complications based on certain patient features.

The most likely answer is that machine learning may increase our efficiency and accuracy by processing large quantities of data and automating repetitive tasks with its output being “preliminary” and requiring the review and editing of a physician (think writing notes, triage, oxygen weaning recommendations, maybe even prelim radiology reads or at least highlighting areas of likely interest). (Thread 4, Table 2)

While it shows promise in these areas, users agreed that it still lacks refinement and overall reliability. As such, its current role is that of a screening tool rather than a diagnostic one.

“There is a fairly common AI service that most hospitals have been using for years that will screen CT heads for bleeds, CTAs for large vessel occlusions, and CT PE studies for PE. It is OK but quite frequently misses clinically relevant findings, and more or less doesn’t look for any pathology outside the listed diagnoses.” (Thread 46, Table 2)

Theme 2: AI replacement

Many discussions framed AI not merely as an assistive tool but as one with the potential to make specific aspects of human physicianship obsolete. The discourse on AI’s potential to replace jobs in medicine uncovered a complex narrative that delves into the social hierarchy of replacement, conditional acceptance of AI’s role, and the nuanced implications of augmentative technologies. This theme highlights the medical community’s divided stance on AI, reflecting a spectrum of opinions that range from cautious optimism to outright skepticism, all framed within the broader context of healthcare’s evolving landscape.

Social hierarchy of replacement.

At the heart of the discussion was concern over AI’s capability to replace human roles within medicine, sparking a debate that touches on fundamental aspects of medical practice. The conversation was anchored in a perceived hierarchy of vulnerability to AI replacement, with users suggesting a tiered approach to AI’s integration in which non-medical jobs were seen as the most vulnerable, followed by “mid-level healthcare workers” (allied health professionals), with physicians, particularly subspecialists, viewed as less at risk. One participant starkly noted, “By the time AI is able to replace doctors, they’ll have already replaced basically every other knowledge worker” (Thread 9, Table 2), implying a gradual encroachment of AI across professions.

The dialogue evolves to address the medical field specifically, where mid-level practitioners such as NPs and PAs are viewed as more immediately susceptible to AI’s influence than physicians, a sentiment captured by the assertion that “an introduction of AI won’t skip straight to the top and replace doctors” (Thread 21, Table 2). This distinction extends within the physician community itself, with primary care practitioners perceived as more vulnerable compared to specialists and surgeons. The latter group is considered the least at risk due to the complex nature of surgical procedures, as highlighted by one user: “We are nowhere remotely close to AI touching surgery” (Thread 29, Table 2).

Contrasting with these views are arguments emphasizing the irreplaceable aspects of human judgment, complex decision-making, and holistic patient care that AI cannot mimic. The skepticism about AI assuming total responsibility is underscored by concerns over legal liabilities, with one participant remarking, “I honestly don’t think AI will ever result in endangering physicians’ jobs” (Thread 3, Table 2), reflecting doubts about AI’s full integration into clinical settings.

Amidst these varied perspectives, some comments employ satire to diminish fears of AI replacement, likening them to historical apprehensions about technological advancements rendering human skills obsolete, “It’s like when calculators were invented and they got rid of all the mathematicians” (Thread 34, Table 2). However, the prevailing sentiment is not one of dichotomy but rather a nuanced view that AI will transform, not replace, medical practice. This view is exemplified by the belief that AI will serve as an augmentative force, enhancing diagnostic support and triaging in primary care, “AI is more likely to complement healthcare professionals rather than replace them entirely” (Thread 29, Table 2).

Through these discussions, a consensus emerged that while AI is set to significantly influence medical practice, it is anticipated to act as an assistive technology, bolstering the efficiency and capability of healthcare delivery without displacing the essential role of human professionals in medicine. This narrative reflects a collective anticipation of AI as a transformative, yet non-displacing, force within healthcare, reinforcing the enduring value of human expertise in the field.

Augmentation as a precursor to replacement.

While many users favoured the role of AI as an assistive tool for physicians, many stressed its secondary effect of reducing the total number of workers. If AI enabled physicians to increase their productivity, users anticipated a decrease in the number of physicians needed by a hospital.

“I don’t see AI ever totally causing job obsolescence. However, I can see diagnostic radiology experiencing the first effects in the form of reduction in the number of jobs. Imagine an AI that can screen hundreds of images in minutes and highlight any perceived abnormality, and kicking it over to be reviewed by a human who then verifies the “abnormality” as a pathological issue or normal.” (Thread 3, Table 2).

The anticipation of AI’s role in diagnostic radiology as a precursor to broader impacts across the field reflects a pragmatic view of technology’s integration into healthcare, “If it can allow one person to do the work of 5, it’s game over for 80% of people” (Thread 5, Table 2).

Financial implications.

As a result of AI increasing physician efficiency and productivity, many users raised concerns about changes to compensation structures: “your job will likely become a lot faster since you’ll mostly screen for the AI detections to see if they’re false positives/negatives, which means you’ll be expected to do a lot more during your time and get compensated the same” (Thread 34, Table 2).

A small proportion of users explored compensation changes through the lens of hospital administrators and how hospitals are set to profit from proper AI implementation. In the eyes of the hospital, “physicians are expensive to employ and ‘wasteful’” (Thread 29, Table 2). Concerns about potential changes to compensation structures, with AI offloading tasks from physicians, hint at the economic dimensions of AI’s adoption. The expectation that hospitals might employ fewer physicians to maintain profitability underscores the potential economic pressures driving AI’s integration: “In any case, [AI] dramatically reduces the need for physicians on staff, and hospitals will not hold onto useless staff members” (Thread 29, Table 2).

The discourse on AI replacement in medicine was characterized by recognition of AI’s transformative potential alongside critical appraisal of its limitations. The nuanced discussions reflected conditional acceptance of AI’s inevitable role while offering contrasting perspectives on its impact on the future healthcare workforce.

Discussion

Our thematic analysis of discourse on AI in medicine revealed a landscape characterized by both enthusiasm for AI’s potential to enhance various aspects of medical practices and apprehension about its capacity to replace human roles. The identified themes of AI Enhancement and AI Replacement capture a collective anticipation of AI’s transformative impact, tempered by concerns about its implications for the medical workforce. This nuanced perspective highlights the complex dynamics of integrating AI into healthcare, reflecting a cautious optimism where AI is perceived both as a complement to and a challenger of traditional medical roles.

Discussions around AI in educational support, writing assistance, and clinical efficiency reflect optimism about its ability to augment healthcare delivery [11,12]. These perspectives align with best practices in the field, supporting AI’s role as a tool that enhances, but does not replace, professional expertise [13]. Nevertheless, concerns regarding AI’s limitations also emerged, underscoring the ongoing need for human oversight to ensure safe and reliable AI applications [14].

Discussions also touched on AI’s capacity to replace human jobs within the medical hierarchy, revealing a nuanced debate that extends beyond technological feasibility to touch on social and professional hierarchies. This perspective on AI’s role in healthcare—where the perceived replaceability of human roles by AI is influenced by one’s position within the medical hierarchy—raises important questions about the intersection of technology, power, collaboration, and professional identity.

Our findings suggest that professionals perceive AI’s threat to replace human roles as stratified across different levels of the medical hierarchy. Notably, tasks performed by those lower in the hierarchy are perceived as more vulnerable to automation, while those requiring complex judgment by physicians and specialists are viewed as less replicable. This hierarchy-driven perception highlights how social and professional structures influence expectations about AI’s impact, reflecting broader workforce concerns about technology’s effects on jobs [15]. These views are also consistent with literature indicating that tasks requiring high perceived levels of cognitive complexity and emotional intelligence are less likely to be replaced by AI [16]. Such perspectives emphasize the enduring importance of human qualities in healthcare: empathy, moral judgment, and the physician-patient relationship, which may be difficult to replicate with AI. Emanuel and Wachter further support this view by arguing for AI as an augmentative tool, emphasizing the irreplaceable value of human judgment in clinical decision-making [17].

The implications of these findings extend beyond individual roles to broader healthcare systems. Hierarchical perceptions may reinforce biases, foster mistrust, degrade communication, and heighten workplace tension. Low interdisciplinary rapport has been repeatedly linked to poorer patient outcomes, undermining both patient-physician trust and public confidence in healthcare–effects which are likely amplified by entrenched social hierarchies [18].

Beyond the clinical setting, hierarchical perceptions carry broader professional and societal consequences. Healthcare hierarchies not only shape workplace culture but also influence policy, often privileging higher-status professionals while marginalizing others [19]. Because higher-ranked professionals are perceived as less replaceable, they may discount AI’s relevance to their own roles and underestimate its evolving capabilities in replicating these sophisticated tasks. Given their disproportionate influence on policy and reform, this perception risks diminishing AI’s value and slowing its adoption. In this way, entrenched hierarchies may obscure AI’s potential to enhance care across all levels of practice. As Emanuel and Wachter argue, such misconceptions may hinder the equitable and effective rollout of AI technologies, underscoring the need to reassess how roles are valued and how AI is integrated into healthcare [17].

Similar dynamics appear in curriculum development, where entrenched hierarchies may limit interdisciplinary representation or AI literacy. Without deliberate adaptation, this risks reinforcing outdated professional silos rather than preparing clinicians for collaborative, technology-enhanced practice. Effective curricula must teach not only AI skills but also interprofessional collaboration in contexts where AI performs routine cognitive tasks. Recent frameworks recommend integrating AI literacy across the learning continuum and expanding interprofessional education so learners understand shifting scopes, task allocation, and shared responsibilities when AI is introduced [20,21]. Failure to do so may lead trainees to over-trust technology or undervalue colleagues, ultimately compromising safe and effective AI implementation.

Taken together, these findings highlight the urgent need to reassess hierarchical assumptions and role valuations in healthcare. Acknowledging and addressing biased perceptions through education, policy reform, and equitable integration is essential to ensure that AI serves as a universal augmentative tool, enhancing care across all levels, rather than reinforcing outdated hierarchies.

Limitations

Our study is not without limitations. As a qualitative analysis, it does not offer quantitative measures of the prevalence or weight of the identified themes; future research could incorporate such methods to enhance generalizability. Although GPT-4 was the most advanced model available during the study period, we could not confirm which version individual commenters were using, which may have influenced the capabilities and outputs they described. The use of Reddit discussions also introduces uncertainty around the authenticity and background of commenters; users are not verified as premedical students, medical students, or healthcare professionals, and the platform tends to attract more tech-savvy individuals. The voices captured may therefore be disproportionately those of internet-active, self-selecting users and early adopters of AI, which may in turn under-represent the perspectives of older clinicians, nurses, and allied health professionals, as well as those working in settings with limited digital infrastructure. Additionally, comments were not run through an AI- and bot-content detector, raising the possibility that some posts were partially or fully generated by language models–an added layer of complexity when interpreting user perspectives. Lastly, the rapid evolution of AI technology and its applications in healthcare may outpace the relevance of these findings, emphasizing the need for ongoing research and dialogue. Future studies using survey-based attitude metrics may bring to light changing attitudes regarding AI in medicine as it becomes increasingly incorporated into the healthcare field.

Conclusion

We characterize the perceived opportunities and threats of AI in medicine and identify a social-hierarchical model of replacement, in which commenters perceived AI's threat to human roles as stratified across levels of the medical hierarchy. Notably, tasks performed by physicians deemed lower in the hierarchy were viewed as more susceptible to AI replacement. This line of thinking could obscure the true potential and benefits of AI in healthcare, while also threatening its proper adoption and equitable implementation. Recognizing and addressing these potentially biased perceptions within the medical field is essential if AI is to be leveraged to its full potential.

Supporting information

S1 Table. Summary of threads included in thematic analysis.

https://doi.org/10.1371/journal.pdig.0001212.s001

(DOCX)

References

  1. Meskó B, Hetényi G, Győrffy Z. Will artificial intelligence solve the human resource crisis in healthcare? BMC Health Serv Res. 2018;18(1):545. pmid:30001717
  2. Lee P, Bubeck S, Petro J. Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine. N Engl J Med. 2023;388(13):1233–9. pmid:36988602
  3. Ahuja AS. The impact of artificial intelligence in medicine on the future role of the physician. PeerJ. 2019;7:e7702. pmid:31592346
  4. Blease C, Kaptchuk TJ, Bernstein MH, Mandl KD, Halamka JD, DesRoches CM. Artificial Intelligence and the Future of Primary Care: Exploratory Qualitative Study of UK General Practitioners’ Views. J Med Internet Res. 2019;21(3):e12802. pmid:30892270
  5. Vellido A. Societal Issues Concerning the Application of Artificial Intelligence in Medicine. Kidney Dis (Basel). 2019;5(1):11–7. pmid:30815459
  6. Reddit. Reddit by the Numbers. 2019 [cited 2020 Mar 31]. Available from: https://www.redditinc.com/press
  7. De Choudhury M, De S. Mental Health Discourse on reddit: Self-Disclosure, Social Support, and Anonymity. ICWSM. 2014;8(1):71–80.
  8. Medvedev AN, Lambiotte R, Delvenne JC. The anatomy of Reddit: An overview of academic research. Dynamics on and of Complex Networks. 2019;3:183–204.
  9. Park A, Conway M. Longitudinal Changes in Psychological States in Online Health Community Members: Understanding the Long-Term Effects of Participating in an Online Depression Community. J Med Internet Res. 2017;19(3):e71. pmid:28320692
  10. Braun V, Clarke V. Using thematic analysis in psychology. Qualitative Research in Psychology. 2006;3(2):77–101.
  11. Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230–43. pmid:29507784
  12. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44–56. pmid:30617339
  13. Luxton DD. Artificial intelligence in psychological practice: Current and future applications and implications. Professional Psychology: Research and Practice. 2014;45(5):332–9.
  14. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94–8. pmid:31363513
  15. Autor DH. Why Are There Still So Many Jobs? The History and Future of Workplace Automation. Journal of Economic Perspectives. 2015;29(3):3–30.
  16. Brynjolfsson E, McAfee A. The second machine age: Work, progress, and prosperity in a time of brilliant technologies. New York, NY: W.W. Norton & Company; 2014.
  17. Emanuel EJ, Wachter RM. Artificial Intelligence in Health Care: Will the Value Match the Hype? JAMA. 2019;321(23):2281–2.
  18. Alvarez G, Coiera E. Interdisciplinary communication: an uncharted source of medical error? J Crit Care. 2006;21(3):236–42; discussion 242. pmid:16990088
  19. Singh PK, Singh S, Ahmad S, Singh VK, Kumar R. Navigating power dynamics and hierarchies in medical education: Enhancing faculty experiences and institutional culture. J Postgrad Med. 2025;71(2):82–90. pmid:40488556
  20. Blanco MA, Nelson SW, Ramesh S, Callahan CE, Josephs KA, Jacque B, et al. Integrating artificial intelligence into medical education: a roadmap informed by a survey of faculty and students. Med Educ Online. 2025;30(1):2531177. pmid:40660466
  21. Tolentino R, Baradaran A, Gore G, Pluye P, Abbasgholizadeh-Rahimi S. Curriculum Frameworks and Educational Programs in AI for Medical Students, Residents, and Practicing Physicians: Scoping Review. JMIR Med Educ. 2024;10:e54793. pmid:39023999