Artificial intelligence and virtual technologies in healthcare have advanced rapidly, and healthcare systems have been adapting care accordingly. An intriguing new development is the virtual physician, which can diagnose and treat patients independently.
Methods and findings
This qualitative study of advanced degree students aimed to assess their perceptions of using a virtual primary care physician as a patient. Four focus groups were held: first year medical students, fourth year medical students, first year engineering/data science graduate students, and fourth year engineering/data science graduate students. The focus groups were audiotaped, transcribed verbatim, and content analysis of the transcripts was performed using a data-driven inductive approach. Themes identified concerned advantages, disadvantages, and the future of virtual primary care physicians. Within those main categories, 13 themes and 31 sub-themes emerged.
While participants appreciated that a virtual primary care physician would be convenient, efficient, and cost-effective, they also expressed concern about data privacy and the potential for misdiagnosis. To garner trust from its potential users, future virtual primary care physicians should be programmed with a sufficient amount of trustworthy data and have a high level of transparency and accountability for patients.
Citation: Goetz CM, Arnetz JE, Sudan S, Arnetz BB (2020) Perceptions of virtual primary care physicians: A focus group study of medical and data science graduate students. PLoS ONE 15(12): e0243641. https://doi.org/10.1371/journal.pone.0243641
Editor: Maria Rosaria Gualano, University of Turin, ITALY
Received: April 14, 2020; Accepted: November 20, 2020; Published: December 17, 2020
Copyright: © 2020 Goetz et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All de-identified transcripts have been attached as Supporting Information.
Funding: This study was funded by Dr. Bengt Arnetz’s Michigan State University start-up funds.
Competing interests: The authors have declared that no competing interests exist.
As artificial intelligence (AI) technology continues to advance, healthcare systems are adapting care delivery to incorporate these technologies and enhance the services they provide [1–3]. Moreover, the ongoing COVID-19 pandemic has dramatically accelerated the use of telehealth and AI-informed decision-making in healthcare [4–6]. Additionally, patients have begun to rely on artificial intelligence to inform their healthcare decisions. For example, with the introduction of online tools such as symptom checkers, healthcare consumers utilize primary care differently, often reducing their urgency to receive care or intended level of care (urgent care, primary care, hospital, etc.). Patients have also begun to utilize other healthcare technologies, such as telemedicine, receiving primary care via text or video chat. Further, young people seem to be using healthcare differently than previous generations, generally choosing more convenient and cost-effective options for their care.
The incorporation of artificial intelligence and other technologies into everyday healthcare has elicited both confidence and suspicion from providers and healthcare teams. Those who are concerned about the use of AI in healthcare commonly cite the potential for the technology to replace healthcare workers, although there is little empirical evidence to support this. Other concerns include issues of accuracy, accountability, transparency, and privacy. There is also a dearth of empirical evidence of the benefits of AI on patients and consumers. A particularly interesting development in AI is virtual primary care. In contrast to other at-home telehealth healthcare options such as phone and video visits, virtual primary care would allow patients to receive care virtually from an AI “physician.” Insurers, such as Humana, continue to expand their offerings of healthcare plans centered around virtual care. Employers like Amazon and Walmart are also starting to offer virtual care to their employees. Companies such as Babylon Health have already developed AI doctors with similar diagnostic accuracy to human physicians. The market for AI in healthcare has been projected to grow to $27 billion a year in 2025. Despite this, there is little peer-reviewed research on the impact of switching to virtual care on providers or patients.
Virtual primary care
Virtual primary care, an artificial intelligence system (not a human physician) used via audio/videoconferencing, chat, or email, is becoming more possible as technology advances [1,10,12]. It is therefore not unrealistic to imagine that a virtual primary care physician may be a reality in the not-too-distant future [1,12,14]. Technology is likely to transform healthcare, and considerable developments have already been made. One impactful development in health technology is remote monitoring, which is the use of digital technologies to monitor and capture health data from patients that is electronically transmitted to healthcare systems. This technology has utility for conditions like diabetes and hypertension, where data like blood glucose and blood pressure can be automatically monitored by AI systems [15,16]. Recently, technology has also markedly contributed to the field of radiology. For example, artificial intelligence has recently been shown to be more accurate than radiologists in breast cancer prediction, and to have similar or better accuracy than dermatologists in diagnosis of melanoma [18,19].
While it is clear that artificial intelligence technologies may transform healthcare, less is known about the perspectives of potential users. In fact, a review study concluded that, rather than developing additional algorithms and systems, researchers should be seeking the perspectives of patients and consumers to ensure that the technology will actually be used by patients, and that it will have a meaningful impact on their health. As younger generations age and begin to make their own healthcare decisions, their opinions are especially relevant. Data from the 2018 U.S. Census show that young adults ages 19–34 are the most likely to be uninsured. In addition, a recent Blue Cross Blue Shield Association study showed that one in three millennials do not have a primary care physician, instead choosing more convenient and cost-effective options like urgent care and retail clinics. Technology use is a potential strategy to increase healthcare access and utilization among young people, perhaps while reducing costs [1,21]. Research shows that younger generations use more modern technology and are more likely than older generations to be early adopters of new technologies. For example, various studies of direct-to-consumer telehealth utilization demonstrate that a majority of patients are young adults [21,24,25]. It has also been demonstrated that those who are uninsured are likely to prefer telehealth over a traditional office visit, and that telehealth may increase utilization among patients who otherwise would not have sought care.
In order to better understand the perspectives of younger adults, the current study explored the attitudes of current medical/technology-focused graduate students toward virtual primary care physicians (vPCPs), artificial intelligence systems that diagnose and provide care. We chose to focus on advanced degree students in these disciplines because they were likely to have an informed understanding of healthcare or engineering, which might enhance their understanding of this new technology. In addition, research has shown that higher education level may predict early adoption of health technology [26,27]. Of note, this study took place in early 2019, before the COVID-19 pandemic. As such, these results should be considered in context.
Participants were recruited from two midwestern universities to participate in focus groups. Mass recruitment emails were sent to students in medical school and data science graduate programs. Five first-year medical students and three fourth-year medical students were recruited, along with four first-year engineering graduate students and three fourth-year computer/data science graduate students. We chose these groups to represent those who had a working knowledge of care provision, as well as those who had an understanding of the science behind AI technology. In addition, we chose students in the first and fourth (final) year of their schooling, in order to assess possible differences based on level of education and experience. The fourth-year students were also likely to have more experience using healthcare as a patient, as a function of age. While this study was determined exempt by the Institutional Review Board of the investigators’ university, all participants signed a consent form explaining that the sessions would be recorded, but their responses would not be linked back to them as individuals. Participants were provided with a meal during the focus groups and were also sent a $25 gift card for their participation.
Data collection took place between March and June in 2019. One-hour focus groups were held with each of the four participant groups, i.e., first-year medical students, fourth-year medical students, first-year engineering/data science graduate students, and fourth-year graduate students. One researcher facilitated the focus groups, while two others were note-takers. The same researcher (JA), who has expertise in conducting focus groups, facilitated each group. Each focus group was audiotaped. Researchers developed a list of questions aimed at eliciting students’ opinions about their willingness to use, as a patient, a virtual primary care provider (vPCP), the pros and cons related to use, and how they imagined it would be operationalized (S1 Appendix). These questions were developed by the research group in a series of discussions informed by relevant literature and past experience conducting focus group research. When conducting the focus groups, the facilitator followed a question script in order to limit variation between groups. We defined virtual primary care physician as “an artificial intelligence system, not a physician via audio/videoconferencing, chat, or email”. In addition, we reiterated to the medical students that we wanted to hear their opinions of using a vPCP as a patient, rather than incorporating a vPCP into their future clinical practice. Upon completion of the fourth focus group, no new themes emerged and it was determined that thematic saturation was achieved. Thus, no further focus groups were held.
The audiotaped focus group discussions were transcribed verbatim. Qualitative content analysis of the transcribed texts was conducted using a data-driven inductive approach to code content into themes.
To begin, two researchers, one who was present during the focus groups and another who was not, examined transcripts of each focus group separately, coding dominant responses. These codes were then aggregated into main themes. The two coders compared these themes jointly and discussed agreements and differences. The initial agreement between the coders was about 80%. These coders then reevaluated the data until agreement was reached and no new themes emerged. A third researcher who was present during the focus groups then read the four transcripts and validated these findings.
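The paper reports initial inter-coder agreement of about 80% but does not state how it was computed; a minimal Python sketch of simple percent agreement (one common measure) using hypothetical toy theme codes illustrates the idea:

```python
# Hypothetical illustration only: simple percent agreement between two coders.
# The theme labels below are invented toy data, not the study's actual codes.
def percent_agreement(coder_a, coder_b):
    """Return the share of items both coders labeled identically."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must label the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Toy codes assigned to 10 transcript excerpts by two coders.
coder_a = ["convenience", "cost", "privacy", "accuracy", "cost",
           "privacy", "humanness", "cost", "accuracy", "privacy"]
coder_b = ["convenience", "cost", "privacy", "accuracy", "humanness",
           "privacy", "humanness", "cost", "privacy", "privacy"]

print(f"{percent_agreement(coder_a, coder_b):.0%}")  # 8 of 10 codes match
```

Percent agreement does not correct for agreement expected by chance; chance-corrected statistics such as Cohen's kappa are often preferred when reporting inter-coder reliability.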
We hypothesized that the technology-focused graduate students would be more supportive of the vPCP than the medical students. We assumed that the graduate students would have a more thorough knowledge of artificial intelligence technology, and thus would not feel as much uncertainty as the medical students regarding virtual care. In addition, we imagined that the medical students might be more concerned than the graduate students about the loss of human connection in their care. We also hypothesized that, compared to the fourth-year students, the younger first-year students in both disciplines would be more supportive of using a vPCP. We imagined that the younger students would be more likely to consider convenience and cost when making decisions about their medical care, while the older students might place more value on the human connection, since they were likely to have more experience as a patient.
The consolidated criteria for reporting qualitative research (COREQ) were used to guide data collection and reporting. Qualitative rigor was fulfilled using Guba and Lincoln’s criteria (credibility, transferability, dependability, and confirmability) as a guide. Credibility (analogous to internal validity) was achieved by comprehensiveness in data collection and analysis. All coders became thoroughly familiar with the data by reading through the transcripts multiple times. Transferability (analogous to external validity) was assured by using verbatim quotes as relevant examples given by students from multiple ages and academic disciplines. Dependability (analogous to reliability) was achieved by using one coder who was not involved in the data collection. Confirmability was achieved through triangulation, involving three researchers, one of whom had not been present during the focus group discussions. All coders analyzed the verbatim reports, then validated findings amongst themselves.
The analysis revealed distinct themes that were readily categorized according to the focus group discussion question areas of advantages, disadvantages, and the future of vPCPs. Within those main categories, 13 themes and 31 sub-themes emerged. An overview of the main themes and sub-themes is provided in Table 1.
Students’ perceptions of the advantages of vPCPs could be categorized into five main themes: convenience/access, efficiency, standardized care, low cost, and accuracy (Table 1). A total of 15 sub-themes emerged, with multiple subthemes within each main theme.
Participants felt that a vPCP would be convenient to use and would increase access to primary care. This theme could be separated into three subthemes: routine illnesses, transportation, and anytime/anywhere.
Students felt that a virtual PCP could be used for routine illnesses, for example:
“… cold, flu, or high-blood pressure… These very common diseases, I think [a] virtual physician can handle…” (First year medical student)
Students noted that using a vPCP would eliminate transportation demands for the patient:
“I was ill last semester… And it would have been really helpful for me not to have to drive to a clinic…” (First year medical student)
Participants felt it was advantageous that the vPCP could be accessed anytime and anywhere, including in rural areas and underdeveloped countries where access to healthcare is limited.
“….you can basically contact whenever you have the problem.. you can access it really easily.” (First year graduate student)
Students agreed that using a vPCP would make some healthcare processes more efficient, both for patients and providers. This theme could be further defined into three subthemes: one-stop shop, knowledge support, and physician support.
The students viewed the vPCP as a sort of one-stop shop for all specialties:
“If you use a computer, I would think… it should be a specialist on everything.” (First year graduate student)
Participants noted that the vPCP could provide knowledge support to a human physician.
“If I was in the room, and essentially got two doctors, basically, cause you got the machine and the doc, that would be another cool thing.” (Fourth year medical student)
The participants believed that the vPCP could complete simpler tasks (documentation, prescription orders, etc.) so that human physicians could focus on and complete more complex tasks.
“I could imagine a physical could be something the machine could do ‘cause it’s not really something you need to diagnose. Just putting in a bunch of data.” (Fourth year medical student)
Participants believed that using a virtual PCP would bring about lower costs for both patients and systems, directly and indirectly. Three subthemes emerged: money, time, and manpower.
Students felt that using a virtual PCP would cost less than visiting a human physician.
“Cheaper and faster…. as a student, I really care about like, how much I would pay for a visit in a hospital sometime.” (Fourth year graduate student)
They noted that using a virtual PCP would save time by reducing the time spent in waiting rooms or traveling to clinics and hospitals.
“I don’t wanna waste the time to go wait in the waiting room… whereas I could just sit home in my pajamas and talk to someone for 5 minutes–in and out.” (First year medical student)
The use of a vPCP would also lessen manpower demands, both for clinics and at-home caregivers.
“…there are caregiver roles that you have to play for family members, and if this work could be done by the tool, it could actually be very helpful.” (First year medical student)
Increased accuracy was perceived as being an advantage of using a vPCP. From this theme, three subthemes emerged: human error, information capacity, and diagnostic bias.
The groups felt that the vPCP would not be inhibited by human error or emotional decision making, leading to a lower likelihood of mistakes.
“I don’t see any involvement of, like, emotion at that point, which is like something they tell to doctors. Like, ‘Keep your emotions away,’ right?” (Fourth year graduate student)
Students also acknowledged that a virtual PCP has a much greater capacity for information than a human physician.
“[H]uman brains are not really fully used… They cannot memorize everything… you cannot remember every single patient… what do they look like, what’s their disease look like, whatever. They may make mistakes… AI can record all the information." (Fourth year graduate student)
The students added that a vPCP may have less diagnostic bias. They visualized the ideal vPCP as incorporating large amounts of data from patients worldwide, potentially eliminating diagnostic bias towards locally frequent illnesses:
“…it might catch some of the more obscure diagnoses or things that are frequently missed from a normal human physician perspective.” (Fourth year medical student)
The students felt that using a vPCP might result in reduced stigma. This theme could be separated into two subthemes: discrimination and embarrassment.
Students felt that using a vPCP may reduce bias or discrimination. A few students gave personal stories of instances where they have felt stigmatized while using medical care.
“… there are certain stigma or certain assumptions people make based on how they look… You don’t have to deal with that when you’re dealing with a tool.” (First year medical student)
While this was a recurring theme, it was not shared among all of the students. A few students thought that availability of a vPCP could lead to increased discrimination related to medical care. For example, one student expressed concern that the vPCP would be made available only to poorer or uninsured patients, while wealthier patients would be able to see a human physician.
The groups thought that the lack of human contact might make it easier for patients to speak about stigmatized or “embarrassing” things, such as sexually transmitted infections and mental health issues.
“And in many cases, you won’t feel like sharing what you have, what you’re going through with another person. So, in that case, for maintaining your confidentially or secrecy. It would be better to be with the machine…maybe if you have a machine you can talk, you can tell what you’re going through.” (First year graduate student)
The students in each of the focus groups saw a number of disadvantages with using a virtual PCP, relating to the following themes: data security, humanness, misdiagnosis, and suitability (Table 1).
This theme had one subtheme: information theft. Each group was concerned about the potential for their personal information to be stolen, sold, or otherwise shared without their consent:
“…it could be sort of sold, it could be hacked, it could be taken into a wrong direction.” (Fourth year medical student)
“I would never consult a virtual physician… The data… would go into hands of someone that I do not trust.” (First year graduate student)
Another disadvantage that was discussed in our focus groups was the lack of humanness that would come with using a vPCP. From this theme came three subthemes: physical exam, shared decision-making, and patient compliance.
There was concern among the groups about the lack of a physical exam when using a vPCP:
“…the complete lack of a physical exam. I wouldn’t be examined. My heart wouldn’t be listened to, my lungs wouldn’t be listened to.. the complete lack of touch, the complete[ly] lack of a physical exam would bother me.” (First year medical student)
Another concern was that the vPCP would not involve the patient in the decision-making process.
“…having that shared discussion as far as these are the positives and negatives of this treatment…that removes a lot of the personal decision that come into healthcare.” (Fourth year medical student)
The students also believed that patient compliance would be lower when using a vPCP.
“[I]f it were to write prescriptions, maybe I wouldn’t take them, who knows. Like, it wouldn’t be as serious to me as a real physician…” (First year graduate student)
Another major concern was the potential for misdiagnosis. Two subthemes emerged: accountability and reporting inaccuracy.
The students questioned who would be responsible for mistakes made by the vPCP. Many students expressed that this would need to be made clear to them before they would use a virtual physician.
“…it’s worse if a computer makes the mistake because then the idea is, well, who do you sue? Whose fault is it?” (Fourth year medical student)
They also theorized that the vPCP could be easily misled by a patient’s misunderstanding of their own symptoms, as their reporting to the vPCP may not always be clear and straightforward:
“So, if I’m not aware of what I’m going through, I might [put something] wrong into that, I might enter that I’m dealing with some other disease.” (First year graduate student)
Students in each group agreed that a vPCP would not be suitable in all cases. Two subthemes emerged from these discussions: rare conditions and mental health.
Students believed a vPCP would not be appropriate for rare conditions:
“What if an event that has never been present, that was never used to train that AI system, presents itself in the future? What happens?” (Fourth year graduate student)
Some students believed a vPCP would be inappropriate for mental health concerns, especially in conditions like depression where human connection is important:
“…so much of medicine is not medicine. It’s being a person, being a listener, being somebody you can talk to…” (Fourth year medical student)
Future of vPCP
Students were asked to discuss how they envisioned a vPCP would work in the future. Multiple themes emerged, centered around physicians, patients, population health, and data technology (Table 1).
Much of the discussion around the future of virtual primary care centered around how it might affect human physicians. From these discussions, two subthemes emerged: cannot replace and data checking.
Students agreed that the vPCP would be a tool for the human physician, but could not replace them:
“Not necessarily as a stand-in for a physician. I just see it as a way of having 10,000 brains about this one problem as opposed to just one.” (First year medical student)
The students also noted that any data or algorithm used by the vPCP would need to be checked and verified by human physicians.
“You end up building some kind of algorithm and have it take all these factors in consideration, then you test it… you have human physicians check it… anytime the computer makes a diagnosis or prescribes a treatment or whatever, then a physician would look at what the computer’s doing and say, okay. Yes.” (Fourth year medical student)
Another theme centered around how a future vPCP might function to benefit patients. Two subthemes emerged: patient engagement and chronic conditions.
Students saw the potential for more personalized patient engagement when using the vPCP.
“…it would be nice if the AI came in and took exactly what the doctor said and was able to formulate something specific to you, you know, not generic, not just the standard machine jargon, specifically.” (Fourth year medical student)
They also envisioned that the vPCP would be beneficial for continuous monitoring of chronic conditions, for example:
“You could have someone who’s a diabetic, and you feed the artificial intelligence data about your last blood glucoses and your A1C’s without leaving your home.” (First year medical student)
A theme that arose in multiple groups was the use of a vPCP for population health and epidemiology.
“[If] it was like a virus, and it actually, it was like spreading, and so the person who was, like, helping, like, nurses and all, they’re also getting infected…If there would have been a virtual machine to do that… let’s put the machine inside the room; let’s not enter there…. I feel like some extent, like, in some areas, we definitely need AI.” (First year graduate student)
“[I]n addition to that, epidemiologically… we could predict real-time health outbreaks that are occurring… And all of a sudden, we have epidemiologic data that suggests, oh, in this region the outbreak is concentrated. It was this batch of lettuce that was contaminated. We can trace it back, and so I see a place for that.” (First year medical student)
Students also discussed how the data technology might function in a future vPCP. Four subthemes emerged: dynamic systems, adequate testing, trustworthy data, and transparency.
Students discussed the importance of a dynamic system.
“… (if) the system is very open source and, like, doctors and stuff can keep putting information in there, constantly, then it might be a very robust virtual physician.” (First year graduate student)
They also reiterated that any vPCP must be adequately tested before its release:
“So, if I know that it’s an effective tool, it’s been proven, it’s been vetted, that would make me feel more comfortable.” (First year medical student)
Students expressed that data entered into the vPCP must come from or be verified by multiple trustworthy sources without competing interests.
“[Y]ou have this human checking, was this product right or wrong… we have to make sure that those companies…they’re not implementing a product just for the sake of having it tomorrow…we need some professionals in the field, and everybody who is trying to implement this sort of AI would have to meet [their] requirements before they can actually send it to hospitals” (Fourth year graduate student)
Finally, the students urged transparency regarding the data source used by the vPCP. They would want to know who was developing and implementing the system, the data that was being used to train the vPCP, how the vPCP was tested during development, and how their data would be used:
“Knowing who designed it and whether it was a physician or a group of physicians or a hospital… what dataset did they use. I’d want to know more about that.” (First year medical student)
The researchers also analyzed differences between groups. No major differences were identified between focus group participants at different education levels (first vs. fourth year). Between academic disciplines, opinions differed regarding two themes: mental health and trust. Medical students expressed hesitancy over using a vPCP for mental health concerns:
“Well, I think difficult topics, difficult situations, more psych-related issues, things that you actually just want to talk to a person about, you lose the relatability. ‘Cause a lot of times, even docs, they are very relatable people.” (Fourth year medical student)
Conversely, students in the engineering/data science groups thought that a vPCP would be easier to talk to:
“Like, as a human, there is some point where you really don’t want to express everything to a human…without the fear of like, what does the other one think about you?” (Fourth year graduate student)
While medical students generally did not trust vPCPs, citing data privacy and potential for misdiagnosis, the engineering/data science students were supportive.
“…we certainly live in an era of big data, but like, I think that I would not be okay with sharing my information… I don’t even care if it’s depersonalized. I don’t care if it’s numbers attached without my name. I wouldn’t want to share my information with anybody.” (First year medical student)
“…when they would come out, they will be at the same levels or better than human. And as time goes by, they will only get better.” (Fourth year graduate student)
While there has been extensive work on the algorithms and data science behind a virtual physician, and some companies have already developed virtual physicians, research on the perceptions of its potential users has been limited. Accordingly, this study aimed to understand how young people think about using a vPCP. We hypothesized that the technology-focused graduate students would be more supportive of the vPCP than medical students. This hypothesis was supported. Graduate students expressed that they would feel comfortable seeking care from a vPCP, while medical students were less trusting, citing data privacy concerns. We also hypothesized that the younger first-year students would be more supportive of using a vPCP than the fourth-year students, as a function of age and education experience. This hypothesis was not supported, as no differences were found between these groups.
Young people are already using health care differently than previous generations and are likely to be early adopters of this virtual primary care technology [9,23]. Appropriately, convenience and cost were two of the major advantages discussed during the focus groups. Another major advantage was efficiency. The students conceptualized the vPCP as being a helping tool for a human physician, something that could fill in on “simpler tasks” including documentation, treating common illnesses, and patient education. They theorized that this would free up the physician for more complex tasks, such as diagnostics, complex patient care, and even research and medication development. These ideas map closely onto the prominent concerns of current physicians, who face high productivity and administrative demands, contributing partly to elevated levels of burnout and moral distress.
Students debated whether a vPCP would alleviate or exacerbate existing care disparities and stigmatization in healthcare. Many students visualized the vPCP as a way to avoid potential stigmatization, since a vPCP would not have any unconscious biases. However, some students thought that the vPCP may exacerbate inequalities and care disparities. Considering the substantial existing disparities in insurance status and access to healthcare in the US, these concerns were reflective of existing literature, suggesting they would likely need to be addressed in order for a vPCP to be widely accepted.
Students felt that a vPCP would have increased diagnostic accuracy over a human physician in many cases. Artificial intelligence systems have already demonstrated similar or higher diagnostic accuracy to human physicians, in primary care and other disciplines. Conversely, though, a major concern was the potential for misdiagnosis. In order to be accurate, an AI-physician system would need to have a sufficient amount of reliable, trustworthy, and validated data input from humans. The technology would also need to be sufficiently developed to understand and interpret the communicated input from human patients. Moreover, vPCPs are limited only to the information that has been entered by humans, and thus cannot know things that humans have not discovered and input to the machine. Accordingly, students wondered what might happen in these instances. They felt it was unclear who they might contact regarding mistakes and other concerns: would it be the company that developed the system, the hospital whose data is used, or someone else?
Students also expressed major concerns regarding data privacy and accountability for mistakes. This is understandable, considering that information hacking and privacy rights are major concerns in broader society today, and privacy issues are a common concern about AI in healthcare more broadly. The students cited examples of large-scale information theft and unauthorized information sharing, and for those reasons, some students were hesitant to share their health information with an artificial intelligence system. Many students noted that they would not automatically trust a vPCP and that transparency would be required to garner their trust.
Another perceived disadvantage for many of the participants was the lack of human connection that would come with using a vPCP. Many students saw this connection as an integral part of their healthcare experience, especially with respect to mental health. The loss of humanity in medicine is also a concern of many who are wary of artificial intelligence in healthcare. However, some of the data science/engineering students did not share this concern and thought patients would eventually adapt, just as humans have adapted to other technologies such as calculators, cell phones, and personal computers. Regardless of these concerns, all groups thought that a vPCP could be a valuable tool working in tandem with human physicians, rather than replacing them. This echoes the existing literature on the role of artificial intelligence in healthcare [1,3,35].
These results offer insight into the potential development and implementation of virtual primary care systems. Some of the advantages, such as increased efficiency and accuracy, have already been proposed in the literature as benefits of using AI in healthcare. However, a few of the major concerns, such as data safety, accountability, and health equity, have not previously been documented as patient concerns, although primary health care providers and health informatics experts have been shown to hold similar concerns about AI systems in patient care. Healthcare systems, AI developers, and policymakers should take these concerns into consideration as virtual care develops. Organizations should consider the perceptions of these consumers and make the necessary adjustments to their systems. For example, transparent and easy-to-understand privacy policies should be provided to users before their first use. All data used by the vPCP should be sufficient for reliable diagnoses and checked and verified by multiple disinterested parties, and these conditions should be explained to all patients using the system.
The students also discussed how virtual physicians have the potential to either exacerbate or alleviate existing care disparities, considering both the AI system itself and the context in which it is used. While a vPCP could be an affordable option for those who are not covered by health insurance, it is important that vPCPs do not replace in-person care for these populations while more privileged patients remain free to choose their method of care. In addition, data should not be used or shared in ways that could disadvantage certain groups of people, such as in decisions about eligibility for health insurance coverage. While the vPCP may expand access to care, there is still potential for bias.
In addition, care must be taken during the development of AI physician systems to ensure that data from a range of gender identities and racial and ethnic backgrounds are included. The impact of relying on non-diverse datasets has already been demonstrated in medical research. For example, a 2016 study showed that only about 5% of genetic variants associated with asthma in European Americans were replicated in African Americans. This means that evidence-based treatment strategies for asthma, developed in research on mostly white European-Americans, are likely not the best fit for African-Americans. Correspondingly, deaths due to asthma in the U.S. are nearly 10 times higher in non-White children and over 2 times higher in non-White adults. Moreover, there have already been recognized instances of racial bias in healthcare AI. One study found that a major risk-prediction algorithm assigned lower risk scores to Black patients than to White patients with comparable health conditions, a disparity attributable to choices made in the algorithm’s initial design. There are many more examples of the impact of such disparities in research [39,42,43] and in AI [44,45], and it is crucial not to repeat this oversight when developing a virtual physician.
According to the focus group participants, setting standards for transparency and accountability would be necessary before they would use a virtual physician. This should start early in the development of the virtual physician and continue throughout its implementation and use, so as to reduce and prevent associated harms [37,46]. One potential barrier to complete transparency is known as the “black box” of AI, wherein the decision-making processes used by AI are so complex that even engineers cannot decipher them [37,46]. Virtual physicians should be designed so that diagnoses and other decisions can be explained. These explanations could be provided to patients on request, who could then share them with a human physician for a second opinion. If it is not possible for decisions to be explained, extreme care should be taken during development of the AI to reduce the risk of errors. The diagnostic sensitivity and specificity of the vPCP should also be explicitly published, and both providers and patients should be educated about its accuracy [37,46]. A reporting structure should be established between implementers of the vPCP and its developers, and implementers should be expected to report instances of social bias or poor decision-making [37,46]. Of course, these are just a fraction of the ethical issues that should be considered before implementation of virtual physicians.
Considering the ongoing COVID-19 pandemic, these results are all the more important. Interestingly, the potential for virtual primary care to be used in epidemiology and disease control was a theme that emerged in the Future of vPCP category, discussed in several of the focus groups. One student noted that the vPCP would be advantageous in active infectious disease situations, as patients could receive care without the risk of healthcare workers becoming infected. Now, many primary care clinics have shifted quickly to telemedicine, and AI-based screening tools for COVID-19 are accessible on the Internet [47,48]. People worldwide may have to practice social distancing to some extent for months to come. In addition, high infection rates are creating high demand for physicians. Virtual primary care is arguably more relevant now than ever. Patient perceptions like those in this study should be used to optimize vPCPs for future use. Patients will ultimately decide whether or not to use these systems, so their opinions and concerns should be addressed during the development and implementation of virtual primary care.
Strengths and limitations
To the best of our knowledge, this is the first study to explore young adults’ perceptions of using vPCPs for health care. Specifically, this study explored perceptions of using a virtual primary care physician among advanced degree students. Since these groups had an informed understanding of either healthcare or engineering, they provided a good starting point to inform future research about patient perceptions of AI in healthcare and virtual physicians. Considering that these groups are already potential users and will soon enter the workforce as potential developers of vPCPs, their opinions are especially interesting. However, it may also be considered a limitation of the study that participants were young adults in graduate programs. Although they represented different disciplines, their perceptions may not be generally representative of young adults of a similar age, especially those with limited access to higher education. Future research should seek perceptions from patients of all ages and educational backgrounds.
Supporting information

S1 Appendix. Focus group questions.
Acknowledgments

The authors would like to thank the focus group participants for their participation.
References

- 1. Lin SY, Mahoney MR, Sinsky CA. Ten ways artificial intelligence will transform primary care. J Gen Intern Med. 2019 Aug;34(8):1626–30. pmid:31090027
- 2. Lau AYS, Staccini P, Section Editors for the IMIA Yearbook Section on Education and Consumer Health Informatics. Artificial intelligence in health: new opportunities, challenges, and practical implications. Yearb Med Inform. 2019 Aug;28(01):174–8. pmid:31419829
- 3. Yu K-H, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng. 2018 Oct;2(10):719–31. pmid:31015651
- 4. Lee K. COVID-19 will accelerate the AI health care revolution. Wired. 2020 May 22 [cited 2020 Oct 6]. Available from: https://www.wired.com/story/covid-19-will-accelerate-ai-health-care-revolution/.
- 5. National Institutes of Health. NIH harnesses AI for COVID-19 diagnosis, treatment, and monitoring. 2020 Aug 5 [cited 2020 Oct 6]. Available from: https://www.nih.gov/news-events/news-releases/nih-harnesses-ai-covid-19-diagnosis-treatment-monitoring.
- 6. Wosik J, Fudim M, Cameron B, Gellad ZF, Cho A, Phinney D, et al. Telehealth transformation: COVID-19 and the rise of virtual care. J Am Med Inform Assoc. 2020 Jun 1;27(6):957–62. pmid:32311034
- 7. Winn AN, Somai M, Fergestrom N, Crotty BH. Association of use of online symptom checkers with patients’ plans for seeking care. JAMA Netw Open. 2019 Dec 27;2(12):e1918561. pmid:31880791
- 8. Dorsey ER, Topol EJ. State of telehealth. N Engl J Med. 2016 Jul;375(2):154–61. pmid:27410924
- 9. Blue Cross Blue Shield Association. The Health of Millennials. 2019 Apr [cited 2020 Apr 5]. Available from: https://www.bcbs.com/sites/default/files/file-attachments/health-of-america-report/HOA-Millennial_Health_0.pdf.
- 10. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019 Jun;6(2):94–8. pmid:31363513
- 11. Ladika S. Virtual Primary Care ‘Visits’? That Future Is Already Here. Managed Care. 2020 Feb 7 [cited 2020 Mar 27]. Available from: https://web.archive.org/web/20200215232202/https://www.managedcaremag.com/archives/2019/12/virtual-primary-care-visits-future-already-here.
- 12. Razzaki S, Baker A, Perov Y, Middleton K, Baxter J, Mullarkey D, et al. A comparative study of artificial intelligence and human doctors for the purpose of triage and diagnosis. ArXiv:180610698. 2018 Jun 27 [cited 2020 Apr 8]. Available from: http://arxiv.org/abs/1806.10698.
- 13. Lagasse J. Artificial intelligence in healthcare projected to be worth more than $27 billion by 2025. Healthcare Finance. 2019 [cited 2020 Apr 7]. Available from: https://www.healthcarefinancenews.com/news/artificial-intelligence-healthcare-projected-be-worth-more-27-billion-2025.
- 14. Liu X, Faes L, Kale A, Wagner S, Fu DJ, Bruynseels A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. The Lancet Digital Health. 2019;1(6):e271–e297.
- 15. Walker RC, Tong A, Howard K, Palmer SC. Patient expectations and experiences of remote monitoring for chronic diseases: systematic review and thematic synthesis of qualitative studies. Int J Med Inform. 2019 Apr;124:78–85. pmid:30784430
- 16. Fagherazzi G, Ravaud P. Digital diabetes:perspectives for diabetes prevention, management and research. Diabetes Metab. 2019 Sep;45(4):322–9. pmid:30243616
- 17. McKinney SM, Sieniek M, Godbole V, Godwin J, Antropova N, Ashrafian H, et al. International evaluation of an AI system for breast cancer screening. Nature. 2020 Jan; 577(7788):89–94. pmid:31894144
- 18. Haenssle HA, Fink C, Toberer F, Winkler J, Stolz W, Deinlein T, et al. Man against machine reloaded: performance of a market-approved convolutional neural network in classifying a broad spectrum of skin lesions in comparison with 96 dermatologists working under less artificial conditions. Ann Oncol. 2020 Jan;31(1):137–43. pmid:31912788
- 19. Efimenko M, Ignatev A, Koshechin K. Review of medical image recognition technologies to detect melanomas using neural networks. BMC Bioinformatics. 2020 Sep;21:270. pmid:32921304
- 20. Berchick ER, Barnett JC, Upton RD. Health insurance coverage in the United States: 2018. U.S. Census Bureau. 2019 [cited 2020 Sep 25]. Available from: https://www.census.gov/content/dam/Census/library/publications/2019/demo/p60-267.pdf.
- 21. Uscher-Pines L, Mehrotra A. Analysis of Teladoc use seems to indicate expanded access to care for patients without prior connection to a provider. Health Aff. 2014 Feb;33(2):258–64. pmid:24493769
- 22. Vogels E. Millennials Stand Out for Their Technology Use, but Older Generations also Embrace Digital Life. Pew Research Center. 2019 [cited 2020 Apr 5]. Available from: https://www.pewresearch.org/fact-tank/2019/09/09/us-generations-technology-use/.
- 23. Kennedy B, Funk C. 28% of Americans Are ‘Strong’ Early Adopters of Technology. Pew Research Center. 2016 [cited 2020 Apr 5]. Available from: https://www.pewresearch.org/fact-tank/2016/07/12/28-of-americans-are-strong-early-adopters-of-technology/.
- 24. Martinez KA, Rood M, Jhangiani N, Kou L, Rose S, Boissy A. Patterns of use and correlates of patient satisfaction with a large nationwide direct to consumer telemedicine service. J Gen Intern Med. 2018.
- 25. Polinski JM, Barker T, Gagliano N, Sussman A, Brennan TA, Shrank WH. Patients’ satisfaction with and preference for telehealth visits. J Gen Intern Med. 2016 Mar;31(3):269–75. pmid:26269131
- 26. Lewis KL, Han PKJ, Hooker GW, Klein WLP, Biesecker LG, Biesecker BB. Characterizing participants in the Clinseq genome sequencing cohort as early adopters of a new health technology. PLoS One. 2015 Jul;10(7): e0132690. pmid:26186621
- 27. Rogers E. Diffusion of innovations. 5th ed. New York, NY: Free Press; 2003.
- 28. Bowen GA. Naturalistic inquiry and the saturation concept: a research note. Qual Res. 2008 Feb;8(1):137–52.
- 29. Boyatzis R. Transforming qualitative information: thematic analysis and code development. Thousand Oaks, CA: Sage Publications; 1998.
- 30. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007 Sep 16;19(6):349–57. pmid:17872937
- 31. Guba E, Lincoln Y. Fourth generation evaluation. Newbury Park, London, and New Delhi: Sage Publications; 1989.
- 32. Shanafelt TD, West CP, Sinsky C, Trockel M, Tutty M, Satele DV, et al. Changes in burnout and satisfaction with work-life integration in physicians and the general US working population between 2011 and 2017. Mayo Clin Proc. 2019 Sep;94(9):1681–94. pmid:30803733
- 33. Whitehead PB, Herbertson RK, Hamric AB, Epstein EG, Fisher JM. Moral distress among healthcare professionals: report of an institution-wide survey. J Nurs Scholarsh. 2015;47(2):117–25. pmid:25440758
- 34. Office of Disease Prevention and Health Promotion. Access to health services. HealthyPeople.gov. [cited 2020 Apr 9]. Available from: https://www.healthypeople.gov/2020/topics-objectives/topic/Access-to-Health-Services.
- 35. Israni ST, Verghese A. Humanizing artificial intelligence. JAMA. 2019;321(1):29. pmid:30535297
- 36. Liyanage H, Liaw S, Jonnagaddala J, Schreiber R, Kuziemsky C, Terry AL, et al. Artificial intelligence in primary health care: perceptions, issues, and challenges. Yearb Med Inform. 2019;28(1):41–46. pmid:31022751
- 37. Johnson SLJ. AI, machine learning, and ethics in health care. J Leg Med. 2019;39(4):427–41. pmid:31940250
- 38. White MJ, Risse-Adams O, Goddard P, Contreras MJ, Adams J, Hu D, et al. Novel genetic risk factors for asthma in African American children: Precision medicine and the SAGE II Study. Immunogenetics. 2016;68(6–7):391–400. pmid:27142222
- 39. Jacewicz N. Why are health studies so white? The Atlantic. 2016 Jun 26 [cited 2020 Sep 28]. Available from: https://www.theatlantic.com/health/archive/2016/06/why-are-health-studies-so-white/487046/.
- 40. Office of Minority Health. Asthma and African Americans. 2018 [cited 2020 Sep 28]. Available from: https://minorityhealth.hhs.gov/omh/browse.aspx?lvl=4&lvlid=15.
- 41. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019 Oct 25;366(6464):447–53. pmid:31649194
- 42. Hoffman KM, Trawalter S, Axt JR, Oliver MN. Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites. PNAS. 2016;113(16):4296–301. pmid:27044069
- 43. Mehta LS, Beckie TM, DeVon HA, Grines CL, Krumholz HM, Johnson MN, et al. Acute myocardial infarction in women: a scientific statement from the American Heart Association. Circulation. 2016;133(9):916–47. pmid:26811316
- 44. Röösli E, Rice B, Hernandez-Boussard T. Bias at warp speed: how AI may contribute to the disparities gap in the time of COVID-19. J Am Med Inform Assoc. 2020 Aug 17 [cited 2020 Oct 7]. [Epub ahead of print]. pmid:32805004
- 45. Zou J, Schiebinger L. AI can be sexist and racist—it’s time to make it fair. Nature. 2018 Jul;559(7714):324–6. pmid:30018439
- 46. Lysaght T, Lim HY, Xafis V, Ngiam KY. AI-assisted decision-making in healthcare: the application of an ethics framework for big data in health and research. Asian Bioethics Review. 2019;11(3):299–314.
- 47. Greenhalgh T, Koh GCH, Car J. Covid-19: a remote assessment in primary care. BMJ. 2020;368:m1182. pmid:32213507
- 48. Centers for Disease Control and Prevention. Testing for COVID-19. Available from: https://www.cdc.gov/coronavirus/2019-ncov/symptoms-testing/testing.html.
- 49. Resnick B. Scientists warn we may need to live with social distancing for a year or more. Vox. 2020 Mar 17 [cited 2020 Apr 10]. Available from: https://www.vox.com/science-and-health/2020/3/17/21181694/coronavirus-covid-19-lockdowns-end-how-long-months-years.
- 50. Gold J. Surging health care worker quarantines raise concerns as coronavirus spreads. Kaiser Health News. 2020 Mar 9 [cited 2020 Apr 10]. Available from: https://khn.org/news/surging-health-care-worker-quarantines-raise-concerns-as-coronavirus-spreads/.