Correction
30 Dec 2025: Gulamali F, Kim JY, Pejavara K, Thomas C, Mathur V, et al. (2025) Correction: Eliminating the AI digital divide by building local capacity. PLOS Digital Health 4(12): e0001173. https://doi.org/10.1371/journal.pdig.0001173
Abstract
Over the past few years, healthcare delivery organizations (HDOs) have been adopting and integrating AI tools, including clinical tools for tasks like predicting risk of inpatient mortality and operational tools for clinical documentation, scheduling, and revenue cycle management, to fulfill the quintuple aim. The expertise and resources to do so are often concentrated in academic medical centers, leaving patients and providers in lower-resource settings unable to fully realize the benefits of AI tools. There is a growing divide in HDOs' ability to conduct AI product lifecycle management, driven by a gap in the resources and capabilities (e.g., technical expertise, funding, data infrastructure) required to do so. In previous technological shifts in the United States, including electronic health record and telehealth adoption, there were similar disparities in rates of adoption between higher- and lower-resource settings. The government responded to these disparities successfully by creating centers of excellence to provide technical assistance to HDOs in rural and underserved communities. Similarly, a hub-and-spoke network connecting HDOs with technical, regulatory, and legal support services from vendors, law firms, and other HDOs with more AI capabilities can enable all settings to be well equipped to adopt AI tools. Health AI Partnership (HAIP) is a multi-stakeholder collaborative seeking to promote the safe and effective use of AI in healthcare. HAIP has launched a pilot program implementing a hub-and-spoke network, but targeted public investment is needed to enable capacity building nationwide. As more HDOs strive to utilize AI tools to improve care delivery, federal and state governments should support the development of hub-and-spoke networks to promote widespread, meaningful adoption of AI across diverse settings. This effort requires coordination among all entities in the health AI ecosystem to ensure these tools are implemented safely and effectively and that all HDOs realize their benefits.
Citation: Gulamali F, Kim JY, Pejavara K, Thomas C, Mathur V, Eigen Z, et al. (2025) Eliminating the AI digital divide by building local capacity. PLOS Digit Health 4(10): e0001026. https://doi.org/10.1371/journal.pdig.0001026
Editor: Po-Chih Kuo, National Tsing Hua University, TAIWAN
Published: October 23, 2025
Copyright: © 2025 Gulamali et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the Gordon and Betty Moore Foundation (#10849 to MPS, MP, and SB). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: I have read the journal’s policy and the authors of this manuscript have the following competing interests: MPS is a co-inventor of intellectual property licensed by Duke University to Clinetic, Inc., KelaHealth, Inc., Cohere-Med, Inc., and Vega Health, Inc. MPS holds equity in Clinetic, Inc. and Vega Health, Inc. SB is a co-inventor of intellectual property licensed by Duke University to Clinetic, Inc., Cohere-Med, Inc., and Vega Health, Inc. SB holds equity in Clinetic, Inc.
Abbreviations: HDOs, healthcare delivery organizations; LLMs, large language models; EHRs, electronic health records; RECs, Regional Extension Centers; COEs, Centers of Excellence; TRCs, Telehealth Resource Centers; HRSA, Health Resources and Services Administration.
Introduction
The emergence of generative AI offers new opportunities to advance the quintuple aim [1], which is defined by improving population health, enhancing the care experience, reducing costs, reducing burnout, and advancing health equity. Healthcare delivery organizations (HDOs) have seized opportunities to achieve these advancements at scale by investing in new partnerships to implement large language models (LLMs) that complement other AI tools.
Healthcare in the United States continues to suffer from low quality, low value care [2]. New investments in AI tools seek to help optimize the delivery of high-value interventions [3]. With record levels of physician burnout, ambient AI scribing solutions—AI tools that transcribe physician-patient interactions and summarize them into encounter notes—hold promise to help clinicians shift their focus from the documentation burden to enhancing patient interaction [4]. LLMs can also summarize information from discharge notes for patient accessibility [5] and retrieve information on patients’ health-related social needs from clinical notes to more effectively deploy resources to support patients [6]. Population health challenges are being addressed by other types of AI tools that can identify patients at risk for rapid disease progression [7] and expand patient access to psychotherapy [8].
Driven by urgency and competitive pressures to address healthcare challenges, many organizations are looking to adopt AI tools. According to one industry report, 76% of major payers and providers surveyed are seeking to establish AI pilots in the next year [9]. HDOs with the expertise and resources to implement administrative and clinical AI tools effectively are optimizing efficiency and improving quality of care. However, many HDOs are poorly equipped to conduct local AI quality management and are either not implementing AI tools at all or conducting only limited testing and monitoring. The latter can undermine the very goals AI is meant to advance, such as improving efficiency and delivering high-quality care, by increasing burden through alert fatigue, worsening inequities through biased predictions, negatively affecting care, and increasing liability for physicians and organizations. The same AI clinical decision support tool, implemented with workflows optimized for the local context, has been shown to reduce sepsis mortality in one setting and to be ineffective in another [10,11]. Recent studies on the ROI of AI scribes also show mixed results across settings [12,13]. Substantial expertise is needed to realize the value of these tools, and ineffective AI implementations may negatively impact patients and clinicians, reinforce existing biases [14], and discourage HDO uptake of beneficial technologies. While HDOs with the resources and capabilities to leverage AI responsibly can achieve advancements in care, other HDOs may lag, driving a deeper digital divide and a widening gap in quality of care.
Digital divide in AI adoption
The digital divide is typically defined by lack of access to internet services, perpetuating socioeconomic disparities [15]. These disparities may be exacerbated as HDOs lacking personnel with relevant expertise, resources to purchase and localize tools, organizational processes and capabilities to conduct AI product lifecycle management, and IT infrastructure, including electronic health records (EHRs), will be unable to fully benefit from AI [16]. As of 2021, more than 20% of office-based practices had not adopted 2015 Certified EHR Technology, which serves as a backbone for many AI applications [17]. This disparity contributes to an AI divide, in which HDOs with certified EHRs may have access to AI tools, while those without are left at a disadvantage. The World Health Organization's 2021 guidance on the ethics and governance of AI for health identifies the impact of the digital divide on AI adoption at a global scale and recognizes the potential for AI to improve health outcomes if greater efforts are made to address this divide [18].
The AI divide is particularly evident in one study, which found that only 61% of hospitals conduct local evaluations to assess for accuracy and 44% to assess for bias on most or all implemented AI products [19]. This finding suggests a gap in the resources or capabilities available to different HDOs to properly evaluate AI tools. Considering that hospitals more commonly reported local evaluation of inpatient risk tools than of outpatient administrative tools, the study's authors suggest that one factor contributing to lower evaluation rates may be a misperception of the risk of outpatient tools. The AI divide, furthered by lack of awareness, resources, and capabilities to evaluate AI tools, can lead to unequal access to safe and effective AI use across different demographic subgroups, particularly among at-risk populations. As a result, there is a high risk of ineffective implementations that worsen inequities.
Lessons from previous technological shifts in healthcare
Prior technological advancements that transformed healthcare delivery include the adoption and implementation of EHRs and telehealth. Both required federal support at the individual practice level, as well as incentives and infrastructure development to promote adoption. Federal programs that supported these transitions offer valuable examples for how the United States can rapidly facilitate the safe and effective adoption of AI tools.
The HITECH Act, enacted in 2009 and implemented in 2011, provided incentive payments to providers who adopted EHR systems. Seventy-five percent of providers report that adopting EHR systems enabled them to deliver better patient care [20]. While these incentives drove adoption overall, small, non-teaching, and rural hospitals continued to lag behind [21]. To address this issue, the Office of the National Coordinator for Health IT (ASTP/ONC) established Regional Extension Centers (RECs) in 2010, aiming to increase EHR adoption in rural and underserved settings [22]. RECs were funded to deliver technical, legal, and financial assistance; education and training; assistance in vendor selection; privacy and security support; and more to help practices achieve meaningful use of EHRs. The SAFER guidelines checklist provided additional resources and information that stakeholders required, enabling practices to track their progress capturing value from EHRs [23]. By 2014, 89% of REC participants had adopted all or part of EHRs compared to 58% of non-participants [24]. These results demonstrated that financial incentives alone could not drive EHR adoption across all settings. Complementary professional services were essential to support smaller practices, and these services will be needed to promote responsible AI adoption across HDOs as well.
Similarly, Telehealth Centers of Excellence (COEs) and the National Consortium of Telehealth Resource Centers (TRCs) were created to provide professional support services, education, and training to facilitate digital health adoption in rural and underserved communities. The COEs, housed at the Medical University of South Carolina and the University of Mississippi Medical Center, established telehealth-enabled locations nationwide and evaluated Telehealth Palliative Care and Tele-Behavioral Health programs [25].
The Health Resources and Services Administration (HRSA) provided $600,000 in the first year and $16.25 million over five years to support COEs. The 12 regional and two national TRCs, also funded by HRSA, played a critical role in advancing telehealth adoption during the COVID-19 pandemic [26]. These centers were complemented by infrastructure investments with the 2021 Bipartisan Infrastructure Law [27] and American Rescue Plan [28], which expanded access to broadband. Furthermore, the 2020 1135 Waiver [29] allowed Medicare to cover telehealth services more broadly. Philanthropies like the California Health Care Foundation complemented government funding to further catalyze adoption and evaluate program impact [30]. As a result, the percent of physicians whose practices had telehealth capabilities increased from 25.1% in 2018 to 74.4% in 2022 [31]. This growth in telehealth adoption also required a multi-pronged approach consisting of professional support services, infrastructure investments, and reimbursement changes.
In these major technological advancements in healthcare, capacity building programs played a crucial role in diffusing expertise to rural and underserved settings. They often took the form of a hub-and-spoke model, where RECs, COEs, and TRCs served as hubs, providing resources, professional services, and a peer learning community, while HDOs functioned as the spokes, receiving the support. Congress worked closely with federal agencies to empower HDOs to adopt EHR systems and telehealth through infrastructure investments, incentives, and hub-and-spoke networks. These EHR and telemedicine investments created significant value and ROI for a variety of stakeholders [32,33]. When addressing the current technological shift toward AI in healthcare, it is essential to draw insights from these examples of how public and private entities partnered to promote EHR and telemedicine adoption. While the specific capabilities and challenges involved in adopting AI differ from those posed by EHR and telehealth systems, the capacity building approach proved valuable in those earlier transitions and is worth pursuing in this case as well.
Building capabilities before enforcing compliance
Currently, states are addressing the risks of AI in healthcare by strengthening protections related to patient privacy, consent, transparency, and non-discrimination (Table 1). At the federal level, there are protections whose degree of enforcement may vary with the change in administration (Table 2). The role of federal agencies is further complicated by the elimination of Chevron deference, under which courts previously deferred to federal agencies' interpretations of ambiguous statutes [34]. This could slow enforcement of regulatory compliance at the federal agency level and put the onus on Congress to enact legislation in response to the risks of AI. Agency efforts may shift toward providing support to enable safe and effective use of AI within HDOs.
The patchwork of current regulatory efforts is mostly focused on one-size-fits-all compliance, where market access controls limit commercial availability of AI products and HDOs bear some burden to ensure that AI products are safe, effective, and equitable. This crude approach falsely assumes that all HDOs are equally equipped to bear this compliance burden and centers on the product rather than the numerous factors required for implementation. Just as ONC established RECs to aid smaller, lower-resourced practices lagging in EHR adoption after the HITECH Act was passed, there must be accompanying investment in capacity building to help HDOs meet AI-related compliance requirements. Even in 2017, the EHR disparity persisted: 93% of small, rural non-federal acute care hospitals had certified health IT compared to 99% of large hospitals [35]. If regulators and legislative bodies pursue a compliance-first approach that requires local validation of AI, low-resource HDOs will be unable to assess local AI performance and will be unable to utilize AI. On the other hand, if regulators and legislative bodies pursue a capacity building-first approach to support local validation of AI, low-resource HDOs will receive the necessary technical assistance to assess local AI performance and will be able to utilize AI. A compliance-first approach will widen the digital divide, whereas a capacity building-first approach can eliminate the digital divide.
This article advocates for local capacity building through a hub-and-spoke model for technical, operational, and educational assistance so that all HDOs can be empowered to realize the benefits of AI. Only after significant investment in capacity building and assistance establishing foundational AI capabilities can HDOs broadly bear complex compliance responsibilities.
Hub-and-spoke model: Sharing AI lifecycle management processes
Defining a hub-and-spoke model for capacity building
Capacity building requires engaging the appropriate stakeholders within an organization to build expertise and capabilities. This can occur through a hub-and-spoke model in which hubs (which possess the necessary expertise and capabilities) provide training and support to spokes (which lack these attributes), enabling teams within spokes to implement AI tools within their organizations (Fig 1). Hubs can be specialized and interconnected to pool diverse expertise and balance workloads at a national level. The exchange of expertise would be bidirectional: spokes identify and raise awareness of challenges with adoption and unmet needs, and share innovative approaches to conducting AI product lifecycle management (procurement, development, integration, and monitoring) in resource-limited environments. The Health AI Partnership (HAIP) has detailed challenges and approaches across the AI product lifecycle, analyzing nearly 90 interviews to identify eight key decision points that HDOs face when adopting and implementing an AI tool [16].
(A) The diagram above depicts a hub-and-spoke network where the coordinating center forms connections between hubs (professional services, payers, universities and professional societies, vendors, and other HDOs with more expertise in AI adoption and implementation) to spokes (community hospitals, federally qualified health centers, and other HDOs with less expertise in AI adoption and implementation). Black lines represent connection from hubs and spokes to the coordinating center, and orange lines represent connections from hubs to spokes facilitated by the coordinating center. Spokes may independently form partnerships with hubs not facilitated by the coordinating center, which is not illustrated in this diagram. (B) The diagram below demonstrates an example network for a community hospital seeking support in adopting and implementing a sepsis risk prediction tool. HDOs include university-affiliated medical centers as well as non-affiliated ones that have developed thorough expertise from having implemented a sepsis AI tool.
The hub-and-spoke approach, establishing resource centers that coordinate support services delivery to lower-resource HDOs, has succeeded in augmenting rates of EHR adoption after the HITECH Act was passed and advancing telehealth adoption, particularly during the pandemic. Applying this approach to AI capacity building can help address this new technological shift and reduce the AI divide by providing all HDOs with the specific set of support services they need to address the unique challenges of implementing AI tools safely and effectively.
The network of hub stakeholders
Hubs have the potential to provide a range of technical and operational support services. They will require initial and ongoing funding to provide support services to external spoke sites, and such funding could be linked to the scope of services offered. Technical services could empower spoke sites to interface with vendors; identify the best solution within a product category; validate the performance of an AI solution locally; conduct AI risk assessments; and monitor an AI product post-implementation. Operational services could include supporting spokes to navigate ethical, legal, and regulatory challenges; develop and disseminate tools for program evaluation; and manage the AI product lifecycle. To enhance scalability, hubs can specialize in specific service areas (e.g., technical, regulatory, change management), specific AI use cases (e.g., sepsis care, chronic disease progression, mental health), or care delivery types (e.g., urban safety net, rural critical access hospital). Multiple hubs can collaborate to serve a spoke site, bringing together the necessary multidisciplinary capabilities and expertise.
Hubs may consist of HDOs that have invested significantly in developing AI capabilities as well as other expert stakeholders within the AI ecosystem to ensure broad adoption of AI in healthcare. Below, we describe potential relevant stakeholders.
Technology firms.
First, technology firms can play a critical role by providing tooling and infrastructure to integrate, evaluate, and continuously monitor AI products. AI product vendors should assume some of these responsibilities through service contracts, while cloud service providers can offer tooling to support these activities. Technology firms and universities are also spurring open-source innovation by releasing software that allows local evaluation of AI models with locally curated data for sensitivity, specificity, precision, and recall; publishing externally validated models; and partnering with HDOs to operationalize responsible AI principles [36–38]. Once AI tools are implemented, liability often falls on clinicians and local HDO facilities [39]. Both are often poorly equipped to safeguard against automation bias and inappropriate use of AI, especially as AI products increasingly use complex methods like neural networks and LLMs [40]. An HDO's ability to make informed decisions when assessing safety and ROI also requires greater transparency from vendors. This includes the ability to assess bias, evaluate and monitor local product performance, and provide educational support to frontline users with tools like Model Facts labels to enhance targeted clinical actions [41].
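To make the idea of local evaluation concrete, the sketch below shows the kind of minimal check a spoke site might run, with hub support, against a vendor model's predictions on a locally curated cohort. It is an illustrative assumption rather than the workflow of the cited open-source tools: the file name, column names, decision threshold, and subgroup variable are all hypothetical placeholders.

```python
# Illustrative sketch only (not the cited tools' API): a minimal local evaluation a
# spoke site might run on a vendor model's predictions against locally curated labels.
# The file name, column names, threshold, and subgroup variable are hypothetical.
import pandas as pd
from sklearn.metrics import confusion_matrix, roc_auc_score

df = pd.read_csv("local_validation_cohort.csv")    # locally curated cohort (hypothetical)
y_true = df["outcome"]                             # locally adjudicated ground truth (0/1)
y_score = df["model_risk_score"]                   # vendor model output (probability)
y_pred = (y_score >= 0.5).astype(int)              # hypothetical decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)    # recall: share of true cases the model flags
specificity = tn / (tn + fp)    # share of non-cases the model correctly leaves unflagged
precision = tp / (tp + fp)      # share of flags that are true cases
print(f"Sensitivity/recall {sensitivity:.2f} | Specificity {specificity:.2f} | "
      f"Precision {precision:.2f} | AUROC {roc_auc_score(y_true, y_score):.2f}")

# A simple bias check: recompute sensitivity within locally defined subgroups.
for group, sub in df.groupby("demographic_group"):     # hypothetical subgroup column
    stn, sfp, sfn, stp = confusion_matrix(
        sub["outcome"], (sub["model_risk_score"] >= 0.5).astype(int), labels=[0, 1]
    ).ravel()
    rate = stp / (stp + sfn) if (stp + sfn) else float("nan")
    print(f"{group}: sensitivity {rate:.2f}")
```

Hub staff and the open-source packages referenced above would typically layer calibration, clinical utility, and workflow measures on top of a basic check like this.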
Payers.
Second, payers and states can identify AI use cases that are high value for HDOs and payers while funding assistance programs to support broad adoption. Payers like Blue Cross Blue Shield of Michigan are providing incentives to HDOs using high-value AI tools [42]. These kinds of payer-driven programs that incentivize use of risk-stratification tools can improve patient care and reduce costly outcomes such as readmissions and emergency department visits. Coordination between payers to prioritize similar AI use cases with consistent performance measures can reduce the burden on HDOs procuring different tools for different payer programs. Payers with significant AI expertise may also be equipped to provide technical support services as a hub to spoke sites providing care to covered populations.
Professional service firms.
Third, professional service firms with implementation, regulatory, and data science expertise can provide site-specific support when working with proprietary and confidential data. These firms may provide a base set of services through a hub, with additional specialized services requiring further contracting.
Universities and professional societies.
Fourth, universities and professional societies can develop training programs to incorporate AI product lifecycle management skills into professional licensing and certification. Future generations of physicians, nurses, IT leaders, and healthcare administrators will need to understand how to manage AI systems integrated into care delivery.
Coordinating centers.
Hub sites would form a connected community that aligns recommendations and provides consistently high-quality support services across the nation. Objective measures of organizational maturity will be needed to identify potential hub sites that have invested thoroughly in AI capabilities and can provide AI technical and operational assistance. Ongoing monitoring and reassessment of hub sites would ensure high-quality AI assistance to spoke sites. Significant effort will be required to coordinate diverse hub sites and ensure an equitable distribution of resources; this coordination will be facilitated by coordinating entities. Participation is beneficial for all stakeholders due to facilitation of contracts, increased ROI from utilizing AI tools, and reduced burden of adhering to varying standards.
The network of spokes
Spokes might include community health centers, federally qualified health centers, community hospitals, Indian Health Service (IHS) facilities, VA Medical Centers (VAMCs), critical access hospitals, and other health systems with limited expertise in AI adoption. Ultimately, a set of freely accessible resources could be commissioned from experts within hub sites to benefit all HDOs. Organizational capabilities are dynamic and can evolve. Processes can be developed describing the steps a spoke site can take to transition to a hub site once they have sufficient internal capabilities and expertise. This train-the-trainer approach promotes sustainability and scalability of the hub-and-spoke model.
For spoke sites to benefit from hub services, each spoke would be best served by maintaining an in-house interdisciplinary team that includes clinical and operational staff (Table 3). This cross-functional team assumes responsibility for the quality and safety of AI implementations and collaborates with hubs to effectively evaluate and continuously monitor AI solutions within the local context. The points during implementation at which members of this team become involved are described in Table 3. The core team must have a base set of capabilities and can be composed of individuals within the spoke site as well as individuals from professional service firms or hub organizations who complement the capabilities of the spoke site. AI product lifecycle management requires clinical and organizational leaders to gain additional expertise in generalized AI risk frameworks and to set up feedback mechanisms for adverse event reporting to proactively ensure tools are working safely, effectively, and equitably. Spoke sites should have flexibility to pursue AI initiatives that align with their specific needs and organizational priorities. Long-term public sector financing to implement AI solutions and the accompanying IT infrastructure may be necessary to sustain this AI enablement for spoke sites.
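As one deliberately simplified illustration of the continuous monitoring and feedback loop described above, the sketch below flags months in which a deployed tool's alert rate drifts from a baseline window so that the cross-functional team can review it. The log file, column names, score threshold, and drift trigger are assumptions for illustration; real monitoring would also track clinical outcomes and equity measures.

```python
# Illustrative sketch only: a monthly drift check on a deployed model's alert rate.
# File name, columns, score threshold, and the 25% drift trigger are hypothetical.
import pandas as pd

df = pd.read_csv("deployment_log.csv", parse_dates=["encounter_date"])  # hypothetical log
df["alert"] = (df["model_risk_score"] >= 0.5).astype(int)               # hypothetical threshold

# Proportion of encounters that triggered an alert, by calendar month.
monthly = df.set_index("encounter_date")["alert"].resample("MS").mean()
baseline = monthly.iloc[:3].mean()   # first three months as a simple baseline window

for month, rate in monthly.items():
    drift = abs(rate - baseline) / baseline if baseline else float("nan")
    status = "REVIEW" if drift > 0.25 else "ok"   # hypothetical relative-change trigger
    print(f"{month:%Y-%m}: alert rate {rate:.1%} ({status})")
```

In practice, a spoke site would pair automated checks like this with the adverse event reporting channels noted above, so that frontline feedback and quantitative signals reach the same team.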
Coordinating entity for hub–spoke collaboration
Although a decentralized hub-and-spoke model enables dissemination of AI expertise across diverse HDOs, significant coordination efforts will be required to maximize benefit to spoke sites. A single spoke site aiming to implement a single AI product may need to draw upon expertise from multiple hub sites as well as receive support from other stakeholder groups listed above. Coordinating entities will be essential to assemble the capabilities and expertise needed to assist spoke sites through the various stages of the AI product lifecycle and disseminate lessons captured across settings. These coordinating centers can be established and funded by federal agencies (e.g., ONC grants funded RECs, HRSA grants fund TRCs) or state agencies (e.g., state Offices of Rural Health fund telehealth support within states). The agencies establishing these centers must also delineate the structure and authority of these centers. Day-to-day operations of these centers can vary but are likely to include meeting with spoke sites to identify needs while assessing the effectiveness of support services provided by hubs to address those needs.
Implementing the hub-and-spoke model: Health AI Partnership Practice Network pilot program
A philanthropically funded demonstration project testing AI capacity building is already underway. Launched in 2024, the HAIP Practice Network program supports four federally qualified health centers (FQHCs) in southern California, Arizona, Texas, and Minnesota, and one community hospital in North Carolina in navigating the AI product lifecycle [43]. The program aims to build AI product lifecycle management capabilities within spoke sites and create a community of peer support and learning. Each HAIP Practice Network site has prioritized an internal use case, including an FDA-approved medical device, two generative AI scribe tools, and two EHR vendor-developed AI products. HAIP brings together capabilities across multiple hub sites, drawing on real-world AI expertise from HAIP members and contributors to host monthly Health AI Hubs on best practices for all spokes and hubs [44], an AI in Action Series for safety net organizations [45], and office hour sessions between hubs and spokes for more individualized support.
Rather than imposing a compliance-first approach that may restrict innovation and AI use, the HAIP Practice Network supports HDOs in evaluating and integrating AI products prioritized by organizational needs. A rigorous independent evaluation of the program is being conducted so that learnings from this first demonstration project can inform future iterations of AI capacity building. There has been early news coverage of the pilot program detailing the program’s mission and early challenges [46,47]. The findings from the evaluation are not available at this time and will be shared in future work. These findings can shape the design of future hub-and-spoke networks, including the necessary degree and type of engagement from hubs, technical and personnel resources required at spokes, and the role of additional stakeholders.
While the program will generate valuable learnings by providing direct support to five HDOs, there is an urgent need for targeted public investment to scale the approach. There are 1,500 community health centers [48] and 6,200 hospitals [49] across the United States, many of which need AI capacity building to optimize the safe and effective use of AI.
Scaling lessons from the pilot program
In the next 12 months, federal and state agencies can design regional and use case-specific AI capacity-building programs, issue calls for proposals, and fund a portfolio of AI capacity-building demonstration projects. These projects should then be evaluated for improvements in efficiency and quality of care delivery to ensure that different stakeholders are delivering high-quality assistance services to support spoke sites in rural and underserved settings. Outcome measures for program evaluations must focus on scaling safe and effective use of AI rather than just rates of adoption. Not all AI solution integrations will be successful, and AI capacity-building programs can increase investments in AI solutions that create value and eliminate investments in those that do not. While public funding will promote sustainability of the hub-and-spoke network, philanthropy can play a significant role in designing and evaluating programs to demonstrate value as well as determining high-value use cases of AI in various settings. Impact from philanthropy-funded programs can promote public investment and incentivize the development of national programs. As one preliminary example, HAIP is working with The SCAN Foundation and the California Health Care Foundation to develop a technical assistance program for California safety net providers. Similar programs should be designed, funded, implemented, and tested in other regions.
Limitations
This article highlights the need for local capacity building to eliminate the AI digital divide by emphasizing the urgency of this issue and analyzing previous technological shifts. More information is needed from pilot program learnings about how to most efficiently connect resources available in hubs to spokes, how spokes can transition into hubs, how coordinating entities should be structured, and other specific elements of the hub-and-spoke network. Rigorous evaluation of the HAIP pilot program is underway, and findings from this evaluation can provide further insight into how other hub-and-spoke networks should operate. The resources required by spokes and available from hubs differ significantly across settings, so designing and testing network structures is an essential next step to understand how these structures should differ across geographies, resource settings, and AI use cases.
Conclusion
The digital divide between organizations that have the capacity to effectively utilize AI tools to enhance quality of care and address healthcare challenges and those that lack these capabilities is widening and calcifying. Similar to actions taken for EHR and telehealth technologies, there must be public and private investments in technical infrastructure and incentives to support widespread, meaningful adoption of AI across diverse HDO settings. In the past, the federal government invested in telehealth capabilities through HRSA and EHR adoption through ONC. State governments invested in health IT capabilities through state offices of rural health. Federal and state agencies have an opportunity to address the AI digital divide by supporting local capacity building, investing in infrastructure, and providing incentives for adoption. If coordination at the federal level proves politically infeasible, state-run programs piloting hub-and-spoke networks with financial incentives for AI adoption can also be an effective means to generate evidence of different approaches to support the safe and effective implementation of AI across diverse HDOs.
References
- 1. Nundy S, Cooper LA, Mate KS. The quintuple aim for health care improvement: a new imperative to advance health equity. JAMA. 2022;327(6):521.
- 2. Mirror, mirror 2024: a portrait of the failing U.S. health system [Internet]; 2024 [cited 2025 Jan 25]. Available from: https://www.commonwealthfund.org/publications/fund-reports/2024/sep/mirror-mirror-2024
- 3. Health AI Partnership. A summit on AI product lifecycle management in healthcare [Internet]; 2024 [cited 2025 Jan 25]. Available from: https://drive.google.com/file/d/14qL9MYctX76pd0W87p2lONZnasQ21ucB/view?usp=embed_facebook
- 4. Tierney AA, Gayre G, Hoberman B, Mattern B, Ballesca M, Kipnis P, et al. Ambient artificial intelligence scribes to alleviate the burden of clinical documentation. NEJM Catal. 2024;5(3):CAT.23.0404.
- 5. Subramanian RC, Yang DA, Khanna R. Enhancing health care communication with large language models—the role, challenges, and future directions. JAMA Netw Open. 2024;7(3):e240347.
- 6. Guevara M, Chen S, Thomas S, Chaunzwa TL, Franco I, Kann BH, et al. Large language models to identify social determinants of health in electronic health records. NPJ Digit Med. 2024;7(1):1–14.
- 7. Duke Connected Care chronic kidney disease care improvement project [Internet]. Duke Institute for Health Innovation [cited 2025 Jan 25]. Available from: https://dihi.org/project/duke-connected-care-chronic-kidney-disease-care-improvement-project/
- 8. Stade EC, Stirman SW, Ungar LH, Boland CL, Schwartz HA, Yaden DB, et al. Large language models could change the future of behavioral healthcare: a proposal for responsible development and evaluation. NPJ Ment Health Res. 2024;3(1):1–12.
- 9. Inside the C-suite: payer & provider leaders unveil their vision for AI [Internet]. Define Ventures [cited 2025 Jan 25]. Available from: https://www.definevc.com/insights/inside-the-c-suite-payer-provider-leaders-unveil-their-vision-for-ai
- 10. Tarabichi Y, Cheng A, Bar-Shain D, McCrate BM, Reese LH, Emerman C, et al. Improving timeliness of antibiotic administration using a provider and pharmacist facing sepsis early warning system in the emergency department setting: a randomized controlled quality improvement initiative. Crit Care Med. 2022;50(3):418–27.
- 11. Downing NL, Rolnick J, Poole SF, Hall E, Wessels AJ, Heidenreich P, et al. Electronic health record-based clinical decision support alert for severe sepsis: a randomised evaluation. BMJ Qual Saf. 2019;28(9):762–8. pmid:30872387
- 12. Shah SJ, Devon-Sand A, Ma SP, Jeong Y, Crowell T, Smith M, et al. Ambient artificial intelligence scribes: physician burnout and perspectives on usability and documentation burden. J Am Med Inform Assoc. 2025;32(2):375–80.
- 13. Liu T-L, Hetherington TC, Stephens C, McWilliams A, Dharod A, Carroll T, et al. AI-powered clinical documentation and clinicians’ electronic health record experience: a nonrandomized clinical trial. JAMA Netw Open. 2024;7(9):e2432460. pmid:39240568
- 14. Jabbour S, Fouhey D, Shepard S, Valley TS, Kazerooni EA, Banovic N, et al. Measuring the impact of AI in the diagnosis of hospitalized patients: a randomized clinical vignette survey study. JAMA. 2023;330(23):2275–84.
- 15. Sanders CK, Scanlon E. The digital divide is a human rights issue: advancing social inclusion through social work advocacy. J Hum Rights Soc Work. 2021;6(2):130–43. pmid:33758780
- 16. Kim JY, Boag W, Gulamali F, Hasan A, Hogg HDJ, Lifson M, et al. Organizational governance of emerging technologies: AI adoption in healthcare. 2023 ACM Conference on Fairness, Accountability, and Transparency [Internet]. Chicago, IL, USA: ACM; 2023 [cited 2025 Jan 25. ]. p. 1396–417. Available from: https://dl.acm.org/doi/10.1145/3593013.3594089
- 17. Assistant Secretary for Technology Policy. Office-based physician electronic health record adoption [Internet]. HealthIT.gov [cited 2025 Jan 25]. Available from: https://www.healthit.gov/data/quickstats/office-based-physician-electronic-health-record-adoption
- 18. World Health Organization. Ethics and governance of artificial intelligence for health [Internet]. [cited 2025 Jul 15]. Available from: https://www.who.int/publications/i/item/9789240029200
- 19. Nong P, Adler-Milstein J, Apathy NC, Holmgren AJ, Everson J. Current use and evaluation of artificial intelligence and predictive models in US hospitals. Health Aff (Millwood). 2025;44(1):90–8.
- 20. Assistant Secretary for Technology Policy. Improved diagnostics & patient outcomes [Internet]. HealthIT.gov [cited 2025 Jan 25]. Available from: https://www.healthit.gov/topic/health-it-and-health-information-exchange-basics/improved-diagnostics-patient-outcomes
- 21. DesRoches CM, Worzala C, Joshi MS, Kralovec PD, Jha AK. Small, nonteaching, and rural hospitals continue to be slow in adopting electronic health record systems. Health Aff (Millwood). 2012;31(5):1092–9.
- 22. Regional Extension Centers (RECs) [Internet]. HealthIT.gov [cited 2025 Jan 25]. Available from: https://www.healthit.gov/topic/regional-extension-centers-recs
- 23. Assistant Secretary for Technology Policy. SAFER Guides [Internet]. HealthIT.gov [cited 2025 Jan 25]. Available from: https://www.healthit.gov/topic/safety/safer-guides
- 24. Audet AMJ, Bagley B, McLaughlin C, Newcomer K. Evaluation of the Regional Extension Center program; 2016. Available from: https://www.healthit.gov/sites/default/files/Evaluation_of_the_Regional_Extension_Center_Program_Final_Report_4_4_16.pdf
- 25. Telehealth Centers of Excellence [Internet]; 2025 [cited 2025 Jan 25]. Available from: https://telehealthcoe.org/
- 26. National Consortium of Telehealth Resource Centers. Centers [Internet]. [cited 2025 Jan 25]. Available from: https://telehealthresourcecenter.org/centers/
- 27. Federal funding [Internet]. BroadbandUSA [cited 2025 Jan 25]. Available from: https://broadbandusa.ntia.doc.gov/resources/federal/federal-funding
- 28. Rep. Yarmuth JA [D-KY-3]. H.R.1319 - 117th Congress (2021-2022): American Rescue Plan Act of 2021 [Internet]; 2021 [cited 2025 Jan 25]. Available from: https://www.congress.gov/bill/117th-congress/house-bill/1319
- 29. COVID-19 emergency declaration blanket waivers for health care providers.
- 30. Uscher-Pines L, Sousa J, Jones M, Whaley C, Perrone C, McCullough C, et al. Telehealth use among safety-net organizations in California during the COVID-19 pandemic. JAMA. 2021;325(11):1106–7.
- 31. American Medical Association. 74% of physicians work in practices that offer telehealth [Internet]; 2023 [cited 2025 Jan 25]. Available from: https://www.ama-assn.org/practice-management/digital/74-physicians-work-practices-offer-telehealth
- 32. Marks J, Augenstein J, Brown A, Lee S. A framework for evaluating the return on investment of telehealth.
- 33. Adler-Milstein J, Daniel G, Grossmann C, Mulvany C, Nelson R, Pan E, et al. Return on information: a standard model for assessing institutional return on electronic health records. NAM Perspect [Internet]. 2014 [cited 2025 Jan 25. ]; Available from: https://nam.edu/perspectives-2014-return-on-information-a-standard-model-for-assessing-institutional-return-on-electronic-health-records//
- 34. Loper Bright Enterprises v. Raimondo, No. 22-451 (U.S. June 28, 2024).
- 35. Percent of hospitals, by type, that possess certified health IT [Internet]. HealthIT.gov [cited 2025 Aug 6]. Available from: https://www.healthit.gov/data/quickstats/percent-hospitals-type-possess-certified-health-it
- 36. Microsoft Source. New consortium of healthcare leaders announces formation of Trustworthy & Responsible AI Network (TRAIN), making safe and fair AI accessible to every healthcare organization [Internet]; 2024 [cited 2025 Jan 25]. Available from: https://news.microsoft.com/2024/03/11/new-consortium-of-healthcare-leaders-announces-formation-of-trustworthy-responsible-ai-network-train-making-safe-and-fair-ai-accessible-to-every-healthcare-organization/
- 37. epic-open-source/seismometer [Internet]. Epic Open Source; 2025 [cited 2025 Jan 25]. Available from: https://github.com/epic-open-source/seismometer
- 38. Kamran F, Tang S, Otles E, McEvoy DS, Saleh SN, Gong J, et al. Early identification of patients admitted to hospital for covid-19 at risk of clinical deterioration: model development and multisite external validation study. BMJ. 2022;376:e068576.
- 39. Price WN II, Gerke S, Cohen IG. Chapter 9: liability for use of artificial intelligence in medicine; 2024 [cited 2025 Jan 25]. Available from: https://www.elgaronline.com/edcollchap-oa/book/9781802205657/book-part-9781802205657-16.xml
- 40. Adler-Milstein J, Redelmeier DA, Wachter RM. The limits of clinician vigilance as an AI safety bulwark. JAMA. 2024;331(14):1173–4.
- 41. Sendak MP, Gao M, Brajer N, Balu S. Presenting machine learning model information to clinical end users with model facts labels. NPJ Digit Med. 2020;3:41. pmid:32219182
- 42. The Record. The PGIP evolution: new and revised participation and quality expectations [Internet]; 2023 [cited 2025 Jan 25]. Available from: https://www.bcbsm.com/content/dam/microsites/corpcomm/provider/the_record/2023/dec/Record_01223j.html
- 43. Health AI Partnership. Practice Network sites [Internet]. [cited 2025 Jan 25]. Available from: https://healthaipartnership.org/haip-practice-network
- 44. Health AI Partnership. Health AI Hub [Internet]. [cited 2025 Aug 31]. Available from: https://healthaipartnership.org/healthaihub
- 45. Health AI Partnership. AI in action [Internet]. [cited 2025 Aug 31]. Available from: https://healthaipartnership.org/ai-in-action-practical-applications-for-safety-net-providers
- 46. Beavins E. ‘I wish we could be more optimistic’: AI at an Arizona FQHC [Internet]; 2025 [cited 2025 Aug 6]. Available from: https://www.fiercehealthcare.com/ai-and-machine-learning/i-wish-we-could-be-more-optimistic-about-everything-ai-implementation
- 47. 5 under-resourced health centers tackle AI’s challenges together [Internet]. Healthcare IT News; 2025 [cited 2025 Aug 6]. Available from: http://www.healthcareitnews.com/news/5-under-resourced-health-centers-tackle-ais-challenges-together
- 48. NACHC. America’s health centers: by the numbers [Internet]. [cited 2025 Jan 25]. Available from: https://www.nachc.org/resource/americas-health-centers-by-the-numbers/
- 49. Fast facts on U.S. hospitals, 2024 [Internet]. AHA; 2025 [cited 2025 Jan 25]. Available from: https://www.aha.org/statistics/fast-facts-us-hospitals