
A multivariate statistical evaluation of actual use of electronic health record systems implementations in Kenya

  • Philomena Ngugi ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Writing – original draft, Writing – review & editing

    Affiliations Department of Information Science and Media Studies, University of Bergen, Bergen, Norway, Institute of Biomedical Informatics, Moi University, Eldoret, Kenya

  • Ankica Babic,

    Roles Methodology, Project administration, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Department of Information Science and Media Studies, University of Bergen, Bergen, Norway, Department of Biomedical Engineering, Linköping University, Linköping, Sweden

  • Martin C. Were

    Roles Conceptualization, Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, United States of America, Vanderbilt Institute of Global Health, Vanderbilt University Medical Center, Nashville, TN, United States of America



Health facilities in developing countries are increasingly adopting Electronic Health Records systems (EHRs) to support healthcare processes. However, only limited studies are available that assess the actual use of these EHRs once adopted in such settings. We assessed the state of the 376 KenyaEMR system (the national EHRs) implementations in healthcare facilities offering HIV services in Kenya.


The study focused on seven EHRs use indicators. Six of the seven indicators were programmed and packaged into a query script for execution within each KenyaEMR system (KeEMRs) implementation, collecting monthly server-log data for each indicator for the period 2012–2019. The indicators included: Staff system use, Observations (clinical data volume), Data exchange, Standardized terminologies, Patient identification, and Automatic reports. The seventh indicator (EHR variable completeness) was derived from a routine data quality report within the EHRs. Data were analysed using descriptive statistics, and multiple linear regression analysis was used to examine how individual facility characteristics affected use of the system.


213 facilities spanning 19 counties participated in the study. On average, only 18.1% (SD = 13.1%, p < 0.001) of authorized users actively used the KeEMRs across the facilities. The mean monthly volume of clinical data (observations) captured in the EHRs was 3363 (SD = 4259). Only a few facilities (14.1%) had health data exchange capability. 97.6% of EHRs concept dictionary terms mapped to standardized terminologies such as CIEL. Within the facility EHRs, only 50.5% (SD = 35.4%, p < 0.001) of patients had the nationally-endorsed patient identifier number recorded. Multiple regression analysis indicated that the mode of EHRs use significantly affected system usage, pointing to a need for improvement in how the implementations are used.


The standard EHRs use indicators can effectively measure EHRs use and, consequently, determine the success of EHRs implementations. The results suggest that most of the assessed EHRs use areas need improvement, especially active usage of the system and data exchange readiness.


Electronic Health Records systems (EHRs) have been introduced widely into medical processes in many countries worldwide, making patient data readily available for treatment, care and analysis [1–3]. These EHRs implementations promise to improve quality of patient care and patient safety, and to reduce costs [4–6]. For instance, introduction of Electronic Medical Records systems (EMRs) in health care has shown improvement in time-dependent events such as patient waiting time and time to process specimens in the laboratory from test request to results reporting, among other benefits [7,8]. Moreover, a systematic review on utilization of EHRs for public health in Asia revealed their ability to help identify and predict seasonal outbreaks and high-risk areas and prevent infections or diseases, leading to better health outcomes [9]. Schoen et al. noted an overall increase in EHR adoption and a significant variation in the growth rate across countries in their survey of primary care doctors in health reforms [10]. Despite the infrastructural and technical challenges experienced and reported in developing countries, the uptake of EHRs in healthcare processes has also been on the rise [2,11]. However, adoption of EHRs in Sub-Saharan Africa is largely driven by international HIV treatment programs such as the President's Emergency Plan for AIDS Relief (PEPFAR) to support patient data management [11,12].

EHRs implementations involve a significant up-front investment in software design and development, infrastructure, implementation, training and IT support [13]. Sponsors, donors and management are demanding demonstrated value from EHRs implementations to inform investments and the sustainability of the implementations [14,15]. Furthermore, EHRs implementations are complex and multi-faceted, and impact healthcare organizations on many levels [15,16]. Consequently, the chances of dismal performance of these systems are high, and such performance may go unnoticed, especially in public healthcare facilities. It therefore becomes necessary to evaluate information systems to provide evidence on system functional status and fitness for purpose, with a view to informing future deployments. Maximum benefits of information systems (IS) implementation can only be realized if the systems are deeply used in the post-adoption phase [17]. As such, evaluation of the actual use of EHRs once implemented provides vital information for improving the success of existing and subsequent implementations.

Assessment of information system (IS) implementation success is complex and never straightforward [18]. Thus, a range of evaluation methodologies and frameworks have emerged with divergent approaches, strengths, and limitations [19,20]. The DeLone & McLean (D&M) IS success model is a mature and validated model for measuring health information system success, established in 1992 and revised in 2003 [21]. The model has been used to evaluate implementation success for a wide range of health information systems. Berhe et al. recently used the model to evaluate EMRs effectiveness from a user's perspective in Ayder Referral Hospital in Ethiopia [22]. Cho et al. also used the model to evaluate the performance of newly-developed information systems in three public hospitals in Korea [23].

The revised D&M model has seven dimensions used to measure IS implementation success, namely: System quality, Information quality, Service quality, System Use, Intention to use, User satisfaction and Net benefits. Of these dimensions, 'System Use' was identified as the most appropriate variable for measuring the success of IS [21,24]. System use is the utilization of an IS in work processes by individuals, groups or organizations [11]. A number of studies have measured actual EHRs use in terms of extent, frequency, duration of use and functions of the system, based mainly on behavioural responses of users gathered through questionnaires, interviews and/or focus group discussions [2,11,17,25,26]. However, only limited evaluation studies utilizing computer-generated data to assess EHRs use are available. This study was conducted to fill this gap by evaluating the actual use of a national-level EHR system implemented in healthcare facilities in Kenya, as a demonstration of how similar approaches could be applied across other low- and middle-income countries (LMICs) to evaluate use.

In most LMICs, the measure of success of EHRs scale-up often relies on simple counts of the number of EHRs implementations. This study demonstrates that: (a) through the use of standardized indicators [27], key new insights into, and gaps in, the actual status of EHRs implementations and their use within countries can be identified; (b) aspects of national-level EHRs usage assessments need not be time- or resource-intensive, as assessments can be automated using data already within the EHRs; and (c) mechanisms that allow efficient EHRs usage assessments offer insights that enable any identified EHRs usage gaps to be addressed in a timely manner.

Materials and methods

Study setting

This evaluation was conducted in Kenya, a country in East Africa with approximately 50 million persons [28]. Recognizing the role that EHRs play in patient data management, the government of Kenya, through the Ministry of Health (MoH) and in collaboration with its development partners, namely the Centers for Disease Control and Prevention (CDC) and the United States Agency for International Development (USAID), has implemented EHRs in over 1,000 public health facilities countrywide [29]. These implementations mainly support HIV care and treatment programs. While two EHRs (KenyaEMR and IQCare) by different vendors were initially endorsed for national deployment in support of HIV care, since 2019 the country has transitioned to supporting and deploying only the KenyaEMR system (KeEMRs). In Kenya, KeEMRs is implemented in facilities spread across 22 counties with varying numbers of sites per county (S1 Appendix). This study evaluated the actual use of KeEMRs within the facilities in which the system is deployed, based on computer-generated data, to establish actual EHRs usage across the country. The study was conducted using a census method, with all 376 facilities that had KeEMRs implemented between 2012 and 2019 eligible to participate. For efficiency in care delivery, these public facilities are organised into Kenya Essential Package for Health (KEPH) service levels as follows: Level 1—community level; Level 2—dispensaries and clinics; Level 3—health centres, maternity homes and sub-district hospitals; Level 4—primary facilities, which include district hospitals; Level 5—secondary facilities/provincial hospitals; and Level 6—tertiary/national hospitals.

EHR system

KeEMRs is an implementation and adaptation of the open-source OpenMRS platform, which is widely deployed in many countries in Africa [30]. KeEMRs supports both retrospective and point-of-care data entry (RDE & POC), with most of the facilities equipped for POC implementation. It was designed and developed (customized) by the International Training and Education Center for Health (I-TECH) in 2012 to support care and treatment of HIV/AIDS [31]. Currently, the Kenya Health Management Information System II (KeHMIS II) project supports the implementation of KeEMRs in over 370 health facilities throughout Kenya [32]. Fig 1 shows the homepage of the EHRs under study.

Fig 1. Screenshot of KeEMRs home page.

Reprinted from [33] under a CC BY license, with permission from The Palladium Group- KeHMIS II Project, original copyright 2012.

KeEMRs uses a communication layer, referred to as the interoperability layer (IL), to enable health data exchange with other health information systems such as the pharmacy system (ADT). From 2017, KeEMRs version 16.0.2 and above enforced the use of a nationally-endorsed 10-digit patient identifier number (five digits representing the master facility list (MFL) code and a five-digit comprehensive care clinic number (CCCNo)) for unique patient identification.
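Purely as an illustration of the identifier format described above, it can be validated and decomposed with a small helper. The function names are ours and are not part of KeEMRs:

```python
import re

def is_valid_national_id(identifier: str) -> bool:
    """Check that an identifier matches the nationally-endorsed 10-digit
    format: a 5-digit master facility list (MFL) code followed by a
    5-digit comprehensive care clinic number (CCCNo)."""
    return re.fullmatch(r"\d{10}", identifier) is not None

def split_national_id(identifier: str) -> tuple[str, str]:
    """Split a valid identifier into its (MFL code, CCCNo) parts."""
    if not is_valid_national_id(identifier):
        raise ValueError(f"not a valid 10-digit identifier: {identifier!r}")
    return identifier[:5], identifier[5:]
```

A check of this kind is what the Patient identification indicator effectively measures: the proportion of patient records whose identifier conforms to the format.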

EHRs usage indicators

The EHRs use indicators used for this study are detailed in Ngugi et al. [27]. The 15 rigorously derived indicators are modelled after the HIV Monitoring, Evaluation and Reporting (MER) indicators, with which facilities and implementers providing HIV care are well familiar [34]. This study specifically focused on the subset of indicators that could be generated from within the implemented EHRs, because the ultimate goal is to have a module within the EHRs that can automatically generate indicators, without human input, for reporting and sharing with relevant stakeholders. The seven included EHRs indicators are outlined in Table 1. Three of the eight excluded indicators (namely Reporting rate, Report timeliness and Report completeness) rely on data in the national data aggregate system, the Kenya Health Information System (KHIS), and had already been evaluated and reported in a different study [29]. The other five excluded indicators (namely Data entry statistics, System uptime, EHR variable concordance, Report concordance and Clinical data timeliness) required a level of human input to generate, based on how the indicators are defined [27].

EHRs indicators queries

Queries were developed using MySQL to generate monthly indicator reports for the evaluated indicators except EHR variable completeness. These queries were programmed to run within each EHRs implementation and were tested for accuracy on a training server prior to deployment. The data generated from the testing phase were reviewed by the researchers together with a data analyst to ensure validity of the indicator outputs, and the necessary revisions were made to the queries. The resulting six queries were then combined and packaged into a script comprising the queries and a Linux bash script that created a zipped archive file as output. Pilot testing of the script was conducted in six facilities selected randomly in two counties to ensure feasibility of data collection within facilities. The final script was distributed to the study healthcare facilities, with accompanying instructions detailing the step-by-step process (S2 Appendix) for executing the script. Data for the EHR variable completeness indicator (key data elements related to HIV care and treatment) were derived from the routine data quality assessment (RDQA) report already being generated from the EHRs.
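The actual deployment bundled MySQL queries with a Linux bash wrapper that zipped the outputs for transmission. The following sketch illustrates the same run-queries-then-archive pattern using Python with SQLite and `zipfile` as stand-ins; the table and column names loosely follow the OpenMRS `obs` table convention, but the queries and names are assumptions, not the study's actual script:

```python
import csv
import sqlite3
import zipfile
from pathlib import Path

# Hypothetical indicator queries; the real script bundled six MySQL
# queries, one per indicator, producing monthly counts.
INDICATOR_QUERIES = {
    "staff_system_use": (
        "SELECT strftime('%Y-%m', date_created) AS month, "
        "COUNT(DISTINCT creator) AS active_users "
        "FROM obs GROUP BY month"
    ),
    "observations": (
        "SELECT strftime('%Y-%m', obs_datetime) AS month, "
        "COUNT(*) AS clinical_data_volume "
        "FROM obs GROUP BY month"
    ),
}

def run_and_archive(db_path: str, out_zip: str) -> None:
    """Run each indicator query, write one CSV per indicator, and
    bundle the CSVs into a single zipped archive for transmission."""
    conn = sqlite3.connect(db_path)
    with zipfile.ZipFile(out_zip, "w") as archive:
        for name, query in INDICATOR_QUERIES.items():
            cur = conn.execute(query)
            csv_path = Path(f"{name}.csv")
            with csv_path.open("w", newline="") as fh:
                writer = csv.writer(fh)
                writer.writerow([col[0] for col in cur.description])
                writer.writerows(cur.fetchall())
            archive.write(csv_path)
    conn.close()
```

Packaging the per-indicator outputs into one archive keeps the facility-side workflow to a single step: run the script, then transmit the resulting file.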

Data collection and analysis

All 376 facilities with KeEMRs implemented were approached to participate in the study. However, the data collection script was distributed only to the 312 sites that gave authority for the study and had used the EHRs for at least six months. Experienced system champions at each facility ran the query script as per the outlined protocol (S2 Appendix). Further support on running the query and generating the report was provided through a toll-free line to the EHRs developers' helpdesk as needed. Monthly indicator data were generated from each EHRs-implementing facility from January 2012 (the earliest possible time for system deployment) to December 2019. Generated reports (data) were transmitted electronically to the research team for consolidation and data cleaning, to ensure data quality. No personally identifiable information was contained in the resulting indicator reports. All the EHRs implementations used the same terminology service, hence assessment of the Standardized terminologies indicator evaluated the proportion of terms in this dictionary that mapped to standard terminologies such as SNOMED and ICD [35,36]. Data collection for this study occurred over a period of eight weeks between April and June 2020.

Facility characteristics (KEPH level, facility-type category, ownership, services and mode of EHRs use) were derived from the Master Facility List (MFL) website maintained by the MoH. These data were summarized using descriptive statistics. Mean values and standard deviations of the collective performance by facilities on each indicator were calculated. One-way ANOVA (with Tukey's b post-hoc test) was performed to measure the variance in variable means (Staff system use, clinical volume, and Patient identification indicators) across the counties. Correlation analysis was also performed to measure the relationship between the Staff system use indicator and the volume of clinical data, for insight into user productivity. A weighted mean of the Staff system use and Patient identification indicators was computed to determine the overall performance of each facility. The weights summed to 1, with each indicator assigned a weight of 0.5 so that both contributed equally to the mean. The weighted means of the two indicators were summed for each facility, and facilities were then ranked in descending order. These two indicators were chosen because they are the key variables reflecting EHRs utilization in a facility. Data exchange indicator data were treated and analysed as dichotomous (presence or absence of the interoperability layer (IL) software that facilitates data exchange with external systems).
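The weighted-mean ranking described above amounts to a simple computation; a minimal sketch (facility names and scores hypothetical) is:

```python
def rank_facilities(
    scores: dict[str, tuple[float, float]]
) -> list[tuple[str, float]]:
    """Rank facilities by the weighted mean of two indicator means.

    `scores` maps a facility name to (staff_system_use_mean,
    patient_identification_mean), both as fractions in [0, 1].
    Each indicator carries a weight of 0.5 so the weights sum to 1.
    Returns (facility, weighted_mean) pairs in descending order."""
    weighted = {
        facility: 0.5 * staff_use + 0.5 * patient_id
        for facility, (staff_use, patient_id) in scores.items()
    }
    return sorted(weighted.items(), key=lambda kv: kv[1], reverse=True)
```

Ranking on the combined score is what allows the "best performer"/"worst performer" benchmarking reported in the results.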

Finally, we fitted a multiple linear regression model to establish how individual facility characteristics affected use of the system. The dependent variable was the number of active system users, while the covariates were the facility characteristics (KEPH level, ownership, services and mode of EHRs use). All analyses were performed using IBM SPSS Statistics 25 [37].
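The regression itself was run in SPSS. Purely to make the model concrete, an ordinary-least-squares fit of dummy-coded covariates (for example, an intercept column plus a 0/1 indicator for RDE mode; the coding is our assumption) can be sketched without external libraries via the normal equations:

```python
def ols_fit(X: list[list[float]], y: list[float]) -> list[float]:
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination. X is a design matrix whose rows
    are facilities and whose columns are dummy-coded characteristics
    (first column all 1s for the intercept); y is the number of
    active system users per facility."""
    n, p = len(X), len(X[0])
    # Build X'X and X'y.
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    # Forward elimination with partial pivoting.
    for col in range(p):
        pivot = max(range(col, p), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, p):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, p):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution.
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (xty[r]
                   - sum(xtx[r][c] * beta[c] for c in range(r + 1, p))) / xtx[r][r]
    return beta
```

A negative coefficient on a covariate (such as an RDE-mode indicator) would correspond to that characteristic depressing the number of active users, which is how the regression results are interpreted below.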

The primary outcome of interest for this study was the collective performance by facilities on each of the seven indicators over the period of KeEMRs implementation in Kenya, as a measure of overall EHRs usage. In addition, this study had several secondary outcomes of interest, namely: (a) variability in EHRs usage between counties, (b) the relationship between the number of active system users and the clinical volume, for insight into user productivity, and (c) the effect of facility characteristics on EHRs use.

Ethical statement

The study was approved by the Institutional Review and Ethics Committee at Moi University, Eldoret (MU/MTRH-IREC approval Number FAN: 0003348). Permission to collect data was also obtained from the Ministry of Health (MoH), the County Directors of Health of each county, and the Service Delivery Partners (SDPs) responsible for EHRs implementations and HIV data at the facility level. Permission to collect data from 312 (out of 376) facilities in 19 counties was granted. All participants filled in a consent form before taking part in the study. No personal identification data were collected from patient records/system databases, the healthcare facilities, or the personnel who executed the queries.


Organizational characteristics of the responding facilities

Out of the 312 facilities that assented to participate in the study, 213 (68.3%) spanning 19 counties responded. Characteristics of the responding facilities are detailed in Table 2. The responding facilities were largely between KEPH levels 2 and 4, as these were the ones offering HIV services and in which the EHRs was deployed. Most of these facilities (161, 72.3%) offered care and treatment (C&T) services. Over 86% were either health centres or hospitals, and they were largely owned and run by the Ministry of Health (88.7%). Only 9.4% of the facilities were completely paperless, with slightly over a third (38.0%) still doing fully retrospective data entry (RDE).

Table 2. Frequency distribution for the facility characteristics (n = 213).

The total number of responding facilities with EHRs implementation varied by county, with the lowest county having three while the highest had 25. Most of these implementations occurred in 2014 (113 implementations, 53.1%) followed by 2013 (91 implementations, 42.7%) (S3 Appendix). No implementations occurred in the period 2015–2017 whilst there were only four new implementations (1.8%) between 2018 and 2019 in line with the country’s planned implementation strategy.

EHRs usage indicator results

Staff system use.

An average of 18.1% (SD = 13.1%) of staff members with EHRs access rights used the system in any given period. The best- and worst-performing facilities had mean usage of 46.8% (SD = 23.3%) and 7.3% (SD = 3.3%) respectively (p < .001) (S4 Appendix).

Observations (clinical data volume).

On average, the facilities captured 3,363 (SD = 4,249) patient-related data elements (clinical data volume) monthly, based on the mandatory 23 data types of interest for HIV reporting in Kenya [38], showing high dispersion in the data collected (S4 Appendix). The facility with the highest mean monthly volume captured 28,937 (SD = 11,356) data elements, while the lowest captured 251 (SD = 167). There was a weak positive correlation between the Observations (clinical data volume) and Staff system use indicators (r = 0.01).
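The correlation reported here is a plain Pearson coefficient between two per-facility (or per-month) series. As a minimal illustration of the computation (the series values below are hypothetical):

```python
from math import sqrt

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series,
    e.g. active-user counts vs. observation volumes across facilities."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A value near zero, as observed, means the number of active users carries almost no linear information about clinical data volume, which is what motivates the shared-credentials discussion later in the paper.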

EHR variable completeness.

We observed that all the 23 data elements required for HIV patients by the MoH were contained within the records for each patient in the EHRs. Hence the EHR Variable Completeness indicator as per the country’s standard operating procedures (SOP) was 100% across the study facilities.

Data exchange.

The majority of the facilities (183/213) lacked the interoperability layer (IL) module and hence had no capability to exchange health data with external systems (S5 Appendix). Of the 14.1% of facilities that had data exchange capability, 56.7% were in one county. None of the facilities (n = 108) in 13 of the 19 counties had data exchange capability.

Standardized terminologies.

On average 97.6% (52,098 out of 53,353) of KeEMR system concepts were mapped to the standardized (international) terminologies/concept dictionaries such as CIEL and SNOMED.

Patient identification.

Only 50.5% (SD = 35.4%, p < 0.001) of patient records contained identifiers in the nationally-endorsed patient identifier format (10-digit number = 5-digit MFL + 5-digit CCCNo.) (S4 Appendix). Conformity ranged widely, from 3% to 100% across the facilities, indicating the need for further investigation into the causes of such low conformity rates. Three of the healthcare facilities had fully adopted the approved patient identifier (100%), while 28 facilities had a mean of < 10% conformity in the use of the national patient identifier.

Automatic reports.

KeEMRs is configured to generate monthly Ministry of Health routine reports (MoH 731) for transmission to the national reporting system (KHIS). However, by the time of this study, we could not capture the data needed to compute the Automatic reports indicator (the proportion of expected reports to the national level that are automatically generated and transmitted to the national reporting system). This was because records of the generated reports and their transmission are not saved; the relevant tables are refreshed on a daily basis.

Performance of the facilities

Using the weighted mean of the mean scores of the Staff system use and Patient identification indicators, facilities were benchmarked against each other using a "best performer" and "worst performer" approach. The weighted mean ranged from 9% to 65% across the 213 facilities. S6 Appendix presents the facility performance list from highest to lowest. The top ten performing facilities had an average weighted mean of 61% (range 59–65%), while the bottom ten facilities had an average of 11% (range 9–12%).

EHRs use against facility characteristics

The relationship between the facility characteristics and the number of active system users, assessed by multiple linear regression analysis, was statistically significant (p < 0.001) for all the covariates (Table 3). The characteristics influenced system usage positively, with the exception of the Mode of EHRs use characteristic; the RDE mode of EHRs use had the largest negative impact on use of the system.

Table 3. Multiple linear regression model for staff system use and facility characteristics.


To our knowledge, this is the first national-level study to systematically evaluate actual EHRs use post-implementation utilizing computer-generated real-time data based on robustly developed EHRs usage indicators. A systematic review on measuring EHRs use in primary care revealed that most studies measured use by assessing the utilization of individual EHRs functions [26]. The findings from our study highlight the fact that simply counting the number of EHRs implementations is highly inadequate for determining IS implementation success. The multidimensional set of indicators for evaluating EHRs use in this study aligns with the three main components of EHRs meaningful use, namely: (1) EHRs must be used in care processes such as prescribing, (2) EHRs must encompass electronic health data exchange for improved health care quality, and (3) EHRs must support reporting of clinical measures [39,40]. In this study, the indicators reflecting the system use and interoperability domains showed low measures, suggesting the need for further improvement.

Measuring system use at the application level sheds light on how fully or effectively organizations are using IT [41]. In our study, the overall Staff system use was very low across all the facilities regardless of the period of EHRs implementation. The study established the existence of many dormant user accounts in the EHRs across all facilities, inflating the number of users authorized to use the system (the denominator) relative to the actual number of users (the numerator) and hence lowering the average. The low mean could also have resulted from shared login credentials or shared computers, with multiple users operating under one account. This creates the appearance that a single user performed all activities on patients' files (create, update or delete), which were the measures assessed for the Staff system use indicator, thereby compromising the accuracy of the numerator count. A study investigating users' behaviour in password utilization revealed that users share passwords for convenience as well as as a show of trust [42]. These findings warrant a deeper assessment of user credentialing processes and account usage patterns (such as sharing of credentials). They also highlight the need to re-emphasize good password practices to system users and active monitoring of user accounts by system administrators. We also recommend further research to establish the user-to-computer ratio in the healthcare facilities.

While our results show KeEMRs' readiness to interoperate with other external systems, owing to the high mapping rate of its concepts to standard terminology services like CIEL [38,43], the study established a slow incorporation of the interoperability layer (IL) within the EHRs. Integration with other systems is one of several system quality measures, alongside ease of use, functionality, reliability and flexibility [44]. The low Data exchange indicator findings from this study suggest the need to investigate the other system quality measures as well. Technological barriers, such as functionality and compatibility issues, and non-user-friendliness can limit system use [45]. The actual uptake of the nationally-accepted patient identifiers was average, although with large variations in uptake levels between facilities and between counties. Several studies reveal lack of interoperability as a well-known impediment to successful EHRs adoption and use [46–49]. As such, an interoperability layer should be incorporated into all EHRs implementations, alongside concerted efforts towards nationwide adoption and use of a unique patient identifier, which promises to improve patient safety and care efficiency [50].

The study expected a strong positive correlation between the Staff system use and Observations (clinical data volume) indicators recorded in the EHRs, which was not the case. This could be attributed to the possibility of users sharing login credentials, as noted earlier. Several factors determine facility clinical volume, such as patient volume, frequency of patient visits (encounters), EHRs mode of use and active usage of the system during care, all unique to each facility. Ideally, facilities entering data retrospectively should transfer paper records into the EHRs efficiently and in a timely fashion for 100% concordance. However, a study on EHRs use and user satisfaction by Tilahun and Fritz revealed retrospective data entry to be a major cause of dissatisfaction with EHRs use among users, especially when the same individuals collecting the data are tasked with entering it into the system later [2]. Indeed, our study revealed that point-of-care (POC) and hybrid modes of data capture were associated with increased system usage compared to retrospective data entry. Thus, EHRs implementors should aim for a point-of-care mode of operation right from initiation.

Study strengths and limitations

The key strength of the study was the use of empirical data extracted directly from the EHRs, and thus not subject to the bias normally introduced by human judgment in self-reports such as questionnaires. Boon et al., in their study on antecedents of continued use and extended use of enterprise systems, strongly recommended the use of system log file data to overcome human-related response bias [51]. Secondly, the study period (2012–2019) was long enough to reveal the state of EHRs use in the healthcare facilities. The study results are also reliable due to the use of a census method in the collection of the primary data. Furthermore, the facilities had a diverse range of characteristics in terms of ownership and facility levels, and covered a broad geographic area of Kenya. The study does, however, acknowledge a few limitations. It was conducted in only one country (Kenya), and the findings do not necessarily translate directly to other countries. However, the study provides a demonstration case that other countries can model to inform similar EHRs usage evaluations. Finally, this study focused only on facilities where the EHRs were in actual use, without examining locations where the EHRs were implemented and subsequently failed. Attention needs to be paid to failed implementations, to ensure that usage rates are not being over-reported.

In the next step of our research, we will conduct qualitative assessments to better understand the observed findings. This will be done through Focus Group Discussions (FGDs) and semi-structured interviews with EHRs users and key stakeholders. Further, we will work with relevant partners to integrate outputs and visualizations of the usage reports within the EHRs, and to provide visualizations and dashboards for managers and decision-makers to increase visibility of system usage within and across facilities. It is also recognized that continued usage of EHRs in patient care processes does not necessarily lead to better work performance or improved care quality. Further research is needed to investigate the impact of EHRs usage on care quality and outcomes.


Assessment of the actual use of implemented EHRs within LMICs is important. The systematically generated standard EHRs usage indicators can be adopted and used successfully within facilities across countries. Results from this study demonstrate that there are many areas for improvement in EHRs use, as well as a need for continuous monitoring of EHRs use to inform timely interventions. Simply counting the number of implementations, as is currently done in many settings, remains a highly inadequate measure for evaluating the success of EHRs implementations.

Supporting information

S1 Appendix. Distribution of KeEMRs implementations as of June 2020.


S2 Appendix. Standard operating procedures for query extraction.


S3 Appendix. KeEMRs implementations distribution in the period 2012–2019 across the counties (n = 19).


S4 Appendix. Facilities descriptive statistics for staff system use, observations & patient identification indicators.


S5 Appendix. Interoperability layer (IL) module (data exchange) presence/absence in facilities across the counties.


S6 Appendix. Facilities performance using weighted means.



The authors would like to acknowledge the KeEMR system developers for providing input into the testing of the study instrument (query script) and helpdesk support to study participants. We also appreciate the logistical support of the Kenya Ministry of Health, County health directors, AMPATH Plus and FACES service delivery partners, and County Health Records Information Officers (CHRIOs). Much appreciation also to all the healthcare facilities and the system champions for their participation in the study.


References

  1. Petersen C., "Visualization for All: The Importance of Creating Data Representations Patients Can Use," in Proceedings of the 2016 Workshop on Visual Analytics in Healthcare in conjunction with AMIA, 2016, pp. 46–49.
  2. Tilahun B. and Fritz F., "Comprehensive Evaluation of Electronic Medical Record System Use and User Satisfaction at Five Low-Resource Setting Hospitals in Ethiopia," JMIR Med. Inform., vol. 3, no. 2, p. e22, 2015. pmid:26007237
  3. Nguyen L., Bellucci E., and Nguyen L. T., "Electronic health records implementation: An evaluation of information system impact and contingency factors," Int. J. Med. Inform., vol. 83, no. 11, pp. 779–796, 2014. pmid:25085286
  4. King J., Patel V., Jamoom E. W., and Furukawa M. F., "Clinical benefits of electronic health record use: National findings," Health Serv. Res., vol. 49, no. 1 Pt 2, pp. 392–404, 2014. pmid:24359580
  5. Hydari M. Z., Telang R., and Marella W. M., "Electronic health records and patient safety," Commun. ACM, vol. 58, no. 11, pp. 30–32, 2015.
  6. Canada Health Infoway, "The emerging benefits of EMR use in ambulatory care in Canada: Benefits Evaluation Study," 2016.
  7. Alamo S. T. et al., "Electronic medical records and same day patient tracing improves clinic efficiency and adherence to appointments in a community based HIV/AIDS care program, in Uganda," AIDS Behav., vol. 16, no. 2, pp. 368–374, 2012. pmid:21739285
  8. Jawhari B., Ludwick D., Keenan L., Zakus D., and Hayward R., "Benefits and challenges of EMR implementations in low resource settings: A state-of-the-art review," BMC Med. Inform. Decis. Mak., vol. 16, no. 1, pp. 1–12, 2016. pmid:27600269
  9. Dornan L., Pinyopornpanish K., Jiraporncharoen W., Hashmi A., Dejkriengkraikul N., and Angkurawaranon C., "Utilisation of Electronic Health Records for Public Health in Asia: A Review of Success Factors and Potential Challenges," Biomed Res. Int., vol. 2019, 2019. pmid:31360723
  10. Schoen C. et al., "A Survey Of Primary Care Doctors In Ten Countries Shows Progress In Use Of Health Information Technology, Less In Other Areas," Health Aff., vol. 31, no. 12, pp. 2805–2816, 2012. pmid:23154997
  11. Akanbi M. O. et al., "Use of Electronic Health Records in sub-Saharan Africa: Progress and challenges," J. Med. Trop., vol. 14, no. 1, pp. 1–6, 2012. pmid:25243111
  12. Kang'a S. G., Muthee V. M., Liku N., Too D., and Puttkammer N., "People, Process and Technology: Strategies for Assuring Sustainable Implementation of EMRs at Public-Sector Health Facilities in Kenya," AMIA Annu. Symp. Proc., vol. 2016, pp. 677–685, 2016. pmid:28269864
  13. Kruse C. S., Kristof C., Jones B., Mitchell E., and Martinez A., "Barriers to Electronic Health Record Adoption: a Systematic Literature Review," J. Med. Syst., vol. 40, no. 12, 2016. pmid:27714560
  14. Piette J. D. et al., "Impacts of e-health on the outcomes of care in low- and middle-income countries: where do we go from here?," Bull. World Health Organ., vol. 90, no. 5, pp. 365–372, 2012. pmid:22589570
  15. Bardhan I. R. and Thouin M. F., "Health information technology and its impact on the quality and cost of healthcare delivery," Decis. Support Syst., vol. 55, no. 2, pp. 438–449, 2013.
  16. Alolayyan M. N., Alyahya M. S., Alalawin A. H., Shoukat A., and Nusairat F. T., "Health information technology and hospital performance: the role of health information quality in teaching hospitals," Heliyon, vol. 6, no. 10, p. e05040, 2020. pmid:33088935
  17. Tubaishat A. and AL-Rawajfah O. M., "The use of electronic medical records in Jordanian hospitals: A nationwide survey," Comput. Inform. Nurs., vol. 35, no. 10, pp. 538–545, 2017. pmid:28976916
  18. Sligo J., Gauld R., Roberts V., and Villa L., "A literature review for large-scale health information system project planning, implementation and evaluation," Int. J. Med. Inform., vol. 97, pp. 86–97, 2017. pmid:27919399
  19. Eslami Andargoli A., Scheepers H., Rajendran D., and Sohal A., "Health information systems evaluation frameworks: A systematic review," Int. J. Med. Inform., vol. 97, pp. 195–209, 2017. pmid:27919378
  20. Stylianides A., Mantas J., Roupa Z., and Yamasaki E. N., "Development of an evaluation framework for health information systems (DIPSA)," Acta Inform. Med., vol. 26, no. 4, pp. 230–234, 2018.
  21. Mardiana S., Tjakraatmadja J. H., and Aprianingsih A., "DeLone–McLean Information System Success Model Revisited: The Separation of Intention to Use - Use and the Integration of Technology Acceptance Models," Int. J. Econ. Financ. Issues, vol. 5, no. 5, pp. 172–182, 2015.
  22. Berhe M., Tadesse K., Berhe G., and Gebretsadik T., "Evaluation of Electronic Medical Record Implementation from User's Perspectives in Ayder Referral Hospital Ethiopia," J. Health Med. Inform., vol. 8, no. 1, pp. 1–13, 2017.
  23. Cho K. W., Bae S. K., Ryu J. H., Kim K. N., An C. H., and Chae Y. M., "Performance evaluation of public hospital information systems by the information system success model," Healthc. Inform. Res., vol. 21, no. 1, pp. 43–48, 2015. pmid:25705557
  24. DeLone W. H. and McLean E. R., "Information Systems Success Measurement," Found. Trends Inf. Syst., vol. 2, no. 1, pp. 1–116, 2016.
  25. Maillet E., Sicotte C., and Mathieu L., "The actual use of an electronic medical record (EMR) by acute care nurses: Examining a multidimensional measure at different adoption stages," Stud. Health Technol. Inform., vol. 250, pp. 241–242, 2018. pmid:29857451
  26. Huang M. Z., Gibson C. J., and Terry A. L., "Measuring Electronic Health Record Use in Primary Care: A Scoping Review," Appl. Clin. Inform., vol. 9, no. 1, pp. 15–33, 2018. pmid:29320797
  27. Ngugi P. N., Babic A., Kariuki J., Santas X., Naanyu V., and Were M., "Development of Standard Indicators to Assess Use of Electronic Health Record Systems Implemented in Low- and Medium-Income Countries," PLoS One, no. 4, pp. 1–15, 2021. pmid:33428656
  28. Kenya National Bureau of Statistics, "2019 Kenya Population and Housing Census Volume 1: Population by County and Sub-County," 2019.
  29. Ngugi P. N., Gesicho M. B., Babic A., and Were M. C., "Assessment of HIV Data Reporting Performance by Facilities During EMR Systems Implementations in Kenya," Stud. Health Technol. Inform., vol. 272, pp. 167–170, 2020. pmid:32604627
  30. "KenyaEMR Distribution—Documentation—OpenMRS Wiki." [Online]. Available: [Accessed: 16-Sep-2020].
  31. "KenyaEMR Implemented at More Than 340 Sites in Under Two Years–I-TECH." [Online]. Available: [Accessed: 22-Nov-2017].
  32. "KeHMIS Applications." [Online]. Available: [Accessed: 16-Sep-2020].
  33. "KenyaEMR." [Online]. Available: [Accessed: 16-Sep-2020].
  34. PEPFAR, "Monitoring, Evaluation, and Reporting Indicator Reference Guide. MER 2.0 (Version 2.4)," no. September. 2019.
  35. Randorff Højen A. and Rosenbeck Gøeg K., "SNOMED CT implementation: Mapping guidelines facilitating reuse of data," Methods Inf. Med., 2012. pmid:23038162
  36. WHO, "International Statistical Classification of Diseases and Related Health Problems, 10th Revision ICD-10: Tabular List," World Health Organ., vol. 1, pp. 332–345, 2016.
  37. "SPSS Software—United Kingdom | IBM." [Online]. Available: [Accessed: 16-Sep-2020].
  38. Keny A., Wanyee S., Kwaro D., Mulwa E., and Were M. C., "Developing a National-Level Concept Dictionary for EHR Implementations in Kenya," Stud. Health Technol. Inform., vol. 216, pp. 780–784, 2015. pmid:26262158
  39. Bowens F. M., Frye P. A., and Jones W. A., "Health information technology: integration of clinical workflow into meaningful use of electronic health records," Perspect. Health Inf. Manag., vol. 7, 2010. pmid:21063545
  40. "United States Department of Health and Human Services. Centers for Medicare and Medicaid Services. CMS EHR meaningful use overview." [Online]. Available: [Accessed: 04-Sep-2020].
  41. Maillet É., Mathieu L., and Sicotte C., "Modeling factors explaining the acceptance, actual use and satisfaction of nurses using an Electronic Patient Record in acute care settings: An extension of the UTAUT," Int. J. Med. Inform., vol. 84, no. 1, pp. 36–47, 2015. pmid:25288192
  42. Morris R. and Thompson K., "Password security," Int. J. Secur., vol. 8, no. 1, 2014.
  43. "SNOMED—Columbia International eHealth Laboratory concept dictionary for OpenMRS." [Online]. Available: [Accessed: 16-Sep-2020].
  44. DeLone W. H. and McLean E. R., "The DeLone and McLean Model of Information Systems Success: A Ten-Year Update," J. Manag. Inf. Syst., vol. 19, no. 4, pp. 9–30, 2003.
  45. Mohamadali N. A. and Aziz N. F. A., "The Technology Factors as Barriers for Sustainable Health Information Systems (HIS)-A Review," Procedia Comput. Sci., vol. 124, pp. 370–378, 2017.
  46. Ngugi P., Were M. C., and Babic A., "Facilitators and Barriers of Electronic Medical Records Systems Implementation in Low Resource Settings: A Holistic View," Stud. Health Technol. Inform., vol. 251, pp. 187–190, 2018. pmid:29968634
  47. Kruse C. S., Mileski M., Alaytsev V., Carol E., and Williams A., "Adoption factors associated with electronic health record among long-term care facilities: A systematic review," BMJ Open, vol. 5, no. 1, 2015.
  48. Farzianpour F., Amirian S., and Byravan R., "An Investigation on the Barriers and Facilitators of the Implementation of Electronic Health Records (EHR)," Health (Irvine, Calif.), vol. 7, pp. 1665–1670, 2015.
  49. Jawhari B. et al., "Barriers and facilitators to Electronic Medical Record (EMR) use in an urban slum," Int. J. Med. Inform., vol. 94, pp. 246–254, 2016. pmid:27573333
  50. Waruhari P., Babic A., Nderu L., and Were M. C., "A review of current patient matching techniques," Stud. Health Technol. Inform., vol. 238, pp. 205–208, 2017. pmid:28679924
  51. See B. P., Yap C. S., and Ahmad R., "Antecedents of continued use and extended use of enterprise systems," Behav. Inf. Technol., vol. 38, no. 4, pp. 384–400, 2019.