Abstract
Background
Electronic Health Record Systems (EHRs) are being rolled out nationally in many low- and middle-income countries (LMICs), yet assessing actual system usage remains a challenge. We employed a nominal group technique (NGT) process to systematically develop high-quality indicators for evaluating actual usage of EHRs in LMICs.
Methods
An initial set of 14 candidate indicators was developed by the study team by adapting the Human Immunodeficiency Virus (HIV) Monitoring, Evaluation, and Reporting (MER) indicators format. A multidisciplinary team of 10 experts was convened in a two-day NGT workshop in Kenya to systematically evaluate, rate (using Specific, Measurable, Achievable, Relevant, and Time-Bound (SMART) criteria), prioritize, refine, and identify new indicators. NGT steps included an introduction to candidate indicators, silent indicator ranking, round-robin indicator rating, and silent generation of new indicators. A 5-point Likert scale was used to rate the candidate indicators against the SMART components.
Results
Candidate indicators were rated highly on SMART criteria (mean 4.05/5). NGT participants settled on 15 final indicators, categorized as system use (4), data quality (3), system interoperability (3), and reporting (5). The data entry statistics, system uptime, and EHRs variable concordance indicators were rated highest.
Citation: Ngugi P, Babic A, Kariuki J, Santas X, Naanyu V, Were MC (2021) Development of standard indicators to assess use of electronic health record systems implemented in low-and medium-income countries. PLoS ONE 16(1): e0244917. https://doi.org/10.1371/journal.pone.0244917
Editor: Chaisiri Angkurawaranon, Chiang Mai University Faculty of Medicine, THAILAND
Received: August 26, 2020; Accepted: December 20, 2020; Published: January 11, 2021
This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.
Data Availability: All relevant data are within the paper and its Supporting information files.
Funding: MCW received funding from the Norwegian Programme for Capacity Development in Higher Education and Research for Development (NORAD: Project QZA-0484) through the HITRAIN program (https://norad.no/en/front/funding/norhed/projects/#&sort=date). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Electronic Health Record Systems (EHRs) are increasingly being implemented within low- and middle-income country (LMIC) settings, with the goal of improving clinical practice, supporting efficient health reporting, and improving the quality of care provided [1,2]. System implementation is the installation and customization of information systems in organizations, making them available for use to support service delivery, e.g. EHRs in healthcare [3,4]. National-level implementations of EHRs in many LMICs primarily aim to support HIV care and treatment, with funding for these systems coming from programs such as the US President’s Emergency Plan for AIDS Relief (PEPFAR) [5,6]. Several countries, such as Rwanda, Uganda, Mozambique, and Kenya, have gone beyond isolated and pilot implementations of EHRs to large-scale national rollout of systems within government-run public facilities [7]. For example, Kenya has progressively implemented over 1,000 electronic medical record (EMR) systems since 2012 in both private and public facilities, supporting patient data management mainly in HIV care and treatment [8]. With such large-scale EHRs implementations, developing countries find themselves in the unenviable position of being unable to easily track the status of each implementation, especially given that most EHRs implementations are standalone and distributed over large geographical areas. A core consideration is the extent to which the implemented EHRs are actually in use to support patient care, program monitoring, and reporting. Without robust evidence of use of the implemented EHRs, it becomes difficult to justify continued financial support of these systems within these resource-constrained settings and to realize their anticipated benefits.
In LMICs, implementation of EHRs within a clinical setting does not automatically translate to use of the system. While the evidence is mounting on the benefits of EHRs in improving patient care and reporting in these settings, a number of studies reveal critical challenges to realizing these benefits [9–11]. Some of these challenges include: poor infrastructure (lack of stable electricity, unreliable Internet connectivity, inadequate computer equipment), inadequate technical support, limited computer skills and training, and limited funding [12–17]. Additionally, implementation of EHRs is complex and can be highly disruptive to conventional workflows. Disruption caused by the EHRs can affect its acceptance and use; this is more likely to happen if the implementation was not carefully planned and if end-users were not adequately involved during all stages of the implementation [18–21]. The use of the EHRs can also be affected by data quality issues, such as completeness, accuracy, and timeliness [22]. This is a particular risk in LMICs given the lack of adequate infrastructure, human capacity, and EHRs interoperability across healthcare facilities [23].
Although LMICs have embraced national-level EHRs implementations, little evidence exists from systematic evaluations of the actual success of these implementations, with success largely defined as a measure of the effectiveness of the EHRs in supporting care delivery and health system strengthening [24–26]. Success of EHRs implementation depends on numerous factors, and these often go beyond simple consideration of the technology used [19,20]. Many information system (IS) success frameworks and models incorporate a diverse set of success measures, such as “effectiveness, efficiency, organizational attitudes and commitment, users’ satisfaction, patient satisfaction, and system use” [27–34]. Among these frameworks and models, “system use” is considered an important measure in evaluating IS success, with IS usage defined as “the utilization of information technology (IT) within users’ processes either individually, or within groups or organizations” [29,31]. There are several proposed measures for system use, such as frequency of use, extent of use, and number of system accesses, but these tend to differ between models. The system use measures are either self-reported (subjective) or computer-recorded (objective) [22,29,30].
There is compelling evidence that IS success models need to be carefully specified for a given context [34]. EHRs implementations within LMICs have unique considerations; hence, system use measures need to be defined in a way that ensures they are relevant and meet EHRs monitoring needs, while not being too burdensome to collect accurately. Carefully developed EHRs use indicators and metrics are needed to regularly monitor the status of EHRs implementations, in order to identify and rectify challenges and advance effective use. A common set of EHRs indicators and metrics would allow for standardized aggregation of the performance of implementations across locations and countries. This is similar to the systems currently in use for monitoring the success of HIV care and treatment through a standard set of HIV Monitoring, Evaluation and Reporting (MER) indicators [35].
All care settings providing HIV care through the PEPFAR program, across all countries, are required to report the HIV indicators per the MER indicator definitions. Developing EHRs indicators along the same lines and format as the HIV MER indicators ensures that the resulting EHRs system use indicators are in a format familiar to most care settings within LMICs. This approach reduces the learning curve for understanding and applying the developed indicators. In this paper, we present the development and validation of a detailed set of EHRs use indicators that follows the HIV MER format, using nominal group technique (NGT) and group validation. Although developed for Kenya, the indicators are applicable to other LMICs and similar contexts.
Materials and methods
Identification of candidate set of EHRs use indicators
Using desk review, literature review, and discussions with subject matter experts, the study team (PN, MW, JK, XS, AB) identified an initial set of 14 candidate indicators for EHRs use [36–39]. The candidate set of indicators was structured around four main thematic areas, namely: system use, data quality, interoperability, and reporting. The system use and data quality dimensions broadly reflect IS system use aspects contained in the DeLone and McLean IS success model, while the interoperability and reporting dimensions enhance system availability and use [39]. The focus was to come up with practical indicators that were specific, measurable, achievable, relevant, and time-bound (SMART) [40]. This would allow the developed indicators to be collected easily, reliably, accurately, and in a timely fashion within the resource constraints of clinical settings where the information systems are implemented.
Each of the 14 candidate indicators was developed to clearly outline the description of the indicator, the data elements constituting the numerator and denominator, how the indicator data should be collected, and what data sources would be used for the indicator. These details were developed using a template adapted from the HIV MER 2.0 indicator reference guide, given that information systems users in most of these implementation settings were already familiar with this template (S1 Appendix) [35]. Nevertheless, given the simplicity of the format, only a short training period should be required for those unfamiliar with it.
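As an illustration, such a reference sheet can be represented as a simple structured record. The sketch below is purely illustrative; the field names and example values are our own assumptions, and the authoritative structure is the MER-style template in S1 Appendix.

```python
from dataclasses import dataclass

# Hypothetical representation of one indicator reference sheet; field names
# are illustrative, not the official MER 2.0 template fields.
@dataclass
class IndicatorSheet:
    name: str            # e.g. "Data Entry Statistics"
    theme: str           # system use | data quality | interoperability | reporting
    description: str     # what the indicator measures
    numerator: str       # data element(s) forming the numerator
    denominator: str     # data element(s) forming the denominator
    how_to_collect: str  # collection procedure
    data_sources: list   # where the measure data come from

example = IndicatorSheet(
    name="Data Entry Statistics",
    theme="system use",
    description="Volume of clinical encounters entered into the EHRs over a period",
    numerator="Encounters entered into the EHRs during the reporting period",
    denominator="All encounters recorded at the facility during the same period",
    how_to_collect="Query the EHRs database; compare against facility registers",
    data_sources=["EHRs database", "facility registers"],
)
```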
Nominal group technique (NGT)
NGT is a ranking method that enables a controlled group of nine or ten subject matter experts to generate and prioritize a large number of issues within a structure that gives the participants an equal voice [41]. The NGT involves several steps, namely: 1) silent, written generation of responses to a specific question, 2) round-robin recording of ideas, 3) serial discussion for clarification, and 4) voting on item importance. It allows for equal participation of members and generates data that are quantitative, objective, and prioritized [42,43]. NGT was used in this study to reach consensus on the final set of indicators for monitoring EHRs use.
NGT participants
Indicator development requires consultation with a broad range of subject matter experts with knowledge of the development, implementation, and use of EHRs. With guidance from the Kenya Ministry of Health (MoH), a heterogeneous group of 10 experts was invited to a two-day workshop led by two of the researchers (M.W. and P.N.) and a qualitative researcher (V.N.). Inclusion in the NGT team was based on the ability of the participant to inform the conversation around EHRs usage metrics and indicators, with an emphasis on assuring that multiple perspectives were represented in the deliberations. The NGT participants’ average age was 40 years, and the majority were male (69%). The participants included: the researchers acting as facilitators; one qualitative researcher (an associate professor and lecturer); two MoH representatives from the Division of Health Informatics and M&E (health information systems management experts); one Service Development Partners (SDPs) representative (who oversees EHRs implementations and training of users); four users of the EHRs (two clinical officers and two health records information officers); a CDC funding agency representative (an informatics service fellow in Health Information Systems); and two representatives from the EHRs development and implementing partners (Palladium and the International Training and Education Center for Health (I-TECH)), who have been involved in the EHRs implementations and who selected sites for EHRs implementations [44,45]. The study participants were consenting adults, and participation in the group discussion was voluntary. All participants completed a printed consent form before taking part in the study. Discussions were conducted in English, with which all participants were conversant. For analysis and reporting purposes, demographic data and roles of participants were collected, but no personal identifiers were captured. The study was approved by the Institutional Review and Ethics Committee at Moi University, Eldoret (MU/MTRH-IREC approval number FAN:0003348).
NGT process
The NGT exercise was conducted on April 8–9, 2019, in Naivasha, Kenya. After providing informed consent, the NGT participants were introduced to the purpose of the session through a central theme question: “How can we determine the actual use of EHRs implemented in our healthcare facilities?” Participants were first given an overview of the NGT methodology and how it has been used in the past. Given that candidate indicators had already been defined in a separate process, we did not include the first stage of silent generation of ideas. Ten NGT participants (excluding research team members) evaluated the quality of the candidate indicators against the SMART criteria, rating each of the five quality components on a 5-point Likert scale. The NGT exercise was conducted using the following five specific steps:
- Step 1: Clarification of indicators. For each of the 14 candidate indicators, the facilitator took five minutes to introduce and clarify details of the candidate indicator to ensure all participants understood what each indicator was meant to measure and how it would be generated. Where needed, participants asked questions and facilitators provided clarifications.
- Step 2: Silent indicator rating. The participants were given 10 minutes per indicator and were asked to: (1) individually and anonymously rate each candidate indicator on each of the SMART dimensions using a 5-point Likert scale for each dimension where 1 = Very Low, 2 = Low, 3 = Neutral, 4 = High, and 5 = Very high level of quality; (2) provide an overall rating of each indicator on a scale from 1–10, with 10 being the highest overall rating for an indicator; (3) indicate whether the indicator should be included in the final list of indicators or removed from consideration; and (4) provide written comments on any aspect regarding the indicator and their rating process. To help with this process, a printed standardized indicator ranking form was provided (S2 Appendix), and the indicator details were projected on a screen.
- Step 3: Round-robin recording of indicator rating. Each participant in turn was asked to give their overall rating of each indicator and these were recorded on a frequency table. No discussions, questions, or comments were allowed until all the participants had given their ratings. At the end of the round-robin, each participant in turn elucidated his/her criteria for the indicator overall rating score. At this stage, open discussions, questions and comments on the indicator were allowed. The discussions were recorded verbatim. The participants were not allowed to revise their individual rating score after the discussion.
- Step 4: Silent generation of new indicators. After steps 2 and 3 were repeated for all 14 candidate indicators, the participants were given ten minutes to think and write down any missing indicators in line with the central theme question. The new indicator ideas were shared in a round-robin without repeating what had been shared by other participants. These new proposed indicators were written on a flip chart and discussed to ensure all participants understood and approved any new indicator suggestions. The facilitator ensured that all participants were given an opportunity to contribute. From this exercise, new indicators were generated and details defined collectively by the team.
- Step 5: Ranking and sequencing the indicators. After Step 4, with the exclusion of some of the original candidate indicators and the addition of new ones based on team discussions, a final list of 15 indicators was generated. Each participant was asked to individually and anonymously rank the final list of 15 indicators in order of importance, with rank 1 being the most important and rank 15 the least important. The participants were also asked to group the 15 indicators by implementation priority and sequence them into Phase 1 or Phase 2. Phase 1 indicators would be those deemed as not requiring much work to collect, while Phase 2 indicators would require more human input and resources to collect.
Selection of final indicators
All the individual rankings for each indicator were summed across participants, and the final list of prioritized consensus-based EHRs use indicators was derived from the rank order based on the average scores. The ranked indicator list was shared for final discussion and approval by the full team of NGT participants. The relevant indicator reference sheets were also updated based on discussions from the NGT exercise. No fixed threshold number was used to select the indicators for inclusion. Finally, the indicator details were reviewed (including indicator definitions, how data elements are collected, and how each indicator is calculated) as guided by the NGT session discussions, resulting in the final consensus-based EHRs use reference sheets with details for each indicator.
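A minimal sketch of this score-and-rank step is given below; the participant labels, indicator subset, and rank values are illustrative placeholders, not the study data.

```python
# Aggregate individual rankings into a consensus priority order.
# rankings: participant -> {indicator: rank}, where rank 1 = most important.
rankings = {
    "P1": {"Data Entry Statistics": 1, "System Uptime": 2, "Reporting Rate": 15},
    "P2": {"Data Entry Statistics": 2, "System Uptime": 1, "Reporting Rate": 14},
    # ... in practice, one entry per participant covering all 15 indicators
}

indicators = {ind for r in rankings.values() for ind in r}
avg_rank = {
    ind: sum(r[ind] for r in rankings.values()) / len(rankings)
    for ind in indicators
}

# Lowest average rank score = highest consensus priority.
ordered = sorted(avg_rank.items(), key=lambda kv: kv[1])
for priority, (ind, score) in enumerate(ordered, start=1):
    print(f"{priority}. {ind} (mean rank {score:.2f})")
```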
Data analysis
Descriptive statistics were computed to investigate differences in the rating of the 14 candidate indicators among the participants. A Chi-square test was used to determine whether there were statistically significant differences in the rating of indicators across each of the SMART dimensions. The rating totals per SMART dimension from the crosstabs analysis output were summarized in a table (Table 1), indicating the p-value generated from the Chi-square output for each dimension. The variability between the SMART dimensions and the ratings was tested using Chi-square since the parameters under investigation were categorical variables (non-parametric data). The totals include rating counts and their percentages. The weighted mean for each SMART dimension across all 14 indicators was calculated to identify how the participants rated the various candidate indicators. For the final indicator list, descriptive statistics were computed to determine the average rank score for each indicator and to assign priority numbers from the lowest average score to the highest. As such, the indicator with the lowest average score was considered the most important per the participants’ consensus. All analyses were performed in SPSS version 25 (IBM, https://www.ibm.com/analytics/spss-statistics-software). The indicators were also grouped according to the implementation phase number assigned by the participants (either 1 or 2) to form the implementation order phases.
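The analyses were run in SPSS; for illustration only, an equivalent Chi-square and weighted-mean computation might look like the sketch below, where the rating counts are invented placeholders rather than the study data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented rating counts for one SMART dimension (e.g. "Specific"):
# one row per participant, columns = counts of ratings 1..5 given across
# the 14 candidate indicators. Real counts come from the rating forms.
counts = np.array([
    [0, 1, 2, 6, 5],   # participant 1
    [1, 0, 4, 5, 4],   # participant 2
    [0, 2, 3, 4, 5],   # participant 3
])

# Chi-square test: do rating distributions differ across participants?
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")

# Weighted mean rating for the dimension across all indicators:
# sum(rating * count) / sum(count).
ratings = np.arange(1, 6)
weighted_mean = (counts.sum(axis=0) * ratings).sum() / counts.sum()
print(f"Weighted mean rating: {weighted_mean:.2f}")
```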
Results
SMART criteria rating for candidate indicators
The participants rated the collective set of the 14 candidate indicators highly (i.e. 4 or 5) across all the SMART dimensions (Table 1). However, the totals varied across the SMART components because some participants did not rate certain components.
From the analysis, the indicators were rated high on the specific and time-bound SMART quality dimensions, with means of 3.96 (p-value = 0.141) for specific and 4.17 (p-value = 0.228) for time-bound; however, these two dimensions did not show any statistically significant difference in how the various participants rated them. The measurable, achievable, and relevant dimensions were also rated high, with means of 3.86 (p-value = 0.009), 4.01 (p-value = 0.039), and 4.27 (p-value = 0.023), respectively, and showed statistically significant differences in how the participants rated them across all the indicators.
Individual indicator ratings
Table 2 shows the participants’ overall ratings for each of the 14 candidate indicators on a scale of 1 to 10, from lowest to highest rating. Generally, the participants rated the candidate set of indicators highly, with an overall mean rating of 6.6. Data concordance and automatic reports were rated highest, with means above 8.0. However, the participants rated the observations indicator low, with a mean of 3.8, while the staff system use, system uptime, and report completeness indicators received means of 4.4, 5.9, and 5.8, respectively. The individual indicator ratings and the ratings against the SMART criteria served as a validation metric for the candidate indicators.
Final indicators list
The NGT team reached a consensus to include all 14 candidate indicators in the final list of indicators, and added one additional indicator, report concordance, for a total of 15 EHRs usage indicators. The final set of indicators fell into four categories, namely (Fig 1 and Table 3):
- System Use—these indicators are used to identify how actively the EHRs is being used based on the amount of data, number of staff using the system, and uptime of the system.
- Data Quality—these indicators highlight the proportion and timeliness of relevant clinical data entered into the EHRs. They also reflect how well the EHRs data capture an accurate clinical picture of the patient.
- Interoperability—given that a major perceived role of EHRs is to improve sharing of health data, these indicators are used to measure maturity level of implemented EHRs to support interoperability.
- Reporting—aggregation and submission of reports is a major goal of the implemented EHRs, and these indicators capture how well the EHRs are actively used to support the various reporting needs.
As part of the NGT exercise, the details of each indicator were also refined. S3 Appendix presents the detailed EHRs MER document, with the agreed details for each indicator. In this document, we also highlight the changes that were suggested for each indicator as part of the NGT discussions.
Indicator ranking
The score and rank procedure generated a prioritized consensus-based list of EHRs use indicators, ranked from 1 (highest rated) to 15 (lowest rated). As such, a low average (mean) score meant that the particular indicator was, on average, rated higher by the NGT participants. Table 4 presents the ordered ranking of the indicators as rated by nine of the NGT participants; one participant was absent during this NGT activity. The Data Entry Statistics and System Uptime indicators were considered to be the most relevant in determining EHRs usage, while the Reporting Rate indicator was rated as least relevant.
Indicator implementation sequence
Nine of the 15 indicators were recommended for implementation in the first phase of the indicator tool rollout, while the other six were recommended for Phase 2 (Table 5). The implementation sequence largely aligns with the indicator priority ranking by the participants (Table 4). The indicators proposed for Phase 1 implementation are a blend from the four indicator categories but are dominated by the System Use category.
Discussion
To the best of our knowledge, this is the first set of systematically developed indicators to evaluate the actual status of EHRs usage once an implementation is in place within LMIC settings. At the completion of the modified NGT process, we identified 15 potential indicators for monitoring and evaluating the status of actual EHRs use. These indicators take into consideration constraints within LMIC settings, such as system availability, human resource constraints, and infrastructure needs. Ideally, an IS implementation is considered successful if the system is available to the users whenever and wherever it is needed [46]. Clear measures of system availability, use, data quality, and reporting capabilities will ensure that decision makers have clear and early visibility into successes and challenges facing system use. Further, the developed indicators allow for aggregation of usage measures to evaluate the performance of systems by type, region, facility level, and implementing partner.
An important consideration for these indicators is the source of measure data. Most published studies on evaluating the success of information systems focus on IS use indicators or variables such as ease of use, frequency of use, extent of use, and ease of learning, mostly evaluated by means of self-reporting tools (questionnaires and interviews) [19,39,47]. As such, the resulting data can be subjective and prone to bias. We tailored our indicators to ensure that most can be computer-generated through queries, hence incorporating objectivity into the measurement. However, a few of the indicators, such as data entry statistics as well as those on concordance (variable concordance and report concordance), derive measure data from facility records in addition to computer log data.
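As an illustration of a computer-generated measure, the sketch below counts data entry events per day from an exported audit log. The file name, column names, and action label are assumptions made for illustration; real EHRs log formats are system-specific.

```python
import csv
from collections import Counter

# Hypothetical audit log export with columns: timestamp, user_id, action.
# This only illustrates the principle of deriving objective use measures
# from computer-recorded data rather than self-report.
daily_entries = Counter()
with open("audit_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["action"] == "form_saved":       # assumed label for a data entry event
            day = row["timestamp"][:10]          # YYYY-MM-DD
            daily_entries[day] += 1

for day in sorted(daily_entries):
    print(day, daily_entries[day])
```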
Although the NGT expert panel was national, we are convinced the emerging results are of global interest. First, we developed the indicators in line with the internationally recognized PEPFAR Monitoring, Evaluation, and Reporting (MER) Indicator Reference Guide [35]. Second, the development process was mainly based on methodological criteria that are valid everywhere [48,49]. Furthermore, the indicators are not system-specific and hence can be used to evaluate usage of other types of EHRs, including other clinical information systems such as laboratory, radiology, and pharmacy systems. However, we recognize that differences exist in systems’ database structures; hence, the queries to determine indicator measure data from within each system will need to be customized and system-specific. It is also important to point out that these indicators are not based on real-time measures and can be applied both to point-of-care and non-point-of-care systems.
The selected set of indicators has a high potential to determine the status of EHRs implementations, considering that the study participants rated all five SMART dimensions high (over 70%) across all the indicators. Further, the indicator reference guide provides details on “how to collect” and the sources of measure data for each indicator (S3 Appendix). This diminishes the level of ambiguity in regard to the measurability of the indicators. Nonetheless, some of the indicators require countries to define their own thresholds and reporting frequencies. For instance, a country would need to define the acceptable time duration within which a clinical encounter should be entered into the EHRs for that encounter to be considered as having been entered in a timely fashion. As such, the indicator and reference guide need to be adapted to the specific country and use context. Despite the low overall ratings of the staff system use and observations indicators (4.4 and 3.8, respectively), they were included in the final list of indicators after consensus-based discussions as part of the NGT exercise. We believe this is due to the indicators’ direct role in determining system usage and the fact that they were scored highly in the SMART assessment. Further assessment with a wider group of intermediate system users would be beneficial to estimate the value of the indicators in question before dismissing them as irrelevant.
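To make the threshold idea concrete, a timeliness measure could be computed with a single query once a country fixes its threshold. The sketch below assumes a hypothetical SQLite table encounter(encounter_date, date_entered) and a 7-day threshold, both invented for illustration; actual schemas and thresholds would be country- and system-specific.

```python
import sqlite3

# Hypothetical schema: encounter(encounter_date, date_entered).
conn = sqlite3.connect("ehr.db")
TIMELINESS_DAYS = 7  # illustrative country-defined threshold for "timely" entry

timely, total = conn.execute(
    """
    SELECT
      SUM(CASE WHEN julianday(date_entered) - julianday(encounter_date)
               <= ? THEN 1 ELSE 0 END) AS timely,
      COUNT(*) AS total
    FROM encounter
    WHERE encounter_date BETWEEN '2019-01-01' AND '2019-03-31'
    """,
    (TIMELINESS_DAYS,),
).fetchone()

print(f"Timely entry: {timely}/{total} = {100 * timely / max(total, 1):.1f}%")
```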
This study has several limitations. It was based on a multidisciplinary panel of 10 experts, which is adequate for most NGT exercises but is still a limited number of individuals who might not reflect all perspectives. On average, 5–15 participants per group are recommended, depending on the nature of the study [50,51]. The low ranking of the Data Exchange and Standardized Terminologies indicators suggests that the participants might have limited knowledge or appreciation of certain domains and their role in enhancing system use. Further, all participants were drawn from one country. Nevertheless, a notable strength was the incorporation of participants from more than one EHRs (the KenyaEMR and IQCare systems) and a diverse set of expertise. In addition, the derived indicators do not assess the "satisfaction of use" dimension outlined in the DeLone & McLean model [39], and future work should extend the indicators to explore this dimension.
A next step in our research is to conduct an evaluation of the actual system use status of an information system rolled out nationally, using the developed set of indicators. We will also evaluate the real-world challenges of implementing the indicators and refine them based on the findings. We also anticipate sharing these indicators with a global audience for input, validation, and evaluation. We are cognizant of the fact that the indicators and reference guides are living documents that are bound to evolve over time, given the changing nature of the IS field and the maturity of EHRs implementations.
Conclusion
An NGT approach was used to generate and prioritize a list of consensus-based indicators to assess the actual EHRs usage status in Kenya; the indicators are, however, applicable to other LMICs and similar contexts. This list of indicators can allow for monitoring and aggregation of EHRs usage measures to ensure that appropriate and timely actions are taken at institutional, regional, and national levels to assure effective use of EHRs implementations.
Supporting information
S3 Appendix. Monitoring, Evaluation and Reporting (MER v1.0): Electronic Health Record (EHR) system usage indicator reference guide.
https://doi.org/10.1371/journal.pone.0244917.s003
(DOCX)
Acknowledgments
The authors would like to acknowledge the US Centers for Disease Control and Prevention (CDC) for providing input into the candidate set of indicators. We also appreciate the insights and contributions of all the workshop participants drawn from CDC-Kenya, the Kenyan Ministry of Health, Palladium (EHRs development partners), EHRs implementing partners, Moi University, and EHRs users.
References
- 1. Ludwick DA, Doucette J. Adopting electronic medical records in primary care: Lessons learned from health information systems implementation experience in seven countries. Int J Med Inform. 2009;78(1):22–31. pmid:18644745
- 2. Blaya JA, Fraser HSF, Holt B. E-health technologies show promise in developing countries. Health Aff. 2010;29(2):244–51. pmid:20348068
- 3. Hillestad R, Bigelow J, Bower A, Girosi F, Meili R, Scoville R, et al. Can electronic medical record systems transform health care? Potential health benefits, savings, and costs. Health Aff. 2005;24(5):1103–17.
- 4. Laudon KC, Laudon JP. Information Systems, Organizations, and Strategy. In: Management Information Systems: Managing the Digital Firm. 2015. p. 81–123.
- 5. Akanbi MO, Ocheke AN, Agaba PA, Daniyam CA, Agaba EI, Okeke EN, et al. Use of Electronic Health Records in sub-Saharan Africa: Progress and challenges. J Med Trop. 2012;14(1):1–6. pmid:25243111
- 6. PEPFAR. The U.S. President’s Emergency Plan for AIDS Relief: Seventh Annual Report to Congress. 2019.
- 7. Tierney WM, Sidle JE, Diero LO, Sudoi A, Kiplagat J, Macharia S, et al. Assessing the impact of a primary care electronic medical record system in three Kenyan rural health centers. J Am Med Informatics Assoc. 2016;23(3):544–52. pmid:26260246
- 8. Ngugi PN, Gesicho MB, Babic A, Were MC. Assessment of HIV Data Reporting Performance by Facilities During EMR Systems Implementations in Kenya. Stud Health Technol Inform. 2020;272:167–70. pmid:32604627
- 9. Zlabek JA, Wickus JW, Mathiason MA. Early cost and safety benefits of an inpatient electronic health record. J Am Med Informatics Assoc. 2011;18(2):169–72. pmid:21292703
- 10. Singer A, Yakubovich S, Kroeker AL, Dufault B, Duarte R, Katz A. Data quality of electronic medical records in Manitoba: Do problem lists accurately reflect chronic disease billing diagnoses? J Am Med Informatics Assoc. 2016;23(6):1107–12. pmid:27107454
- 11. Wang SJ, Middleton B, Prosser LA, Bardon CG, Spurr CD, Carchidi PJ, et al. A cost-benefit analysis of electronic medical records in primary care. Am J Med. 2003;114(5):397–403. pmid:12714130
- 12. Odekunle FF, Odekunle RO, Shankar S. Why sub-Saharan Africa lags in electronic health record adoption and possible strategies to increase its adoption in this region. Int J Health Sci (Qassim). 2017;11(4):59–64. pmid:29085270
- 13. Ngugi P, Were MC, Babic A. Facilitators and Barriers of Electronic Medical Records Systems Implementation in Low Resource Settings: A Holistic View. Stud Health Technol Inform. 2018;251:187–90. pmid:29968634
- 14. Khalifa M. Barriers to health information systems and electronic medical records implementation a field study of Saudi Arabian hospitals. Procedia Comput Sci [Internet]. 2013;21:335–42. Available from: http://dx.doi.org/10.1016/j.procs.2013.09.044
- 15. Farzianpour F, Amirian S, Byravan R. An Investigation on the Barriers and Facilitators of the Implementation of Electronic Health Records (EHR). Health (Irvine Calif). 2015;7(December):1665–70.
- 16. Sood SP, Nwabueze SN, Mbarika VWA, Prakash N, Chatterjee S, Ray P, et al. Electronic medical records: A review comparing the challenges in developed and developing countries. In: Proceedings of the Annual Hawaii International Conference on System Sciences. 2008.
- 17. Jawhari B, Ludwick D, Keenan L, Zakus D, Hayward R. Benefits and challenges of EMR implementations in low resource settings: A state-of-the-art review. BMC Med Inform Decis Mak [Internet]. 2016;16(1):1–12. Available from: http://dx.doi.org/10.1186/s12911-016-0354-8 pmid:27600269
- 18. Abraham C, Junglas I. From cacophony to harmony: A case study about the IS implementation process as an opportunity for organizational transformation at Sentara Healthcare. J Strateg Inf Syst [Internet]. 2011;20(2):177–97. Available from: http://dx.doi.org/10.1016/j.jsis.2011.03.005
- 19. Landis-Lewis Z, Manjomo R, Gadabu OJ, Kam M, Simwaka BN, Zickmund SL, et al. Barriers to using eHealth data for clinical performance feedback in Malawi: A case study. Int J Med Inform. 2015;84(10):868–75. Available from: http://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC4841462&blobtype=pdf
- 20. Zviran M, Erlich Z. Measuring IS User Satisfaction: Review and Implications. Commun Assoc Inf Syst. 2003;12(12):81–103. Available from: http://aisel.aisnet.org/cais/vol12/iss1/5
- 21. Boonstra A, Broekhuis M. Barriers to the acceptance of electronic medical records by physicians: from systematic review to taxonomy and interventions. BMC Health Serv Res. 2010;10. pmid:20691097
- 22. Barkhuysen P, De Grauw W, Akkermans R, Donkers J, Schers H, Biermans M. Is the quality of data in an electronic medical record sufficient for assessing the quality of primary care? J Am Med Informatics Assoc. 2014;21(4):692–8.
- 23. Kihuba E, Gheorghe A, Bozzani F, English M, Griffiths UK. Opportunities and challenges for implementing cost accounting systems in the Kenyan health system [Internet]. Vol. 9, Global Health Action. 2016. Available from: https://www.tandfonline.com/doi/full/10.3402/gha.v9.30621 pmid:27357072
- 24. DeLone WH, McLean ER. Information Systems Success Measurement. Found Trends Inf Syst. 2016;2(1):1–116.
- 25. Van Der Meijden MJ, Tange HJ, Troost J, Hasman A. Determinants of Success of Inpatient Clinical Information Systems: A Literature Review. J Am Med Inform Assoc. 2003;10(3):235–43. pmid:12626373
- 26. Erlirianto LM, Ali AHN, Herdiyanti A. The Implementation of the Human, Organization, and Technology-Fit (HOT-Fit) Framework to Evaluate the Electronic Medical Record (EMR) System in a Hospital. Procedia Comput Sci [Internet]. 2015;72:580–7. Available from: http://dx.doi.org/10.1016/j.procs.2015.12.166
- 27. Ammenwerth E, Gräber S, Herrmann G, Bürkle T, König J. Evaluation of health information systems—Problems and challenges. Int J Med Inform. 2003;71(2–3):125–35. pmid:14519405
- 28. Heeks R. Health information systems: Failure, success and improvisation. Int J Med Inform. 2006;75(2):125–37. pmid:16112893
- 29. Prijatelj V. Success factors of hospital information system implementation: What must go right? Vol. 68, Studies in Health Technology and Informatics. 1999. p. 197–200. pmid:10724868
- 30. Seddon PB. A Respecification and Extension of the DeLone and McLean Model of IS Success. Inf Syst Res. 1997;8(3):240–53.
- 31. Yusof MM, Paul RJ, Stergioulas LK. Towards a Framework for Health Information Systems Evaluation. In: Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS’06). 2006. p. 95a. Available from: http://ieeexplore.ieee.org/document/1579480/
- 32. Cuellar MJ, McLean ER, Johnson RD. The measurement of information system use: Primary considerations. In: Proceedings of the 2006 ACM SIGMIS CPR Conference on Computer Personnel Research (SIGMIS CPR ’06). 2006. p. 164–8. Available from: http://dl.acm.org/citation.cfm?id=1125170.1125214
- 33. Szajna B. Determining information system usage: Some issues and examples. Inf Manag. 1993;25(3):147–54.
- 34. Eslami Andargoli A, Scheepers H, Rajendran D, Sohal A. Health information systems evaluation frameworks: A systematic review. Int J Med Inform [Internet]. 2017;97:195–209. Available from: http://dx.doi.org/10.1016/j.ijmedinf.2016.10.008 pmid:27919378
- 35. PEPFAR. Monitoring, Evaluation, and Reporting (MER 2.0) Indicator Reference Guide. 2017.
- 36. Straub D, Limayem M, Karahanna-Evaristo E. Measuring System Usage: Implications for IS Theory Testing. Manage Sci [Internet]. 1995;41(8):1328–42. Available from: http://pubsonline.informs.org/doi/abs/10.1287/mnsc.41.8.1328
- 37. Boland MR, Trembowelski S, Bakken S, Weng C. An Initial Log Analysis of Usage Patterns on a Research Networking System. Clin Transl Sci. 2012;5(4):340–7. pmid:22883612
- 38. Iivari J. An empirical test of the DeLone-McLean model of information system success. ACM SIGMIS Database [Internet]. 2005;36(2):8–27. Available from: http://portal.acm.org/citation.cfm?doid=1066149.1066152
- 39. DeLone WH, McLean ER. Information systems success revisited. In: Proceedings of the 35th Hawaii International Conference on System Sciences (HICSS). 2002. p. 238–49. Available from: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=994345
- 40. Bours D. A good start with S.M.A.R.T. (indicators). Adaptation and Resilience M & E. 2014.
- 41. Delp P, Thesen A, Motiwalla J, Seshadri N. Nominal Group Technique. In: Systems Tools for Project Planning. Bloomington, Indiana: International Development Institute; 1977.
- 42. Gallagher M, Hares T, Spencer J, Bradshaw C, Webb I. The nominal group technique: A research tool for general practice? Fam Pract. 1993;10(1):76–81. pmid:8477899
- 43. Delbecq AL, Van de Ven AH, Gustafson DH. Group Techniques for Program Planning: A Guide to Nominal Group and Delphi Processes. Glenview, Illinois: Scott, Foresman and Company; 1975. pmid:126959
- 44. I-TECH. Health Information Systems in Kenya [Internet]. 2019 [cited 2019 Jan 18]. Available from: www.go2itech.org/2017/08/health-information-systems-in-kenya/
- 45. Palladium Group International [Internet]. 2018 [cited 2018 Oct 20]. Available from: https://en.m.wikipedia.org/wiki/Palladium_International
- 46. Xu J, Quaddus M. Managing Information Systems: Ten Essential Topics. 2013. p. 27–41. Available from: http://link.springer.com/10.2991/978-94-91216-89-3
- 47. Dirksen CD. Nominal group technique to select attributes for discrete choice experiments: an example for drug treatment choice in osteoporosis. 2019.
- 48. Berhe M, Tadesse K, Berhe G, Gebretsadik T. Evaluation of Electronic Medical Record Implementation from User’s Perspectives in Ayder Referral Hospital Ethiopia. J Health Med Informatics. 2017;8(1):1–13. Available from: https://www.omicsonline.org/open-access/evaluation-of-electronic-medical-record-implementation-from-usersperspectives-in-ayder-referral-hospital-ethiopia-2157-7420-1000249.php?aid=85647
- 49. Despont-Gros C, Mueller H, Lovis C. Evaluating user interactions with clinical information systems: A model based on human-computer interaction models. J Biomed Inform. 2005;38(3):244–55. pmid:15896698
- 50. Harvey N, Holmes CA. Nominal group technique: An effective method for obtaining group consensus. Int J Nurs Pract. 2012;18(2):188–94. pmid:22435983
- 51. Lennon R, Glasper A, Carpenter D. Nominal Group Technique: Its utilisation to explore the rewards and challenges of becoming a mental health nurse, prior to the introduction of the all graduate nursing curriculum in England. Working Papers in Health Sciences 1:2, ISSN 2051-6266/20120000. 2012. Available from: http://www.southampton.ac.uk/assets/centresresearch/documents/wphs/NominalGroupTechnique.pdf