
Electronic maternal and child health application usability, feasibility and acceptability among healthcare providers in Amhara region, Ethiopia

  • Esubalew Alemneh,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Project administration, Supervision, Writing – review & editing

    Affiliation ICT4D Research Center, Bahir Dar Institute of Technology, Bahir Dar University, Bahir Dar, Ethiopia

  • Tegegn Kebebaw,

    Roles Conceptualization, Funding acquisition, Investigation, Project administration, Software, Writing – review & editing

    Affiliation ICT4D Research Center, Bahir Dar Institute of Technology, Bahir Dar University, Bahir Dar, Ethiopia

  • Dabere Nigatu,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing

    daberen@yahoo.com

    Affiliation School of Public Health, College of Medicine and Health Sciences, Bahir Dar University, Bahir Dar, Ethiopia

  • Muluken Azage,

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Writing – original draft, Writing – review & editing

    Affiliation School of Public Health, College of Medicine and Health Sciences, Bahir Dar University, Bahir Dar, Ethiopia

  • Eyaya Misgan,

    Roles Conceptualization, Investigation, Project administration, Validation, Writing – review & editing

    Affiliation School of Medicine, College of Medicine and Health Sciences, Bahir Dar University, Bahir Dar, Ethiopia

  • Enyew Abate

    Roles Conceptualization, Investigation, Project administration, Validation, Writing – review & editing

    Affiliation School of Medicine, College of Medicine and Health Sciences, Bahir Dar University, Bahir Dar, Ethiopia

Abstract

An innovative electronic Maternal and Child Health (eMCH) application was developed to support operational and clinical decision-making in maternal and child health services. End-user-based evaluation of an eHealth application is a critical step to ascertain how successfully users can learn and use it, and to improve the technology. Therefore, this study aimed to evaluate the usability, feasibility, and acceptability of the eMCH tool among healthcare providers (HCPs) in the Amhara region, Ethiopia. A cross-sectional study was conducted among HCPs working in six public healthcare facilities. The usability evaluation was done with 24 HCPs across three professional categories using the ISO 9241–11 usability guideline. One hundred nine HCPs participated in the feasibility and acceptability study. Data were collected using a standard usability tool, a think-aloud protocol, a self-administered questionnaire, and the Open Broadcaster Software Studio version 26.1.1 video recorder. Descriptive statistics were used to describe the data. A Kruskal-Wallis test was used to measure the association between mean scores and HCP categories. The recorded videos were analyzed using the log file analysis method. None of the HCP categories were able to complete all the tasks without errors. The average numbers of errors and restarts were 7.5 and 2.8, respectively, and the average number of restarts was directly proportional to the average number of errors. The participants successfully completed more than 70% of the tasks without requiring any assistance or guidance. Forty-seven comments or errors were identified from the think-aloud analysis and 22 comments from the usability metrics analysis. Overall, statistically significant performance differences were observed among the three HCP groups across the majority of the usability evaluation metrics. Fifty-seven percent of HCPs scored higher than the mean feasibility score, and slightly more than half, 56 (51.4%), of the HCPs scored higher than the mean acceptability score. The usability evaluation identified vital comments and usability flaws that are essential for upgrading the eMCH tool. The tool was feasible and acceptable as reported by end-users. Therefore, the errors and usability flaws of the tool should be fixed before deployment to other healthcare settings.

Author summary

The study team developed an electronic Maternal and Child Health (eMCH) application to be used by healthcare providers (HCPs) serving mothers and their babies. The participation of end-users in the design and evaluation of a new application is a vital step toward producing a contextual, acceptable, and easy-to-use application. Here, we evaluated the usability, feasibility, and acceptability of the eMCH application among healthcare providers. Twelve tasks were given to twenty-four healthcare providers selected from three provider categories, who had to use the eMCH application to complete them. The study revealed that none of the HCP categories were able to complete all the tasks without any mistakes. The number of eMCH application restarts was directly proportional to the number of errors committed. The study also identified many vital comments and usability flaws that were essential for upgrading the eMCH application. The results indicate that the comments and usability flaws of the application should be fixed before deployment to other healthcare institutions.

Introduction

Digital health, which comprises emerging technologies and eHealth (such as mHealth, telemonitoring, video consultations, and electronic health records), plays a crucial role in advancing the core principle of primary health care (PHC): a people-centered and integrated health service delivery model [1–3]. Many low- and middle-income countries (LMICs) are leveraging advances in digital health technologies to improve and maintain the continuity of service delivery post-COVID-19 by innovating, testing, evaluating, and, in some instances, integrating digital health solutions into PHC settings [4,5].

The use of digital health is also recognized and recommended as a vital intervention to guide and support clinical and operational decision-making across the health system, to reduce inequality, and to increase universal health coverage [6]. Digital health may improve healthcare quality across different areas of intervention, such as patient safety, access to healthcare, effective treatment, efficient use of resources, equity of care across population subgroups, and patient-centered care [6–8]. The rapid rise in digital health adoption may also have made it challenging to establish the extent of digital health's impact on quality healthcare [5,8]. The challenges associated with digital health include ethical concerns about information security, the acceptability of new digital tools and practices, and whether the technology really is practical and feasible to implement in real practice [9]. The end-users' (i.e., healthcare providers') perspective is a key component in the design and deployment of eHealth applications in healthcare facilities [10].

The study team developed an innovative eMCH (electronic Maternal and Child Health) application, an integrative, interoperable, and vendor-neutral digital health platform to support operational and clinical decision-making in maternal and child health (MCH) services. The tool transformed the current paper-based information management system and service provision into a digital system. As the eMCH tool stands now, its primary end-users are healthcare providers (HCPs) deployed in MCH units. In the eHealth development process, measuring the usability, acceptability, and feasibility of a new tool among end-users is a vital step to determine how successfully users can learn and use the tool to accomplish their goals, and to identify opportunities to improve the technology. Therefore, this study aimed to evaluate the usability, acceptability, and feasibility of the eMCH application among HCPs in the Amhara region, Ethiopia. Addressing users' comments and usability flaws will help produce a usable and acceptable eMCH tool that ultimately improves the quality of care and reduces maternal and child mortality and morbidity.

Methods and materials

Study design and setting

A cross-sectional study was conducted from February 24, 2022 to March 08, 2022. The study involved healthcare providers working in 6 public health facilities (3 hospitals and 3 health centers) in the Amhara region, Ethiopia. The facilities, located in urban, peri-urban, and rural settings, were selected from Bahir Dar city, Bahir Dar Zuria district, and North Mecha district. Healthcare providers working in the MCH units of these facilities participated in the study. The professional categories involved were midwives, integrated emergency surgery officers (IESOs), and medical doctors.

Sample size and sampling procedure

All HCPs working in the MCH units of the six public health facilities were considered for the feasibility and acceptability study. For the usability evaluation, representatives of real users were selected to conduct user-based usability testing. Scholarly studies have suggested different numbers of users for usability testing [11–14]. Up to 80% of usability problems can be detected by involving 4 to 5 participants, and the benefit/cost ratio declines as more participants are added [12]. Scholars in the field have recommended that 15 to 20 study participants are adequate to detect all usability problems in user-based usability testing [11,13]. Splitting usability testers into different groups has also been found advantageous in usability testing [15]. Thus, 24 evaluators from the three categories of health professionals were involved in the usability evaluation. An initial screening questionnaire was used to identify participants who were fit for the usability testing process.

Digital health tool description

The eMCH application development process began in 2020 with a team of experts comprising ICT professionals, clinicians, and public health professionals. The eMCH application, a quality improvement tool, was designed to be used at the point of care by healthcare providers. The tool includes a decision support system to enhance healthcare providers' clinical decisions. The eMCH tool covers the maternal continuum of care components: antenatal care (ANC), delivery care including the ePartograph (electronic partograph), and postnatal care (PNC). The tool transformed the paper-based standard of care into electronic form to guide and support maternal and newborn healthcare services. A detailed description of the ePartograph tool is presented in Kebebaw et al. 2021 [16]. A responsive web design approach was followed in developing the system so that the application can run on devices of different screen sizes. The usability problems discovered in a heuristic-based usability evaluation were corrected before the current end-user testing. A snapshot of a filled ePartograph is given below (Fig 1). Additional interfaces from the application are provided as supporting information files (see S1 Fig, S2 Fig and S3 Fig).

Data collection tools and procedure

A self-administered questionnaire with a five-point Likert scale was used to measure the feasibility and acceptability of the eMCH application among healthcare providers. The scale was developed by reviewing the literature, and the data were collected using a self-administered English-version questionnaire. The measurement tool had 23 items: 10 items assessed the feasibility of the eMCH tool and 13 items assessed its acceptability. Six investigator team members facilitated the data collection procedure.

For the usability evaluation, the think-aloud protocol, observation, log analysis, and questionnaires were applied to capture the data. The usability evaluation framework shown in the diagram below (Fig 2) guided the evaluation. End user-based usability was assessed in terms of efficiency, effectiveness, and satisfaction. Evaluation participants executed a set of tasks on the eMCH application using a think-aloud approach, while facilitators observed, guided, and monitored the overall evaluation process.

Fig 2. Usability evaluation framework.

IESOs: Integrated emergency surgery officers; eMCH: electronic Maternal and Child Health.

https://doi.org/10.1371/journal.pdig.0000494.g002

The eMCH tool can run on devices of different screen sizes; for this study, however, 7-inch tablets with a screen size of 7.5” x 4.75” were used. In think-aloud testing, also known as the think-aloud protocol, evaluators speak aloud about their thinking throughout the evaluation process. The think-aloud approach is the most widely used usability testing method [17]. It is known to give a detailed understanding of usability problems, as it helps identify the actual difficulties end-users encounter and the causes of the underlying problems. The think-aloud method stems from the field of cognitive psychology and is a flexible, cheap, easy-to-learn, and convincing usability evaluation method [18,19]. Today, the think-aloud protocol is a well-recognized usability evaluation method for capturing data on user experiences of electronic applications [20–23]. Open Broadcaster Software (OBS) Studio version 26.1.1 was used to record the end-users' responses and their interactions while completing the tasks. The software was launched from the home screen and the recording mode was selected from the virtual camera screen. When a user was ready, the facilitators hit the start-recording button; the software immediately hid its own interface and began displaying the evaluator's desktop screen. The OBS tool recorded every action, including voice.

Two sets of questionnaires were used to gather information about participants and their experience of the system. The first questionnaire contained open-ended questions on demographic information and computer skills and usage. The second questionnaire contained the standard task-level satisfaction question (i.e., the Single Ease Question (SEQ)) [24] and test-level satisfaction questions (i.e., the System Usability Scale (SUS)) [25].

The usability evaluation was conducted in two sessions in the presence of four domain experts and two usability experts serving as facilitators. Midwives and general practitioners formed separate sessions, and IESOs were evenly distributed between the two groups. Two domain experts and one usability expert were assigned to each session. The facilitators delivered a 30-minute orientation to the usability evaluators, covering the application, the task scenarios, and the usability testing process. The evaluation started after the orientation. As a first step, the participants filled in their demographic information as well as computer skills and usage information. Next, they went through the task scenarios and executed them while uttering their views as they moved through the user interface. The software captured every interaction of the evaluators with the system. The OBS recorder was re-launched, either by the participant or with the help of the facilitators, whenever a participant accidentally closed the software.

During the testing sessions, mobile devices were silenced to avoid distraction, and the two evaluation sessions were conducted in a controlled environment. Twelve tasks to be carried out using the eMCH application were given to participants. The tasks were proposed based on their relevance to MCH care professionals' daily practice and were categorized into two roles with detailed scenario descriptions. Some of the task scenarios are portrayed in Table 1. The users filled in the post-test questions after completing the tasks. The usability evaluation questionnaires used are valid and reliable instruments for measuring user satisfaction [26].

Table 1. Tasks for usability evaluation of eMCH application by healthcare providers in Amhara region, Ethiopia.

https://doi.org/10.1371/journal.pdig.0000494.t001

Study variables and metrics

In this study, usability was measured following the ISO 9241–11 usability framework (Fig 3). Usability metrics involve the measurement of usability attributes such as effectiveness, efficiency, and satisfaction of users of a product [27]. Efficiency was measured in terms of the time taken to complete the tasks. Effectiveness was measured in terms of the task completion rate (success score), the number of errors an evaluator committed, and the number of restarts needed to complete a task. The study used the two most commonly used satisfaction assessment scales: the SUS and the SEQ. The SUS is a 5-point Likert scale comprising 10 questions used to assess test-level usability and learnability of a product, while the SEQ is a 7-point Likert scale used to measure task-level satisfaction.
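
For reference, the standard SUS scoring procedure converts the 10 item ratings into a 0–100 score. The sketch below is a minimal Python illustration of that standard formula; it is not part of the study's analysis pipeline.

```python
def sus_score(items):
    """Standard SUS scoring for 10 items rated 1-5.

    Odd-numbered items (positively worded) contribute (rating - 1);
    even-numbered items (negatively worded) contribute (5 - rating).
    The summed contributions are multiplied by 2.5 to yield 0-100.
    """
    assert len(items) == 10
    contributions = [(v - 1) if i % 2 == 0 else (5 - v)
                     for i, v in enumerate(items)]  # i=0 is item 1 (odd)
    return sum(contributions) * 2.5

# Example: one hypothetical respondent's ratings
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```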

The four usability metrics were defined as follows, with a computational sketch after the list:

  • Number of errors: the number of errors a user committed while trying to accomplish a task. Examples of errors included clicking a wrong link or an unclickable image, clicking the save/complete or other buttons before entering/selecting all mandatory fields, entering/selecting wrong values, and double clicks.
  • Task completion time: calculated as the difference between task end time and task start time. Start time was counted from the moment the evaluator finished reading the task scenario, and end time was the moment the user verbally declared that she/he had finished the task.
  • Success score: this metric measures the average successful task completion rate per user. Tasks were completed with no facilitator intervention, with little assistance, or with guidance (detailed directions on how to complete the task).
  • Number of restarts to complete the task: this metric refers to the number of restarts made to complete a task.
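
The following is a minimal sketch of how these four metrics could be computed from per-task records; the record fields are hypothetical and illustrative only, since the study extracted the actual values manually from the screen recordings.

```python
from statistics import mean

# Hypothetical per-task records for one evaluator (illustrative values,
# not the study's data); "outcome" is one of "no_intervention",
# "assistance", or "guidance", matching the success-score definition.
tasks = [
    {"errors": 2, "start": 12.0, "end": 95.5, "restarts": 1,
     "outcome": "no_intervention"},
    {"errors": 0, "start": 98.0, "end": 151.0, "restarts": 0,
     "outcome": "assistance"},
]

avg_errors   = mean(t["errors"] for t in tasks)
avg_time     = mean(t["end"] - t["start"] for t in tasks)  # completion time
avg_restarts = mean(t["restarts"] for t in tasks)
success_rate = sum(t["outcome"] == "no_intervention"
                   for t in tasks) / len(tasks)
```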

In this study, feasibility was measured using 10 questions on a five-point Likert scale. Participants rated each question from 1 to 5: very easy (5), easy (4), undecided (3), difficult (2), very difficult (1). The higher the score, the more feasible the eMCH application. The eMCH application was considered feasible if a participant's score was greater than or equal to the mean value. Similarly, acceptability was measured using 13 questions on a five-point Likert scale. Participants rated each question from 1 to 5: strongly agree (5), agree (4), undecided (3), disagree (2), strongly disagree (1). The higher the score, the more acceptable the eMCH application. The eMCH application was considered acceptable if a participant's score was greater than or equal to the mean value.
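
A minimal sketch of this mean-threshold classification, using hypothetical responses (the variable names are illustrative, not the study's data):

```python
# Each participant's feasibility score is the sum of 10 Likert ratings
# (range 10-50); a participant is classified "feasible" if their total
# is at or above the sample mean. Hypothetical data only.
responses = [
    [4, 4, 5, 3, 4, 4, 5, 4, 3, 4],   # participant 1
    [2, 3, 3, 2, 3, 4, 3, 3, 2, 3],   # participant 2
]
totals = [sum(r) for r in responses]
mean_score = sum(totals) / len(totals)
n_feasible = sum(t >= mean_score for t in totals)
print(f"{n_feasible}/{len(totals)} scored at or above the mean")
```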

Data management and analysis

The feasibility and acceptability data were coded and entered into SPSS version 21. Descriptive statistics, such as means, frequencies, and percentages, were used to analyze the feasibility and acceptability data, and bar graphs and tables were used to present them. Additionally, the Kruskal-Wallis test (a non-parametric test) [28] was used to measure the association between the mean scores of errors and satisfaction and the professional categories.
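
Although the study's analysis was done in SPSS, the sketch below illustrates an equivalent Kruskal-Wallis test (and the kind of pair-wise post-hoc contrast reported in the Results) in Python, with hypothetical error counts:

```python
from scipy import stats

# Hypothetical per-participant error counts by professional group
# (illustrative values, not the study's data).
midwives = [6, 8, 9, 7, 10, 8, 9, 7]
iesos    = [10, 12, 9, 11, 13, 10, 12, 11]
gps      = [4, 5, 6, 5, 4, 6, 5, 4]

# Kruskal-Wallis H-test across the three groups
h_stat, p_value = stats.kruskal(midwives, iesos, gps)

# Pair-wise post-hoc comparison, e.g. midwives vs general practitioners
u_stat, p_pair = stats.mannwhitneyu(midwives, gps, alternative="two-sided")
print(f"H = {h_stat:.2f}, p = {p_value:.3f}; pairwise p = {p_pair:.3f}")
```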

The log file analysis method was used to analyze the recorded videos. The metrics' values were identified through log file analysis, and usability problems were noted. The recorded videos were carefully and independently evaluated by two experts, who extracted the values. Whenever the video reviewers detected discrepancies in the values, the videos were rechecked for the correct values. For efficiency and effectiveness, the analysis was done for each task, taking either the average or the count of the values depending on the type of metric. Tasks involving role switching (recorder to provider or vice versa) were excluded from the analysis.
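
A minimal sketch of the double-extraction check described above, assuming hypothetical keys of (participant, task, metric); any disagreement between the two reviewers' values flags the video for rechecking:

```python
# Each reviewer's independently extracted values, keyed by
# (participant, task, metric). Illustrative data only.
reviewer_a = {("P01", "task1", "errors"): 3, ("P01", "task1", "time"): 84.0}
reviewer_b = {("P01", "task1", "errors"): 3, ("P01", "task1", "time"): 79.5}

# Flag every key where the two extractions disagree for a video recheck.
to_recheck = [k for k in reviewer_a if reviewer_a[k] != reviewer_b.get(k)]
print(to_recheck)  # e.g. [('P01', 'task1', 'time')] -> rewatch that segment
```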

Ethics statement

Ethical approval was obtained from the Bahir Dar University College of Medicine and Health Sciences Institutional Review Board. Support letters were obtained from the Amhara Public Health Institute, and permission letters were sought from all relevant healthcare facilities. In addition, written informed consent was obtained from each study participant. Anonymity and confidentiality of information were ensured throughout the study process.

Results

Sociodemographic characteristics of study participants

The feasibility and acceptability questionnaire was completed by 109 healthcare providers. Forty-five (41.3%) of the healthcare providers were in the age range of 30 to 34 years. Seventy (64.2%) were female, 100 (91.7%) were first-degree holders, and 90 (82.6%) were midwives by profession (Table 2).

Table 2. Sociodemographic characteristics of healthcare providers involved in feasibility and acceptability study, Amhara region, Ethiopia.

https://doi.org/10.1371/journal.pdig.0000494.t002

Twenty-four healthcare providers were involved in the usability evaluation. Of the 24 HCPs, 12 were female and 13 were first-degree holders. The majority (16 HCPs) had intermediate computer skills, while 9 healthcare providers had 2 to 3 hours of exposure to computer use per day (Table 3).

Table 3. Demographic characteristics of usability study participants by professional category, Amhara region, Ethiopia.

https://doi.org/10.1371/journal.pdig.0000494.t003

Feasibility and acceptability of eMCH

The mean feasibility score was 38.7 (standard deviation (SD) = ±5.8). The study revealed that 62 (56.9%) healthcare providers scored higher than the mean. The mean acceptability score was 52.8 (SD = ±6.9). Slightly more than half, 56 (51.4%), of the healthcare providers scored higher than the mean score (Table 4).

Table 4. Feasibility and acceptability of eMCH by healthcare providers in public health facilities, Amhara region, Ethiopia.

https://doi.org/10.1371/journal.pdig.0000494.t004

Usability evaluation

The average numbers of errors and restarts were 7.5 and 2.8, respectively. The IESOs made more slips or errors than the other two professional groups. The Kruskal-Wallis test showed statistically significant differences between professional groups in both the number of errors committed (χ2 = 7.86, P = 0.019) and the number of restarts (χ2 = 11.00, P = 0.004) (Table 5).

Table 5. Average numbers of errors and restarts by professional category, Amhara region, Ethiopia.

https://doi.org/10.1371/journal.pdig.0000494.t005

The study revealed that none of the professional categories were able to complete all the tasks without assistance or guidance; some required guidance to complete the tasks. On the other hand, all the professional groups were able to complete, on average, more than 70% of the tasks without requiring any assistance or guidance. Medical doctors outperformed the other groups in successful task completion (81%). Midwives tended to complete their tasks after some support or assistance (18%, 13 out of 72 instances). Medical doctors were the group that required the least guidance (4%, 3 out of 72 instances). The IESOs, on the other hand, required the most guidance to complete the tasks; more specifically, at least one IESO participant required guidance to accomplish most of the tasks. Task completion rate varied significantly among professional groups (χ2 = 6.78, P = 0.0337) (Fig 4).

Fig 4. Task completion rate of healthcare providers by their profession, Amhara region, Ethiopia.

Guidance refers to when an eMCH user required extensive direction or action from facilitators to accomplish a given task, whereas assistance refers to when an eMCH user required minimal help from facilitators to accomplish a given task. IESOs: Integrated emergency surgery officers; GPs: General practitioners.

https://doi.org/10.1371/journal.pdig.0000494.g004

The results revealed that midwives generally tended to take more time (average completion time 81.5 seconds) to complete most of the tasks, while medical doctors were the fastest group (average completion time 53.5 seconds) for the majority of the tasks. Task 1 and Task 7 took longer because they consist of steps that involve filling in forms. The task completion time varied among professional categories (χ2 = 6.48, P = 0.039). The mean task completion times among midwives, IESOs, and GPs were 24.3, 17.5, and 13.5, respectively. The pair-wise comparison showed a significant difference between midwives and GPs (p = 0.006) (Fig 5).

Fig 5. Average task completion time by professional categories, Amhara region, Ethiopia.

IESOs: Integrated emergency surgery officers; MD: Medical doctor.

https://doi.org/10.1371/journal.pdig.0000494.g005

The results showed that midwives and medical doctors found the tasks easier to use than the IESOs did, although the difference between professional categories was not statistically significant (χ2 = 5.52, P = 0.063) (Fig 6).

Fig 6. Average ease of use of each task by professional categories, Amhara region, Ethiopia.

IESOs: Integrated emergency surgery officers; GPs: General practitioners.

https://doi.org/10.1371/journal.pdig.0000494.g006

Medical doctors scored the highest satisfaction in using the eMCH application, while IESOs scored the lowest. The Kruskal-Wallis test showed a statistically significant difference in satisfaction level among professional groups (χ2 = 6.65, P = 0.036) (Fig 7).

Fig 7. The average SUS result by professional categories, Amhara region, Ethiopia.

IESOs: Integrated emergency surgery officers; GPs: General practitioners.

https://doi.org/10.1371/journal.pdig.0000494.g007

Additionally, 47 comments or errors were identified from the think-aloud analysis and 22 comments from the usability metrics analysis. The nine most frequent comments or errors from the think-aloud recordings are presented in Table 6. The majority (63%) of the evaluators wanted the system to be accessible in their local language, Amharic; in particular, 75% of the midwives preferred accessing the tool in Amharic. Another comment, reiterated by 13/24 (54%) of the evaluators, was that the system should restrict entering subsequent contraction readings before the preceding ones have been entered. Forty-six percent of the participants suggested one decimal place for temperature values, and forty-two percent wanted the tool to display the clinical interventions provided for patients or clients (Table 6).

Table 6. Usability issues obtained by analyzing think-aloud recordings of healthcare providers, Amhara region, Ethiopia.

https://doi.org/10.1371/journal.pdig.0000494.t006

The study also identified various comments or errors from selected usability evaluation tasks, along with suggested remedies for upgrading the tool. The four tasks with the highest average completion times across the three professional groups are summarized in Table 7.

Table 7. Usability issues extracted from usability tasks performed by healthcare providers, Amhara region, Ethiopia.

https://doi.org/10.1371/journal.pdig.0000494.t007

Discussion

The study revealed that none of the user groups were able to complete all twelve experimental tasks without errors. The average number of errors committed was 7.54 across the twelve tasks, about 0.6 per task, which is consistent with a previous study that reported an average of 0.7 errors per task [29]. The study also revealed that the average number of restarts was directly proportional to the average number of errors: those who made more mistakes restarted their tasks more frequently. This indicates that making slips and mistakes when performing tasks is perfectly normal. Although no severe errors were detected, the evaluation surfaced minor errors that need to be addressed. For example, most users made errors when entering date values: they either mixed the Ethiopian calendar with the Gregorian calendar or typed wrong date formats. The lesson is to explicitly state the calendar type and to change the date field from a textbox to a date picker. The study showed that the number of errors detected varied significantly across the study groups, which could be related to the groups' practical partograph usage experience.
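
As an illustration of that lesson, the sketch below shows one hypothetical way an input handler could enforce an explicit calendar type and a strict format; it is not the eMCH implementation.

```python
def validate_visit_date(text: str, calendar: str) -> str:
    """Return 'ok' or an error message; the calendar type must be explicit.

    Hypothetical validator: the Ethiopian calendar has 13 months, so a
    month value of 13 is valid there but not in the Gregorian calendar.
    """
    if calendar not in ("ethiopian", "gregorian"):
        return "calendar type must be stated explicitly"
    parts = text.split("-")
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        return "invalid format, expected YYYY-MM-DD"
    year, month, day = (int(p) for p in parts)
    max_month = 13 if calendar == "ethiopian" else 12
    if not (1 <= month <= max_month):
        return f"month out of range for the {calendar} calendar"
    if not (1 <= day <= 31):
        return "day out of range"
    return "ok"

print(validate_visit_date("2014-13-05", "ethiopian"))   # ok (Pagume month)
print(validate_visit_date("2022-13-05", "gregorian"))   # month out of range
```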

Task completion rate, or success rate, correlates strongly with usability benchmarks for eHealth systems [29]. This study revealed that two of the three groups failed to meet the threshold value for a usable application (78%) suggested in Sauro 2010 [30]. This might be due to unfamiliarity with the eMCH tool. The study assumes that, with some training and experience, users can operate the tool effectively, as no serious usability flaw was detected in the analysis of the participants' think-aloud recordings.

The study showed that all professional groups involved were able to complete, on average, more than 70% of the tasks without requiring any assistance or guidance. Medical doctors had the highest rate of successful task completion, whereas the IESOs required the most guidance to complete the tasks; more specifically, at least one participant required guidance to accomplish most of the tasks. Task completion time naturally depends on factors related to the type and complexity of the task, the participant, the device used, and the context of use. A bad user interface also usually leads to longer task completion times [31], so longer completion times may reveal symptoms of user interface problems. However, the metric does not state exactly where the problems are; further investigation can be undertaken to detect the usability flaws for eventual diagnostic measures in formative tests. For instance, adding autofill and exhaustive value lists for some fields would surely enable users to accomplish tasks like Task 1 and Task 2 faster.

The study showed that the satisfaction rating (SUS) varied by study group, which is consistent with previous studies [32,33]. The study did not, however, find the correlation between SUS and task performance reported in Sauro 2009 [29]. Like other usability metrics, post-test satisfaction metrics convey an overall satisfaction opinion about the application but fail to provide specific symptomatic information. Nevertheless, if the result is very low, user experience practitioners can investigate the tasks completed with assistance and guidance to find user interface problems; for example, they may check whether labels are clear and comprehensible, or whether the tasks are complex.

The results also showed that although midwives were frequent users of partographs, their performance did not reflect their usage experience. This might be due to their computer skills and time spent on computers. On the contrary, despite the fact that IESOs had better computer skills and spent more time on computers, they were the lowest-performing group on all measures, most probably due to a lack of practical experience with partograph usage. Medical doctors performed very well on most of the usability metrics. Listing usability issues from think-aloud protocols remains one of the most effective tools to explain the usability of eHealth [34]. The study identified various errors from selected usability evaluation tasks and uncovered the four tasks with the highest average completion times across the three groups. These were activities that participants found difficult and that were most likely the main reasons for low success rates and high error rates, which is consistent with the previous study [29]. The remedial suggestions also advised upgrading the tool based on the comments received from usability evaluators.

The study has some limitations. First, the eMCH application's usability on small-sized devices (smartphones) remains to be studied, since the current study was conducted on 7-inch tablets with a screen size of 7.5” x 4.75”. Usability evaluation on small-sized devices may facilitate implementation of the eMCH tool on smartphones, since almost all healthcare providers have smartphones, and this has the potential to overcome the observed resource constraints in public healthcare facilities. Second, the sample size may not be adequate for parametric tests.

Conclusion

The usability evaluation revealed vital comments and usability flaws that are essential for upgrading the eMCH tool. Most evaluation metrics showed statistically significant performance differences among the three groups of users. The eMCH tool was found to be feasible and acceptable as reported by end-users. Therefore, the errors and usability flaws of the eMCH tool should be fixed before deployment to other healthcare settings and before considering scale-up.

Supporting information

S1 Fig. Main interfaces of the eMCH application.

https://doi.org/10.1371/journal.pdig.0000494.s001

(PNG)

S2 Fig. ePartograph interface used to enter the parameters for drawing the partograph.

https://doi.org/10.1371/journal.pdig.0000494.s002

(PNG)

S3 Fig. WHO Safe Childbirth Checklist interface.

https://doi.org/10.1371/journal.pdig.0000494.s003

(PNG)

Acknowledgments

This research team acknowledges Bahir Dar Institute of Technology for providing library and internet facilities while we were conducting this research. We are also very grateful to Bahir Dar University College of Medicine and Health Sciences for providing ethical clearance.

References

  1. WHO [World Health Organization]. Global diffusion of eHealth: making universal health coverage achievable: report of the third global survey on eHealth. 2016.
  2. Fatehi F, Samadbeik M, Kazemi A. What is Digital Health? Review of Definitions. Stud Health Technol Inform. 2020;275:67–71. Epub 2020/11/24. pmid:33227742.
  3. WHO. Monitoring and evaluating digital health interventions: a practical guide to conducting research and assessment. 2016.
  4. Mahmoud K, Jaramillo C, Barteit S. Telemedicine in Low- and Middle-Income Countries During the COVID-19 Pandemic: A Scoping Review. Front Public Health. 2022;10:914423. Epub 2022/07/12. pmid:35812479; PubMed Central PMCID: PMC9257012.
  5. Mohd Faeiz P, Siti Norazlina J. Digital Transformation of Healthcare and Medical Education, Within, and Beyond Pandemic COVID-19. Asian Journal of Medicine and Biomedicine. 2020;4(2).
  6. van de Vijver S, Tensen P, Asiki G, Requena-Méndez A, Heidenrijk M, Stronks K, et al. Digital health for all: How digital health could reduce inequality and increase universal health coverage. Digital Health. 2023;9:20552076231185434. Epub 2023/07/12. pmid:37434727; PubMed Central PMCID: PMC10331232.
  7. Mwase C, Nkhoma K, Allsop MJ. The role of digital health in palliative care for people living with HIV in sub-Saharan Africa: A systematic review. Digital Health. 2022;8:20552076221133707. Epub 2022/12/03. pmid:36457812; PubMed Central PMCID: PMC9706081.
  8. Ibrahim MS, Mohamed Yusoff H, Abu Bakar YI, Thwe Aung MM, Abas MI, Ramli RA. Digital health for quality healthcare: A systematic mapping of review studies. Digital Health. 2022;8:20552076221085810. Epub 2022/03/29. pmid:35340904; PubMed Central PMCID: PMC8943311.
  9. Sittig DF, Wright A, Coiera E, Magrabi F, Ratwani R, Bates DW, et al. Current challenges in health information technology-related patient safety. Health Informatics J. 2020;26(1):181–9. Epub 2018/12/13. pmid:30537881; PubMed Central PMCID: PMC7510167.
  10. Li J, Land LPW, Ray P, Chattopadhyaya S. E-Health readiness framework from Electronic Health Records perspective. International Journal of Internet and Enterprise Management. 2010;6(4):326–48.
  11. Alroobaea R, Mayhew PJ, editors. How many participants are really enough for usability studies? 2014 Science and Information Conference; 2014: IEEE.
  12. Virzi RA. Refining the test phase of usability evaluation: How many subjects is enough? Human Factors. 1992;34(4):457–68.
  13. Nielsen J, Landauer TK, editors. A mathematical model of the finding of usability problems. Proceedings of the INTERACT’93 and CHI’93 conference on Human factors in computing systems; 1993.
  14. Faulkner L. Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, Instruments, & Computers. 2003;35:379–83. pmid:14587545
  15. Sarkar U, Gourley GI, Lyles CR, Tieu L, Clarity C, Newmark L, et al. Usability of commercially available mobile applications for diverse patients. Journal of General Internal Medicine. 2016;31:1417–26. pmid:27418347
  16. Kebebaw T, Alemneh E, Azage M, Misgan E, Nigatu D, Abate E, editors. Development and Heuristic-based Usability Evaluation of an e-partograph. 2021 International Conference on Information and Communication Technology for Development for Africa (ICT4DA); 2021: IEEE.
  17. Fernandez A, Insfran E, Abrahão S. Usability evaluation methods for the web: A systematic mapping study. Information and Software Technology. 2011;53(8):789–817.
  18. Jaspers MW. A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence. International Journal of Medical Informatics. 2009;78(5):340–53. pmid:19046928
  19. Jääskeläinen R. Think-aloud protocol. Handbook of Translation Studies. 2010;1:371–4.
  20. Alshehri AA, Alanazi A. Usability study of an electronic medical record from the nurse practitioners’ practice: a qualitative study using the think-aloud technique. Cureus. 2023;15(7).
  21. Jaspers MWM, Steen T, Bos Cvd, Geenen M. The think aloud method: a guide to user interface design. International Journal of Medical Informatics. 2004;73(11):781–95. https://doi.org/10.1016/j.ijmedinf.2004.08.003.
  22. Miller SJ, Sly JR, Gaffney KB, Jiang Z, Henry B, Jandorf L. Development of a tablet app designed to improve African Americans’ screening colonoscopy rates. Transl Behav Med. 2020;10(2):375–83. Epub 2019/02/26. pmid:30799495; PubMed Central PMCID: PMC7237545.
  23. Howell D, Bryant Lukosius D, Avery J, Santaguida A, Powis M, Papadakos T, et al. A Web-Based Cancer Self-Management Program (I-Can Manage) Targeting Treatment Toxicities and Health Behaviors: Human-Centered Co-design Approach and Cognitive Think-Aloud Usability Testing. JMIR Cancer. 2023;9:e44914. Epub 2023/07/21. pmid:37477968; PubMed Central PMCID: PMC10403801.
  24. Sauro J. MeasuringU: 10 things to know about the Single Ease Question (SEQ). 2012. Retrieved March 28, 2019.
  25. Brooke J. SUS: A ‘quick and dirty’ usability scale. In: Jordan PW, Thomas B, Weerdmeester BA, McClelland AL, editors. Usability Evaluation in Industry. London: Taylor and Francis; 1996.
  26. Aggelidis VP, Chatzoglou PD. Hospital information systems: Measuring end user computing satisfaction (EUCS). Journal of Biomedical Informatics. 2012;45(3):566–79. pmid:22426283
  27. Kopanitsa GD, Tsvetkova Z, Veseli H, editors. Analysis of metrics for the usability evaluation of electronic health record systems. EFMI-STC; 2012.
  28. Zeggini E, Morris A. Analysis of complex disease association studies: a practical guide. Academic Press; 2010.
  29. Sauro J, Lewis JR, editors. Correlations among prototypical usability metrics: evidence for the construct of usability. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 2009.
  30. Sauro J. A practical guide to measuring usability. Measuring Usability LLC, Denver. 2010;12.
  31. Sonderegger A, Sauer J. The influence of design aesthetics in usability testing: Effects on user performance and perceived usability. Applied Ergonomics. 2010;41(3):403–10. pmid:19892317
  32. Laubheimer P. Beyond the NPS: Measuring Perceived Usability with the SUS, NASA-TLX, and the Single Ease Question After Tasks and Usability Tests. Nielsen Norman Group; 2018.
  33. Brooke J. SUS: a retrospective. Journal of Usability Studies. 2013;8(2):29–40.
  34. Broekhuis M, van Velsen L, Hermens H. Assessing usability of eHealth technology: a comparison of usability benchmarking instruments. International Journal of Medical Informatics. 2019;128:24–31. pmid:31160008