Abstract
Background
Before the COVID-19 pandemic, universities offered blended learning as a mode of study. However, with the closure of all educational institutions during the pandemic, most of these institutions were required to transition to e-learning to support continuous student learning. This transition was challenging for most institutions, as there were no standards to ensure the quality of e-learning. In this literature review, the researcher aimed to explore relevant literature and provide insight into the standards for undergraduate e-learning programmes in the health professions.
Data sources
The online databases MEDLINE, CINAHL with Full Text, Academic Search Ultimate, APA PsycInfo, ERIC, Health Source: Nursing/Academic Edition, CAB Abstracts, Africa-Wide Information, Sociology Source Ultimate, and Communication and Mass Media Complete were searched.
Materials and methods
Studies pertaining to low- and middle-income countries (LMICs) on standards for evaluating undergraduate e-learning programmes in the health professions, published between January 2010 and June 2022, were considered. A two-step process was followed, involving three reviewers and guided by inclusion criteria focused on the evaluation of undergraduate e-learning programmes in the health professions. The initial search produced 610 articles altogether, and the eight articles that met the inclusion criteria were included in the study. Data were then extracted and analysed, and key themes were identified.
Results
Eight key themes related to LMIC standards emerged from the eight selected articles, including curriculum planning, proficiency of educators, learner proficiency and attitude, infrastructure for learning, support, and evaluation.
Conclusion
In this review, we synthesised standards that have been used for evaluating undergraduate e-learning programmes in the health professions in LMICs. A gap in standards related to clinical teaching and learning in undergraduate e-learning programmes in the health professions was evident in all the included articles. The identification of the eight unique LMIC standards in this review could contribute to guiding the development of contextually appropriate, quality e-learning programmes in the health professions.
Citation: Mutua MM, Nyoni CN (2023) Undergraduate e-learning programmes in health professions: An integrative review of evaluation standards in low- and middle-income countries. PLoS ONE 18(2): e0281586. https://doi.org/10.1371/journal.pone.0281586
Editor: Ayse Hilal Bati, Ege University Faculty of Medicine, TURKEY
Received: July 1, 2022; Accepted: January 26, 2023; Published: February 13, 2023
Copyright: © 2023 Mutua, Nyoni. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
List of abbreviations: LMICs, low- and middle-income countries; COVID-19, Coronavirus disease 2019; PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses; CINAHL, Cumulative Index to Nursing and Allied Health Literature
Introduction
Employers are sceptical about hiring health professionals whose qualifications are obtained through e-learning [1]. Clinical incompetence and poor quality of assessments are often used to support arguments against the employment of such health professionals [2]. These narratives are palatable to the regulators of health professions, who often argue in line with protecting the public from poorly trained and incompetent health professionals [1–3]. However, technological advancements and the recent COVID-19 pandemic have effectuated the rapid uptake of e-learning and distance education into mainstream programmes in the health professions. Strategies need to be in place to evaluate and guarantee quality in these programmes. The focus of this review is on the theoretical components of training in the health professions; inferences are made, however, on the need for integrating clinical practice into e-learning platforms, albeit a challenge in LMICs.
An educational programme is evaluated against standards to determine its quality [1, 3, 4]. The literature describes various models and frameworks used for assessing the quality of e-learning programmes. The Quality Matters (QM) model is a prominent model for evaluating e-learning quality and has generated widespread interest through several studies that support its impact [4]. The QM model applies eight standards in evaluating the quality of e-learning programmes: course overview, learning objectives, assessment and measurement, resources and materials, learning engagement, course technology, learner support, and accessibility. The context, input, process and product (CIPP) model was initially developed by Stufflebeam [5]. Fishbain et al. [6] operationalised it in describing the process of evaluating online programmes, considering structure, complexity, and cultural differences. Furthermore, essential elements in evaluating the quality of e-learning programmes, such as technological infrastructure, institutional support, course design, support of instruction and learning, assessment, educators, administration, and curricular structure, have been reported [4, 7, 8]. Fishbain et al. [6] make a futuristic recommendation related to evaluating e-learning programmes for the health professions by including patient outcomes. The models and frameworks above present a plethora of approaches used to evaluate e-learning programmes, reflecting various areas of convergence and divergence. However, the operationalisation of these models and frameworks within undergraduate e-learning programmes in health professions education is complex.
Undergraduate education programmes in the health professions integrate clinical and theoretical components [1]. The clinical component adds a further layer of complexity in evaluating and maintaining quality in an e-learning programme. The clinical environment where students learn is often distant from the online educator, who relies on clinicians to support students in developing clinical competence [1, 9]. Kolb and Kolb [10] argue that classroom activities must align with the clinical experience to enhance learning and avoid cognitive dissonance, which interferes with learning and hinders competence development [11]. Thus, students enrolled in undergraduate e-learning programmes in the health professions may experience cognitive dissonance in the clinical setting due to differing resources, approaches to healthcare, disease profiles, student support, and online educators who may apply different standards. The literature confirms the influence of cognitive dissonance on perceptions of the quality of educational programmes [12, 13]. Therefore, more specific standards and approaches to determining and ensuring quality may be needed in health professions e-learning programmes, in which students must learn and demonstrate skills and competencies [1]. However, at the time of Delva et al.’s [1] study, there were no reported standards specific to evaluating the quality of undergraduate online nursing programmes and no reported entity in the United States responsible for the oversight and continuous evaluation of the quality of online nursing programmes. The unavailability of explicit strategies for evaluating undergraduate e-learning programmes in the United States may be mirrored in other countries, including low- and middle-income countries (LMICs). LMICs are unique in that their challenges differ from those of high-income countries; hence, context-specific standards would enhance the quality of e-learning.
E-learning provides greater flexibility concerning time, space, language, content, and administrative power than traditional face-to-face learning [14]. Given technological advances in various areas, including education in the health professions, the integration of e-learning and online distance education is inevitable. However, there is a need to understand the determinants of quality standards in e-learning programmes in the health professions, underpinned by the complexity brought on by educators and students, and by variations in resources, regulations, and clinical outcomes. This article presents an integrative review that synthesised evidence related to standards used to determine quality in undergraduate e-learning programmes in the health professions in LMICs. We argue that low-resource settings contribute to the discourse on evaluating quality in undergraduate e-learning programmes in the health professions, which could guide the development of contextually appropriate interventions. Such interventions need to enhance quality and should assure regulators and the public of the competence of health professionals trained through e-learning programmes. The purpose of the current review was to synthesise available literature related to standards for evaluating undergraduate e-learning programmes in the health professions in LMICs. The outcome of this review is a set of standards that can be used in LMICs to evaluate the quality of e-learning, including in undergraduate e-learning nursing programmes.
Materials and methods
This section presents the approach used in the integrative review of the evaluation of undergraduate e-learning programmes in the health professions in LMICs. The purpose of an integrative review is to summarise what is known about a topic and communicate the synthesis of the literature to a targeted community [16, 17]. An integrative review synthesises research and draws conclusions from diverse sources on a topic, providing a more holistic understanding of a specific phenomenon with a focus on the current state of the evidence [17]. A systematic review differs from an integrative review in that it has a single, narrowly focused clinical question formulated in a PICO format, may include meta-analysis, and requires protocol registration [16].
Design
The integrative review approach allowed the inclusion of theoretical and empirical literature of varied methodologies and designs [15–17]. In the sections below, we report the five main steps of the integrative review:
Problem identification
The following research question guided this integrative review:
‘Which standards are used in evaluating undergraduate e-learning programmes in the health professions in LMICs?’
Literature search
The research question was operationalised by identifying keywords and their synonyms, linked through Boolean operators and modifiers to generate a search string. The keywords were ‘evaluating’, ‘e-learning’, and ‘health professions’. The search string was piloted through a ‘quick and dirty’ search and refined with the help of an information specialist. The final search string used in this review was:
(assess* evaluat* measur* licen* accredit*) n3 (“electronic teaching” “online education” “internet-based learning” “blended education” e-learning “Online learning” “Distance learning” “Blended learning” “Computer-assisted instruction” “Computer-assisted learning”) (Policy policies Benchmark* Qualit* criter* Principle* Descriptor* “quality standard*”) (“health science*” Nurs* medic* healthcare)
The search string was used to search for the literature through a university library. The EBSCOhost interface was used to access the databases for this review. The following databases generated 610 hits: MEDLINE, CINAHL with Full Text, Academic Search Ultimate, APA PsycInfo, ERIC, Health Source: Nursing/Academic Edition, CAB Abstracts, Africa-Wide Information, Sociology Source Ultimate, and Communication and Mass Media Complete. The hits generated were presented as titles and abstracts.
Data extraction
A custom Google form generated by the authors was used to extract data from the included articles. The form was piloted on two articles by both authors and refined to their satisfaction. The characteristics extracted from the included articles were author name, study title, country, purpose of the study, population group, study design, and results or outcomes. The evaluation models, data collection and data analysis methods, as well as recommendations and limitations, were also identified (Table 1). The first author extracted data from the included articles, and the second author checked the work; discrepancies were resolved through discussion.
Data analysis
This integrative review was underpinned by the contemporary framework for integrative reviews, which was applied throughout the process of analysing data [17]. Whittemore and Knafl [17] suggest a multi-step process for analysing data for an integrative review. Frequencies related to study characteristics were tallied, while an inductive thematic approach was used to identify standards for evaluating e-learning as presented by the included articles (Table 2).
The identified standards were further synthesised through pattern coding to reflect broader themes and domains related to evaluating e-learning in undergraduate programmes in the health professions.
Results
Selecting the articles
A two-step process involving three reviewers and guided by inclusion criteria was applied in selecting articles for this review. The two authors and an expert in the education of health professions from a low-income country screened and selected the articles. The inclusion criteria were that articles had to focus on the evaluation of undergraduate e-learning programmes in the health professions, had to have been published between January 2010 and December 2020, and had to be from LMICs. Literature was excluded from the review based on the following criteria: literature not related to the health sciences, literature on postgraduate programmes, literature not accessible from the University of the Free State library, and literature dated before 2010. Integrative reviews summarise the latest research, and ten years may be sufficient to provide information that would be useful for future use. The purpose of the criteria was to identify the most appropriate articles to enable us to develop valid and reliable answers to the research question. The primary search strategy identified 610 studies, from which 205 duplicates were excluded through automatic and physical deduplication. Of the remaining 295 records, 269 were eliminated for various reasons, including inappropriate context, different subjects, articles addressing postgraduate education, articles not from LMICs, abstracts for which the librarian could not obtain full-text articles, and articles not written in English. Full-text articles were sought for the 24 remaining records, of which 16 were eliminated as they did not meet the inclusion criteria. A total of eight articles met the criteria and are included in this review (Fig 1).
Study characteristics
Based on the studies’ characteristics, three of the selected articles were from Iran. The other articles were from India, Jordan, and Malaysia. There was a paucity of articles from Africa and South America, where many countries are classified in the low- and middle-income brackets. Three of the studies were published in 2020 and 2021. One article reported on an undergraduate nursing programme [18], while three other articles reported on an undergraduate medical programme [19]. Only a few studies mentioned using models to underpin their evaluation processes, namely the SWOT analysis [20], the Kirkpatrick model [33], and the force field analysis [21]. Most of the studies applied mixed methods research [18, 21, 22], while others used quantitative methods [18, 23] and qualitative methods [20]. Seven domains, each with a specific theme, were generated inductively after engaging with the individual themes from each included article [32]. The seven domains are presented in Table 3.
As expected, a relatively low number of articles were included in this review. This finding could be attributed to the low uptake of e-learning programmes in low-resource settings, given the resource-intensive nature of e-learning programmes [24–26]. Brooks et al. [27] further explain that scepticism around the quality of health profession graduates from e-learning or distance learning programmes hampers national investments in e-learning programmes, with educational institutions preferring face-to-face programmes. In the same vein, research outputs from low-resource settings are often low; hence, there are limited publications in general [2].
Discussion
The purpose of this review was to synthesise available evidence related to standards for evaluating undergraduate e-learning programmes in the health professions in LMICs. The results of the integrative review of standards presented the characteristics of the eight studies and then focused on the seven domains related to the evaluation of undergraduate e-learning programmes in the health professions in LMICs.
The seven evaluation-related domains included curriculum planning, proficiency of the educator, learner proficiency and attitude, infrastructure for learning, support, and evaluation (data in S1 Text). There are similarities between the domains from this review and some of the popular standards used in high-income countries [28]. The similarity in these standards may be due to models and theories related to evaluation that are often transferred from high-income countries to low-income countries under the guise of confirmed validity [20, 22]. In addition, there are reported similarities among undergraduate e-learning programmes in the health professions between high- and low-income countries, supporting the findings from this review on the similarity of evaluation standards.
The domain related to curriculum planning encompasses various aspects of the structure of e-learning programmes. Babadi and Ehraz [23] reflect on the curriculum course plan and course design, educational outcomes and goals, and course content, while Sowan and Jenkins [18] describe assignments as part of curriculum planning. Wasfy et al. [32] describe course specifications linked to the methodology of online teaching and the method of assessment, with well-formulated policies and procedures for online courses, as desired requirements for e-learning programmes. This domain for evaluating undergraduate e-learning programmes in the health professions in LMICs is aligned with the course overview and learning objectives standards as defined by the QM model [1]. The domain further aligns with the curricular structure of Capacho et al. [7] and with course design [8]. Prideaux [29] notes three levels of a curriculum, the planned, prescribed, and implemented curriculum, of which the planned curriculum is often operationalised in educational programmes through specific outcomes, goals, designs, and even assessments. Therefore, the structure of the programme, specifically the curriculum plan, must be included as part of the standards for evaluating undergraduate e-learning programmes in the health professions.
The uptake of e-learning in undergraduate education in the health professions in LMICs has been hampered by the resource-intensive nature of such programmes [25, 30]. The second domain from this review relates to the infrastructure for learning. Several authors described the necessary infrastructure for learning as part of the evaluation standards for their e-learning programmes [20, 22, 23, 31]. This infrastructure included equipment, information technology (IT) and network infrastructure, an independent learning management system, and skilful human resources. In addition, other authors focused on the ease of accessing and using a specific infrastructure for learning [20, 22, 23]. Raghuveer and Nirgude [22] further mention the need for a variety of content material and for compliance with intellectual property principles. Wasfy et al. [32] describe the need for financial resources allocated to online learning. Infrastructure for e-learning is aligned with resources, materials, and course technology as described by the QM model, with technology as described by Capacho et al. [7], and with the technological infrastructure of Kumar et al. [4]. As e-learning is based on the integration of technology in an educational programme, the standards for evaluating e-learning should have a specific focus on technological resources and their use within a programme. Insufficient resources for learning, typical in resource-limited settings, compromise the quality of e-learning programmes.
The proficiency of educators is reported as a domain in this study and focuses on the educators facilitating e-learning. Raghuveer and Nirgude [22] reported that educators must be proficient and competent in facilitating e-learning. The authors of all included studies evaluated the online teaching and delivery skills of educators [18–23]. Moreover, the requirements of educators related to e-learning, including their perceptions and attitudes, were included as part of needs assessments. Additionally, an evaluation of professionalism and ethics during the delivery of content was reported, as well as the ability of educators to match students’ interests with digital tools. The role of educators in implementing e-learning programmes successfully cannot be overemphasised, as educators have the potential to influence the programme negatively. As part of educator development, continuous training on skills for online teaching is essential [32]. Feedback provided by educators should be constructive and timely to foster efficient implementation by learners [33]. The proficiency of the educator domain identified in this study is similar to that reported by Capacho et al. [7], while the QM model and Kumar et al. [4] are silent regarding the educator’s proficiency. In LMICs, educators are often not well versed in advancements in educational technology and may be pressured to copy and paste face-to-face teaching modalities into the online or e-learning space. Strategies need to be designed and implemented that focus on determining educator needs, developing the educator, and continuously monitoring and evaluating educators on their effectiveness in the online space.
The learners enrolled in an e-learning programme must be proficient in learning through digital means and possess the appropriate attitudes [23]. In this review, the domain of learner proficiency and attitude reflected standards used to evaluate learners in the e-learning space. The sub-themes were active and collaborative learning experiences, personalised learning, focus on learning needs, and the interest and motivation of learners to learn through electronic means. Jebraeily et al. [20] further focused on the process of improved self-learning and problem-solving skills. The learner proficiency domain was not reported in the other popular models for evaluating e-learning [4, 7]. Learners are expected to have some degree of proficiency specifically related to the technology used in their learning. Babadi and Ehraz [23], as well as Sowan and Jenkins [18], state that orienting learners to the learning technologies and subsequent support or maintenance of proficiency is an essential element in student learning. Learner proficiency should be an indispensable standard where institutions must focus on the competence of learners regarding learning technologies. Furthermore, Raghuveer and Nirgude [22] mention the role and value of attitudes towards e-learning, as learners should have an interest in the learning approaches. Moreover, a programme for learner training and a plan for academic counselling that is clear, manageable, and executed is required [32]. Just as in the case of educators, the literature by Babadi and Ehraz [23] specifies that the uptake, use and usefulness of such technology are poor when the learners’ interest in the technology is limited, thus compromising the quality of e-learning. Therefore, standards for e-learning should evaluate learner interest in addition to learner proficiency.
Support is understood as specific strategies, techniques and approaches that assist the attainment of a specific outcome. In this review, the included articles reflected the application of support as part of the standards used in evaluating the quality of e-learning programmes. Learners and their educators are expected to be supported during the teaching and learning activities [18, 20, 21]. The support should be from trained technical teams [32]. The support is focused on learners, specifically relating to technical support [21] and instructor support [18]. Jebraeily et al. [20] further mention the role of administrative support and the need for support gleaned from institutional plans and policies that direct the nature and type of support for learners and educators. These findings are aligned with other models for describing quality in e-learning programmes. The QM model refers to learner support and accessibility as a standard for evaluating the quality of e-learning support. Capacho et al. [7] relate to support in general, while Kumar et al. [4] relate to support of instruction and learning and institutional support. All e-learning programmes should support learners, and in the evaluation of quality, the nature and type of support need to be made explicit and should be accessible for learners and educators. The support should facilitate quality e-learning within the expected contextual remits.
The last domain, which focuses on evaluation, provides institutions, programmes, and educators with information on the engagement with learners and the value of their e-learning programmes. In evaluating the quality of e-learning programmes, the studies included in this review reflected evaluation elements using various parameters. Raghuveer and Nirgude [22] relate to the pattern of computer and internet usage, while Mousavi et al. [21] comment on issues related to programme effectiveness, safety, and convenience, and student satisfaction is also evaluated [18]. The integration of internal reviewers to monitor online learning materials and processes, and the data generated from these reviews, drives decisions for continuous improvement [32]. Only Kumar et al. [4] applied standards aligned with evaluation, namely the effectiveness of learning and the satisfaction of learners and educators. The standards related to evaluation emphasise an interplay of process and outcome monitoring [21, 22]. However, there are gaps related to the longitudinal outcomes of undergraduate e-learning programmes in the health professions.
Studies included in this review reported a wide array of standards and indicators of quality within undergraduate e-learning programmes in the health professions [18–21, 23]. However, none of the included articles described evaluation standards related to teaching and learning of clinical skills within undergraduate e-learning programmes in the health professions. The lack of standards for evaluating teaching and learning clinical skills is a significant gap within the literature from LMICs [25]. The generic standards used in undergraduate e-learning programmes in the health professions often miss the intricacies of clinical education [1]. There is a need for the development and integration of standards for evaluating undergraduate e-learning programmes in the health professions that include the evaluation of clinical education. These standards would support education institutions in determining the quality of their programmes and in professing that their graduates are as competent as graduates from face-to-face programmes.
Standards for undergraduate e-learning programmes in the health professions should be distinct from face-to-face programmes. The adoption of standards of face-to-face programmes in undergraduate e-learning is detrimental to the development of e-learning in LMICs [26]. The need for standards for evaluating undergraduate e-learning programmes in the health professions is vital in ensuring quality health professionals. Further research should focus on quality standards in teaching and learning clinical skills in e-learning programmes.
Conclusion
Learners graduating from e-learning and online programmes are currently viewed with scepticism, especially in terms of their clinical competence. However, the integration of digital technology in undergraduate education in the health professions has become inevitable. In most low-resource settings, face-to-face programmes remain dominant and are regarded as superior, as regulators and programme directors struggle to define and apply standards that comprehensively assess the quality of e-learning programmes. Delva et al. [1] noted that undergraduate e-learning programmes in the health professions must be comparable to face-to-face programmes. Still, caution should be taken regarding adopting face-to-face standards for online settings.
In this review, we synthesised standards that have been used for evaluating e-learning programmes in the health professions in LMICs to improve the quality of e-learning programmes. Only eight articles met the inclusion criteria, predominantly from the Middle East, North Africa, and South-East Asia. Most of the standards described by the included articles aligned with popular models for evaluating e-learning programmes, with a few exceptions. A gap in clinical teaching and learning standards in undergraduate e-learning programmes in the health professions was evident in all the included articles.
Further research in this field should focus specifically on developing, expanding, and testing standards for evaluating the quality of e-learning that integrate teaching and learning of clinical skills. Such standards should allow evaluators to access clinical teaching sites, the educators within clinical teaching sites, and the nature and quality of clinical practice.
The contribution of this review is to identify themes used in evaluating the quality of undergraduate e-learning programmes in the health professions. The themes are curriculum planning, proficiency of the educator, learner proficiency and attitude, infrastructure for learning, support, and evaluation. When developing contextually appropriate interventions, it would be valuable to include teaching and learning of clinical skills.
Limitations
Limitations of this review include the search string and inclusion criteria, which may have excluded some studies. The literature sourced was limited to English-language publications pertaining to LMICs; abstracts and full texts in other languages were excluded. Only articles accessible through the University of the Free State library were included, which might have limited the scope. The risk of bias in the individual articles was not assessed. We acknowledge that not conducting a methodological quality review might affect the study results and carries the potential of overestimating or underestimating findings, which can inadvertently affect the quality of the study. However, rigorous screening and consensus discussions mitigated this issue.
Acknowledgments
Prof. Ruth Albertyn: Critical reading
Jackie Viljoen: Language editing
Annamarie du Preez: Search string development
References
- 1. Delva S, Nkimbeng M, Chow S, Renda S, Han H, D’Aoust R. Views of regulatory authorities on standards to assure quality in online nursing education. Nursing Outlook. 2019;67(6):747–759; pmid:31421862
- 2. Gemuhay H, Kalolo A, Mirisho R, Chipwaza B, Nyangena E. Factors affecting performance in clinical practice among preservice diploma nursing students in Northern Tanzania. Nursing Research and Practice. 2019;2019: 1–9; pmid:30941212
- 3. Gaupp R, Dinius J, Drazic I, Körner M. Long-term effects of an e-learning course on patient safety: A controlled longitudinal study with medical students. PLOS ONE. 2019;14(1): e0210947; pmid:30657782
- 4. Kumar S, Martin F, Budhrani K, Ritzhaupt A. Award-winning faculty online teaching practices: elements of award-winning courses. Online Learn. 2019;23(4):160–180; http://dx.doi.org/10.24059/olj.v23i4.2077
- 5. Stufflebeam DL. Toward a science of educational evaluation. Educational Technology. 1968;8(14):5–13; http://www.jstor.org/stable/44422348
- 6. Fishbain D, Danon Y, Nissanholz-Gannot R. Accreditation systems for postgraduate medical education: a comparison of five countries. Advances in Health Sciences Education. 2019;24(3):503–524; pmid:30915642
- 7. Capacho J, Jimeno M, Salazar A. Operational Indicators of the learning management system in virtual spaces supported by ICT. Turkish Online Journal of Distance Education. 2019;20(2):103–118; https://doi.org/10.17718/TOJDE.601907
- 8. Bergeron M, Fornero S. Centralized and decentralized approaches to managing online programs. In: Leading and Managing e-Learning. 2017. p. 29–43; https://doi.org/10.1007/978-3-319-61780-0_3
- 9. Rad FA, Otaki F, Baqain Z, Zary N, Al-Halabi M. Rapid transition to distance learning due to COVID-19: perceptions of postgraduate dental learners and instructors. PLoS One. 2021;16(2): e0246584; pmid:33556131
- 10. Kolb D, Kolb A. The Kolb Learning Style Inventory 4.0: guide to theory, psychometrics, research & applications. 2013; https://www.researchgate.net/publication/303446688_The_Kolb_Learning_Style_Inventory_40_Guide_to_Theory_Psychometrics_Research_Applications. Accessed 15 January 2021.
- 11. Klein J, McColl G. Cognitive dissonance: how self-protective distortions can undermine clinical judgement. Med Educ. 2019;53(12):1178–86; pmid:31397007
- 12. Alam F, Yang Q, Bhutto M, Akhtar N. The influence of e-learning and emotional intelligence on psychological intentions: study of stranded Pakistani students. Frontiers in Psychology. 2021;12; pmid:34512475
- 13. Yang C, Chen A, Chen Y. College students’ stress and health in the COVID-19 pandemic: The role of academic workload, separation from school, and fears of contagion. PLoS One. 2021;16(2): e0246676; pmid:33566824
- 14. De Leeuw R, Walsh K, Westerman M, Scheele F. Consensus on quality indicators of postgraduate medical e-learning: Delphi Study. JMIR Medical Education. 2018;4(1):e13; https://doi.org/10.2196/mededu.9365
- 15. Rodgers B, Knafl K. Concept development in nursing. 2nd ed. Philadelphia: Saunders; 2000. p. 112–114.
- 16. Toronto CE, Remington R, editors. A step-by-step guide to conducting an integrative review. Cham: Springer International Publishing; 2020; https://doi.org/10.1007/978-3-030-37504-1
- 17. Whittemore R, Knafl K. The integrative review: updated methodology. J Adv Nurs. 2005;52(5):546–53; pmid:16268861
- 18. Sowan AK, Jenkins LS. Designing, delivering and evaluating a distance learning nursing course responsive to students' needs. Int J Med Inform. 2013;82(6):553–64; pmid:23478139
- 19. Saiboon IM, Zahari F, Isa HM, Sabardin DM, Robertson CE. E-learning in teaching emergency disaster response among undergraduate medical students in Malaysia. Front Public Health. 2021;9:426; pmid:33996711
- 20. Jebraeily M, Pirnejad H, Feizi A, Niazkhani Z. Evaluation of blended medical education from lecturers’ and students’ viewpoint: a qualitative study in a developing country. BMC Medical Education. 2020;20(1):482; pmid:33256714
- 21. Mousavi A, Mohammadi A, Mojtahedzadeh R, Shirazi M, Rashidi H. E-learning educational atmosphere measure (EEAM): a new instrument for assessing e-students’ perception of educational environment. Res Learn Technol. 2020;28(0); http://dx.doi.org/10.25304/rlt.v28.2308
- 22. Raghuveer P, Nirgude AS. Utility of E-learning in community medicine: A mixed methods assessment among Indian medical students. Int J Med Public Health. 2016;6(2):88–93; http://dx.doi.org/10.5530/ijmedph.2016.2.7
- 23. Babadi KAA, Ehraz YF. The feasibility of using blended learning in the curriculum of speech therapy at Tehran university of medical sciences. J Mod Rehabil. 2019;12(3):163–8; http://dx.doi.org/10.32598/jmr.v12.n3.163
- 24. Mutisya DN, Makokha GL. Challenges affecting the adoption of e-learning in public universities in Kenya. E-Learn Digit Media. 2016;13(3–4):140–57; http://dx.doi.org/10.1177/2042753016672902
- 25. Frehywot S, Vovides Y, Talib Z, Mikhail N, Ross H, Wohltjen H et al. E-learning in medical education in resource-constrained low- and middle-income countries. Human Resources for Health. 2013;11(1); pmid:23379467
- 26. Zalat MM, Hamed MS, Bolbol SA. The experiences, challenges, and acceptance of e-learning as a tool for teaching during the COVID-19 pandemic among university medical staff. PLoS One. 2021;16(3):e0248758; pmid:33770079
- 27. Brooks H, Pontefract S, Vallance H, Hirsch C, Hughes E, Ferner R et al. Perceptions and impact of mandatory e-learning for foundation trainee doctors: a qualitative evaluation. PLOS ONE. 2016;11(12): e0168558; pmid:28005938
- 28. Perris K, Mohee R. Quality assurance rubric for blended learning. Commonwealth of Learning. 2020; http://oasis.col.org/handle/11599/3615. Accessed 12 January 2022.
- 29. Prideaux D. ABC of learning and teaching in medicine. Curriculum design. BMJ 2003;326(7383):268–70; pmid:12560283
- 30. Glanville D, Kiddell J, Lau R, Hutchinson A, Botti M. Evaluation of the effectiveness of an eLearning program in the nursing observation and assessment of acute surgical patients: A naturalistic observational study. Nurse Education in Practice. 2021; 55: 103152; pmid:34392231
- 31. Franzen S, Chandler C, Lang T. Health research capacity development in low and middle income countries: reality or rhetoric? A systematic meta-narrative review of the qualitative literature. BMJ Open. 2017;7(1): e01233; pmid:28131997
- 32. Wasfy NF, Abouzeid E, Nasser AA, Ahmed SA, Youssry I, Hegazy NN, et al. A guide for evaluation of online learning in medical education: a qualitative reflective analysis. BMC Medical Education. 2021;21(1):339; pmid:34112155
- 33. Saleh S, Brome D, Mansour R, Daou T, Chamas A, Naal H. Evaluating an e-learning program to strengthen the capacity of humanitarian workers in the MENA region: the Humanitarian Leadership Diploma. Conflict and Health. 2022;16(1):27; pmid:35596195