
Comparing the effectiveness of a hybrid and an in-person course of wheelchair service provision knowledge: A controlled quasi-experimental study in India and Mexico

  • Yohali Burrola-Mendez,

    Roles Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Department of Rehabilitation Science and Technology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America, International Society of Wheelchair Professionals (ISWP), University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America, Consejo Nacional de Ciencia y Tecnología (CONACYT), Ciudad de México, México

  • Francisco J. Bonilla-Escobar,

    Roles Data curation, Formal analysis, Writing – original draft, Writing – review & editing

    Affiliations SCISCO Foundation, Cali, Colombia, School of Medicine, Institute for Clinical Research and Translational Science, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America

  • Mary Goldberg ,

    Roles Conceptualization, Funding acquisition, Project administration, Resources, Supervision, Writing – review & editing

    mgoldberg@pitt.edu

    Affiliations Department of Rehabilitation Science and Technology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America, International Society of Wheelchair Professionals (ISWP), University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America

  • Jon Pearlman

    Roles Funding acquisition, Project administration, Supervision, Writing – review & editing

    Affiliations Department of Rehabilitation Science and Technology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America, International Society of Wheelchair Professionals (ISWP), University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America

Abstract

Background

Evidence highlights a global shortage of wheelchair service provision education and training, which results in inappropriate wheelchair provision with associated health and economic consequences. Two learning methodologies, a hybrid and an in-person course, both based on the World Health Organization Wheelchair Service Training Package Basic Level, are currently available to train wheelchair service providers worldwide. The effectiveness of the in-person methodology, used as the standard of practice, has never been tested. Meanwhile, the Hybrid Course, which combines online and in-person training, was developed to reduce training costs and to scale training interventions, and has shown potential effectiveness in increasing basic level wheelchair service provision knowledge. The objective of this study was to compare the effectiveness of both learning methodologies, based on knowledge and satisfaction, among a group of wheelchair service providers in India and Mexico.

Methods

We conducted a controlled quasi-experimental study to evaluate changes in basic wheelchair knowledge and levels of satisfaction between Hybrid and In-person course learners in India and Mexico. A convenience sampling method guided by local stakeholders’ input was used to recruit participants. Outcomes were assessed using self-administered online surveys: the International Society of Wheelchair Professionals Wheelchair Service Provision Basic Test (primary outcome), completed before and after the learning intervention, and an anonymous Satisfaction Survey (secondary outcome), completed after the intervention. Baseline characteristics were compared between groups using hypothesis tests chosen according to their assumptions. The primary analysis was intention-to-treat. To address missing values and loss to follow-up, multiple imputations by chained equations were conducted. The primary outcome was analyzed using linear mixed models. The secondary outcome was analyzed using a two-tailed, two independent samples t-test.

Results

A total of 81 participants, 43 (53.1%) in the In-person group and 38 (46.9%) in the Hybrid group, participated in the study. Mean baseline knowledge scores were below the passing cutoff of the test (53 points) in both groups. Both study groups experienced statistically significant improvements in the primary outcome when comparing pre- and post-test scores (p<0.0001), with total mean scores above the passing cutoff of the test. The In-person group experienced, on average, larger effects on the primary outcome. The difference in mean change from pre-test to post-test between the In-person and Hybrid groups was 3.6 points (95% Confidence Interval: 1.7 to 5.4; Cohen’s d = 0.36), a small effect size favoring the In-person training. With regard to satisfaction, the difference between the two interventions was 0.23±0.07 in favor of the In-person group (p = 0.0021).

Conclusions

Both learning methodologies had a statistically significant effect in increasing wheelchair service knowledge, with overall high levels of satisfaction. However, the In-person course showed overall larger effects than the Hybrid methodology. This study provides recommendations on how organizations can improve blended learning interventions to enhance participants’ learning experiences and reduce potential barriers and limitations.

Introduction

The World Health Organization (WHO) estimates that only 5–15% of the 100 million people in the world who need a wheelchair for mobility and function have an appropriate wheelchair that meets their needs [1–3]. Inappropriate wheelchair provision impacts the life, safety, health, and other basic human rights of people with disabilities [2, 4–8]. In addition, when a wheelchair does not meet the wheelchair user’s needs, it may result in underutilization or abandonment [9, 10]. This situation may be more problematic in low- and middle-income countries (LMICs), where disability and poverty are interconnected, the incidence of disability is higher, people with disabilities often are marginalized, there is less availability of skilled health personnel, and there is a limited range of quality, affordable wheelchairs [1, 11–15].

Evidence highlights that a major factor associated with inappropriate wheelchair distribution is the global shortage of wheelchair service provision education and training [1, 13, 16]. The World Health Organization Guidelines on the provision of manual wheelchairs in less-resourced settings (WHO Guidelines) recommend integrating wheelchair service provision content into existing rehabilitation programs at academic institutions [2]. However, a 2017 study reported limited training time allocated to wheelchair service provision in some professional rehabilitation programs in low-middle- and high-income countries [16]. To help assess the global training need, the International Society of Wheelchair Professionals (ISWP) developed and validated a Wheelchair Service Provision Basic Test (Basic Test) which aligns with the WHO Guidelines’ eight (8) wheelchair service provision steps [17]. Currently, in the majority of regions where the test has been applied, less than half of test takers pass the test with 41% passing in Africa, 44% in Asia, 46% in Latin America, 47% in Europe, 48% in Australia and Oceania, and 55% in North America, which confirms the overwhelming need to promote training of wheelchair service providers worldwide [18].

In 2012, the WHO published the Wheelchair Service Training Package-Basic Level (WHO WSTP-B), the first of a series of free training packages with supporting materials available in different languages to promote training worldwide. The WHO WSTP-B proposes a learning methodology of 5 consecutive days of in-person training that provides the skills and knowledge for basic level wheelchair provision [19]. Traditional in-person training demands significant human and financial resources that are often not available in resource-constrained settings [20, 21]. In addition, this learning format is difficult to scale across multiple settings, to reach underserved areas [20], and to attend for busy providers who need to leave work during the training days [22]. While this training approach has been widely used in the sector as the standard for facilitating trainings, no evidence of the effectiveness of this learning methodology has been published.

Blended learning is a cost-effective [23] and student-accepted [24–26] educational format that combines online and in-person training [27]. This type of learning has proved to be as effective as in-person learning in medical education [24, 27–29] and a feasible solution to overcome knowledge dissemination barriers in less-resourced areas [30]. In 2016, ISWP developed a Hybrid Course based on the WHO WSTP-B in English and Spanish with the aim of supporting efficient content delivery, decreasing the cost associated with leading the training, and increasing access [22, 31]. The Hybrid Course uses a blended learning methodology that combines 9 online modules, designed for low-bandwidth internet access, with 3 days of in-person training (rather than 5), making it easier to scale and more adaptable to different training environments such as conferences and continuing education programs at universities [22]. The Hybrid Course has been tested in English [22] and Spanish [31], with a statistically significant increase in the Basic Test total score reported in both languages [22, 31]. While these results offer some evidence of the potential effectiveness of the Hybrid Course to train wheelchair service providers, the course has not been compared with the standard methodology of in-person training recommended by the WHO WSTP-B, nor is there evidence of the effectiveness of the in-person training approach.

The primary objective of this study was to compare the effectiveness of a Hybrid Course and an In-person Course, in English and Spanish, in increasing knowledge in basic level wheelchair service provision. The secondary objective was to evaluate and compare levels of satisfaction with the interaction, instructors, instruction methodology, content, and technology (the latter for the Hybrid Course only) after the learning interventions. We hypothesized that the Hybrid Course would produce improvements in outcomes similar to those of the In-person Course.

Methods

This study used a quasi-experimental design with nonequivalent control groups to evaluate changes in basic level wheelchair service knowledge among a group of wheelchair service providers in Bengaluru, India, and Puebla, Mexico. In each setting, one group was trained using the Hybrid Course (blended methodology), and the other followed the In-person training methodology. A post-assessment was used to evaluate levels of satisfaction after the educational interventions. The study was approved by the University of Pittsburgh Institutional Review Board.

Participants

The research team selected India and Mexico due to the presence of local facilitators, local partnerships, income classification (lower-middle- and upper-middle-income economies as classified by the World Bank) [32], and the possibility of testing both languages of the Hybrid and In-person courses (Table 1). Lead organizations and stakeholders used a convenience sampling method to recruit participants interested in receiving basic level wheelchair service provision training. Flyers describing the course, inclusion criteria, location, schedule, and contact information were distributed. Each organization (Table 1) led the recruitment, enrollment, and delivery of the interventions. Inclusion criteria were: 1) being a rehabilitation sciences student or a professional working locally in wheelchair service delivery, and 2) not having previously taken the Basic Test. Participants who were simultaneously participating in another wheelchair-related study were excluded. The interventions occurred at different timepoints between February 2016 and February 2017 (Table 1).

Interventions

To provide a detailed description of the interventions, improve their reporting and, ultimately, their replicability, we used the Template for Intervention Description and Replication (TIDieR) checklist [33] (Table 2) and report some training costs in Table 3.

Hybrid course.

This group followed the methodology implemented in previous studies [22, 31], which consisted of baseline assessments, two consecutive weeks of online training, three days of in-person training, and follow-up assessments (Table 2). During the online training, participants reviewed the content and completed asynchronously all required activities, which have been described elsewhere (e.g., discussion boards, case studies, short quizzes, videos, interactive activities) [22]. The online content strictly followed the WHO WSTP-B’s content and incorporated its materials (e.g., videos, PowerPoints) [19] whenever possible. The Hybrid Course mirrors the WHO WSTP-B training with the necessary adaptations for online learning [22]. To promote gradual review of the online content, not all modules were accessible from the beginning of the course; instead, they were released sequentially each week. Two online synchronous meetings with the Indian groups and 3 with the Mexican groups were conducted between trainers and trainees to reinforce learning outcomes, discuss topics, answer questions, and promote interaction among participants. The Mexican groups had an additional meeting due to trainers’ availability. The recitations were mandatory, lasted 60 minutes, and were recorded and made available to all participants and trainers. In the last recitation, trainers provided detailed information about the three consecutive days of in-person training led by their facilitating organization.

In-person course.

This group followed the learning methodology of 5 consecutive days of in-person training, 8 hours per day, described in the WHO WSTP-B Trainers Manual [34]. Theoretical and practical sessions occurred simultaneously. Trainers used the WHO WSTP-B materials (e.g., videos, Power Points, assessment forms) to facilitate the training.

In both groups, a local ‘master trainer’ coordinated the trainings. A ‘master trainer’ was someone who had previously been trained with the WHO WSTP-B, had passed the ISWP Basic Test, and had experience facilitating WHO WSTP-B courses. Also, in both groups, wheelchair users volunteered in the training as role models, which allowed participants to interact with them directly (Table 2). In addition, all groups delivered basic level wheelchairs to wheelchair users at the end of the training, following the WHO 8 steps learned in the educational interventions.

Outcomes

To be consistent with the previous studies that evaluated the effectiveness of the Hybrid Course [22, 31], we established knowledge change as our primary outcome measure and levels of satisfaction as our secondary measure. These two outcome measures are relevant for evaluating the influence of both learning methodologies on knowledge change and the courses' acceptance among trainees.

The Basic Test is an online test, available in English and Spanish, that has shown validity evidence for measuring basic level wheelchair service provision knowledge independent of geographic location [17]. The test consists of two sections: a demographic questionnaire and a multiple-choice test. The demographic questionnaire includes 19 questions on sociodemographic characteristics of participants such as age, gender, education level, profession, employment status, years of experience in wheelchair service provision, work setting, age group served, and motivation to take the training. In the questions related to work setting, age group served, and motivation to take the training, participants can select all applicable options. The multiple-choice test includes 75 scored questions from 7 domains of wheelchair service delivery knowledge: 1) assessment, 2) prescription, 3) fitting, 4) production, 5) user training, 6) process, and 7) follow-up and maintenance, as described in the WHO WSTP-B [17]. The test settings include: 1) a pre-set number of questions per domain, based on each domain’s weight, drawn from a pool of questions to reduce the likelihood of receiving the same questions in multiple attempts; 2) forced completion in a single sitting; and 3) immediate test score reporting with the opportunity to review correctly and incorrectly answered questions [17, 22, 31]. Test scores greater than or equal to 53 points, or 70% of questions answered correctly, are considered passing scores.

The ISWP Hybrid Satisfaction Survey (Hybrid Survey) is an online questionnaire, available in English and Spanish, that evaluates levels of satisfaction among participants after the learning intervention [31]. The Hybrid Survey comprises 5 sub-domains: interaction, instructor, instruction methodology, content, and technology, and uses a five-point Likert scale (4 = strongly agree, 3 = agree, 2 = neither agree nor disagree, 1 = disagree, 0 = strongly disagree) for participants to indicate the degree to which they agree with each statement. Open-ended questions at the end of each sub-domain asked participants to provide suggestions and feedback [31]. We created the ISWP In-person Satisfaction Survey (In-person Survey) from the existing Hybrid Survey by (1) removing the questions related to the online component of the course across the sub-domains, and (2) eliminating the technology sub-domain (S1 File).

In both groups, participants were instructed to complete the Basic Test, without accessing course materials, one week before and one week after the learning intervention. The test was hosted on a testing platform, Test.com, and completed online. Participants received an email with instructions on how to log into Test.com and the contact information of ISWP’s staff in case of technical problems or questions. In addition, participants were encouraged to complete the ISWP Hybrid or In-person Satisfaction Survey anonymously one week after the learning intervention. The surveys were hosted online in Qualtrics and distributed via an external link to all participants. The Indian groups completed all outcome measures in English, while the Mexican groups did so in Spanish.

Sample size

The intended group size for each intervention was 15–20 participants, based on the trainer-trainee ratio suggested by the WHO WSTP-B to promote an appropriate learning environment, since the program had a significant amount of hands-on practical sessions [34]. We estimated the power of the study post hoc, using information from the analyzed data and the Stata command for a two-sample means test with a clustered design.

Assignment method

Interventions were facilitated at different timepoints, and each training followed its own convenience sampling method. The training interventions were facilitated at no cost to participants; hence, to reduce attrition, approval from each participant’s supervisor was necessary to enroll in the study. During the interventions, the study outcome measures were self-administered online at specific timepoints (Table 2). Participants’ completion of the Basic Test was continuously monitored by the research team and reported to the facilitating organizations. Trainers sent reminders via email and followed up with participants when a trainee did not complete the test within the given timeframe. For the secondary outcome measure, individual follow-up was not feasible due to the anonymity of the survey. Nevertheless, on the last day of the training, trainers encouraged participants to complete the surveys and provide feedback.

Masking

Participants, trainers, staff, and the research team were not masked to the study learning intervention assignment.

Unit of analysis and statistical analysis

Treatment impact was derived using longitudinal modeling of within-person change in mean Basic Test knowledge scores from baseline (pre-test) to follow-up (post-test). The analysis of satisfaction was limited to trainees’ follow-up responses.

Baseline characteristics of participants from the intervention and control groups were compared using the Chi-square or two-tailed Fisher’s exact test for categorical variables and Student’s t-test or the Wilcoxon test for continuous variables, after assessing the tests’ assumptions. Extreme values equal to or greater than 4 standard deviations from the mean were considered outliers. The indicator of effectiveness in increasing basic wheelchair service provision knowledge was the mean change in total test scores from baseline to follow-up assessment. These changes were then compared between groups to assess the difference in effectiveness between the Hybrid and In-person courses.
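
As a minimal sketch of these baseline comparisons, assuming Stata (the package the authors name for other steps of the analysis) and hypothetical variable names (group, profession, age, pre_score), the tests could look like the following:

    * Hypothetical variables: group (0 = In-person, 1 = Hybrid), profession, age, pre_score
    * Categorical variable: Chi-square test, with Fisher's exact test for sparse tables
    tabulate profession group, chi2 exact

    * Continuous variable: Student's t-test if normality holds, Wilcoxon rank-sum otherwise
    swilk age                      // Shapiro-Wilk check before choosing the test
    ttest age, by(group)
    ranksum age, by(group)

    * Flag extreme values (4 or more SDs from the mean) as outliers
    quietly summarize pre_score
    generate outlier = abs(pre_score - r(mean)) >= 4 * r(sd)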

The design of the study was intended to analyze the effects of the interventions in both countries simultaneously. The trainee was the unit of analysis. The primary outcome was the change in the basic knowledge test score measured by the Basic Test; the secondary outcome was the satisfaction level measured by the Hybrid/In-person Satisfaction Survey. All outcomes were treated as continuous variables.

Primary outcome analysis: Knowledge test scores.

A mixed effects model with a robust estimate of variance was used to estimate the effect of the training strategies (Hybrid or In-person), including time point (0 = pre-test; 1 = post-test) and participant ID as random effects to account for within-person correlation across time and between-person correlation. This model assessed the mean difference between intervention conditions (Hybrid vs. In-person training) in the change in knowledge score over time.
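
A minimal Stata sketch of this kind of model, assuming hypothetical variables score, hybrid (1 = Hybrid, 0 = In-person), post (1 = post-test, 0 = pre-test), and id (participant), and using the common specification of a group-by-time interaction with a random intercept per participant (not necessarily the authors' exact random-effects structure):

    * Long-format data: one row per participant per time point
    mixed score i.hybrid##i.post || id:, vce(robust)

    * The hybrid#post interaction term estimates the difference between interventions
    * in the pre-to-post change in knowledge score (difference of differences).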

Loss to follow-up was handled using two methods: 1) multiple chained imputations of covariates and scores for those lost to follow-up, and 2) a sensitivity analysis creating an inverse probability weight of follow-up and including this weight as a covariate in the sensitivity models. Missing values, including follow-up scores for those lost to follow-up in the knowledge test, were handled using the chained equations command for multiple imputation in Stata, which pools data according to Rubin's rules [35, 36]. We assumed data were missing at random (MAR) for the imputation model and, following the methodology described by Bolton et al. [37, 38], we first imputed any missing data on demographic variables based on all other demographic variables and the educational strategy. A total of 11 imputations were used. Baseline and follow-up knowledge scores for all items with missing data were then imputed using all variables in the dataset. Educational strategies were imputed separately. Sum scores based on the seven domains of the Basic Test were then calculated in the multiple imputation framework using all imputed datasets to obtain the final test score. We did not perform any data transformation. All final outcome models were run across the 11 imputed datasets.
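
The imputation workflow could be sketched in Stata roughly as below; the variable names (score0 = pre-test, score1 = post-test, gender, education, hybrid, country, id) and the choice of imputation model per variable type are illustrative assumptions, not the authors' exact specification:

    * Multiple imputation by chained equations, 11 imputations, pooled by Rubin's rules
    mi set wide
    mi register imputed score0 score1 age gender education
    mi impute chained (regress) score0 score1 age (logit) gender (ologit) education ///
        = i.hybrid i.country, add(11) rseed(12345)

    * Reshape to long (one row per participant per time point) and pool the outcome model
    mi reshape long score, i(id) j(post)
    mi estimate: mixed score i.hybrid##i.post || id:, vce(robust)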

Statistical significance was set at a 0.05 alpha level, two-tailed, and results are expressed with 95% confidence intervals. Between-group effect sizes were calculated using Cohen's d statistic, dividing the difference in average change from baseline to follow-up between the Hybrid and In-person groups by the outcome's pooled standard deviation at baseline. Effect sizes of 0.2 were considered small, 0.5 medium, and 0.8 or above large [39]. All analyses used the full intention-to-treat (ITT) sample.
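
A worked sketch of this effect-size calculation, again with hypothetical variable names (score0, score1, hybrid), might look like:

    * Cohen's d = (difference in mean pre-to-post change between groups) / pooled baseline SD
    generate change = score1 - score0
    quietly summarize change if hybrid == 0
    scalar chg_inperson = r(mean)
    quietly summarize change if hybrid == 1
    scalar chg_hybrid = r(mean)

    * Pooled standard deviation of the baseline (pre-test) scores
    quietly summarize score0 if hybrid == 0
    scalar n0 = r(N)
    scalar v0 = r(sd)^2
    quietly summarize score0 if hybrid == 1
    scalar n1 = r(N)
    scalar v1 = r(sd)^2
    scalar sd_pooled = sqrt(((n0 - 1)*v0 + (n1 - 1)*v1) / (n0 + n1 - 2))

    display "Cohen's d = " (chg_inperson - chg_hybrid) / sd_pooled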

Adjusted models.

An ITT analysis, which included all study participants according to their group allocation and used the multiple imputation database, was conducted to mitigate the effects of loss to follow-up. Outcomes were adjusted to account for possible residual confounding. Covariates included in the final models were those significant at the p<0.10 level, identified using: 1) simple logistic regression, clustering by country and participant, to identify baseline differences between interventions; and 2) mixed models to determine interactions between potential covariates and time on knowledge test scores. Furthermore, models were adjusted for age, gender, and education, which are well-known confounders of the relationship between intervention and outcome in educational research. All possible confounding variables (both dichotomous and continuous) were centered in order to report the averaged sample effect of the interventions. Multicollinearity was explored using the variance inflation factor (VIF), considering VIF values >5 as indications of collinearity; such variables were removed from the models. To explore whether country should be treated as a random or fixed effect, a Hausman test was used [40].
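
The covariate-screening steps described above could be sketched as follows (Stata, hypothetical variable names; the candidate covariate age stands in for any baseline variable being screened):

    * 1) Baseline differences between interventions: simple logistic regression,
    *    clustering standard errors by country
    logit hybrid age, vce(cluster country)

    * 2) Interaction between a candidate covariate and time in a mixed model
    mixed score i.post##c.age || id:

    * 3) Center covariates and check multicollinearity (VIF > 5 flags collinearity)
    quietly summarize age
    generate age_c = age - r(mean)
    regress score1 age_c i.gender i.education
    estat vif

    * 4) Hausman test: country as fixed vs. random effect
    xtset country
    xtreg score1 i.hybrid age_c, fe
    estimates store fixed
    xtreg score1 i.hybrid age_c, re
    estimates store random
    hausman fixed random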

The model for total test scores was adjusted for age, gender, educational level, work setting, student or professional status, and baseline domain test scores. The models for gender categories were adjusted for age, educational level, work setting, student or professional status, and baseline domain test scores. The models for age categories were adjusted for gender, educational level, work setting, student status, and baseline domain test scores. The models for education level categories were adjusted for age, gender, work setting, and baseline domain test scores. The models for wheelchair service provision experience categories were adjusted for age, gender, educational level, work setting, student or professional status, and baseline domain test scores. Country was also included in all models as a fixed effect, as the Hausman test was significant (p<0.0001) [40]. To test for the effect of outliers, we planned to exclude them from the analyses and run new models without them, but there were no outliers in the test scores.

Secondary outcome analysis: Levels of satisfaction.

Q-Q plots were used to assess the normal distribution of the data, and the variance ratio test (sdtest in Stata) was used to assess homoscedasticity. Survey responses were analyzed using means and standard deviations. Survey domain scores were obtained by summing the selected responses (4 = strongly agree, 3 = agree, 2 = neither agree nor disagree, 1 = disagree, 0 = strongly disagree), dividing by the total number of respondents, and then obtaining the standard deviation. A total satisfaction mean was obtained by calculating the mean per subject over a pre-selected set of 15 questions that did not vary between the Hybrid and In-person Satisfaction Surveys.
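
A brief Stata sketch of these checks, with hypothetical variable names (total_satisfaction, hybrid, and items q1-q4 for one sub-domain):

    * Normality and homoscedasticity checks
    qnorm total_satisfaction if hybrid == 0      // Q-Q plot, In-person group
    qnorm total_satisfaction if hybrid == 1      // Q-Q plot, Hybrid group
    sdtest total_satisfaction, by(hybrid)        // variance ratio test

    * Sub-domain score: mean and SD of summed Likert responses across respondents
    egen interaction_sum = rowtotal(q1 q2 q3 q4)
    summarize interaction_sum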

To assess differences in satisfaction between the groups, we tested whether the individuals and the country had an additional effect on the outcome. Using the generalization of the Hausman test in Stata, we found that the effect of the individuals and the country was not significant. Additionally, we tested whether the country, as a cluster, would have an impact on the analysis using the intraclass correlation coefficient (ICC) for country. To obtain the ICC, we fitted a mixed regression model with the total satisfaction mean as the dependent variable, Hybrid or In-person group as the independent variable, and country as a cluster. The calculated ICC was close to zero (3.541×10^-48); therefore, clustering by country was disregarded. A two-sample t-test with equal variances was used to compare satisfaction between the two groups (Hybrid and In-person), with a p-value <0.05 considered significant.
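
A sketch of the clustering check and the final comparison (Stata, hypothetical variable names):

    * Mixed model with country as a random intercept; an ICC near zero means
    * clustering by country can be ignored
    mixed total_satisfaction i.hybrid || country:
    estat icc

    * Two-sample t-test with equal variances comparing the groups
    ttest total_satisfaction, by(hybrid)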

Results

Sample analyzed

A total of 81 eligible participants were recruited across countries to participate in the study (n = 45 in India and n = 36 in Mexico) from February 2016 to February 2017. In India, 24 (53.3%) participants formed the In-person group and 21 (46.6%) the Hybrid group. In Mexico, 19 (52.8%) participants formed the In-person group and 17 (47.2%) the Hybrid group (Table 4). A total of 38 (46.9%) participants were enrolled in the Hybrid course while 43 (53.1%) were enrolled in the In-person course. In the In-person group, 4 participants were lost to follow-up (India), while in the Hybrid groups, 1 participant voluntarily withdrew from the intervention (India), and 2 were lost to follow-up (India and Mexico) (Fig 1).

Table 4. Characteristics of participants and baseline scores.

https://doi.org/10.1371/journal.pone.0217872.t004

Sociodemographic and baseline characteristics

In India, sociodemographic characteristics were similar between the In-person and Hybrid groups except for age, last year of formal training, and student status. The Hybrid group was significantly younger (Mean (M) = 30, Standard Deviation (SD) = 1.3) and had more students (5/21, 23.8%) and more participants with less than 4 years of experience (n = 16, 76.2%) than the In-person group. In Mexico, fewer sociodemographic characteristics were similar between the In-person and Hybrid groups. Overall, the Mexican Hybrid group was significantly younger (M = 23.5, SD = 1.2) and comprised mostly students (12/17, 70.6%). This translated into statistically significant baseline differences in other variables such as educational level, last year of formal training, employment status, work settings, and motivation to take the training. Despite these baseline differences, the Basic Test total scores and domain scores were similar between the Hybrid and In-person groups in both countries (Table 4).

Primary outcome: Basic level wheelchair knowledge

A paired samples t-test indicated that post-assessment total scores were significantly higher after the training experience in the In-person and Hybrid groups of both countries (Table 5). All domain mean scores increased after the training interventions. The domains that did not show statistically significant changes were “Production” and “Follow-up and maintenance” in India’s In-person group and “Follow-up and maintenance” in Mexico’s In-person group. In India’s Hybrid group, “Fitting” and “Production” did not show statistically significant changes. In contrast, all domains in Mexico’s Hybrid group showed statistically significant changes (Table 5).

Table 5. Paired sample test scores of knowledge based on the ISWP basic test.

https://doi.org/10.1371/journal.pone.0217872.t005

Effectiveness of the intervention.

Table 6 presents the adjusted intervention effects on the overall Basic Test for all participants and across sub-groups, including the following scores: pre, post, the difference between pre and post, and the interventions’ difference obtained when comparing the differences of the In-person and Hybrid courses (difference of differences). Both study groups experienced statistically significant improvements in the primary outcome when comparing post- and pre-test scores (p<0.0001). When the primary outcome was analyzed by subgroups, statistically significant increases were found in all subgroups except providers with ≥4 years of wheelchair service experience in the Hybrid group (p = 0.091). The In-person group experienced, on average, larger effects on the primary outcome. Statistically significant differences favoring the In-person group were found in overall total test scores (p<0.0001, d = 0.36, small effect) and in total test scores sub-grouped by male gender (p = 0.001, d = 0.48, small effect), age ≥31 years (p<0.0001, d = 1.02, large effect), educational level ≥bachelor (p = 0.002, d = 0.50, medium effect), and wheelchair service provision experience ≥4 years (p = 0.001, d = 0.69, large effect) (Table 6 and Fig 2).

Fig 2. Adjusted pre- and post-test score means with their 95% confidence intervals by type of intervention.

https://doi.org/10.1371/journal.pone.0217872.g002

Table 6. Effectiveness of In-person and hybrid interventions.

https://doi.org/10.1371/journal.pone.0217872.t006

Sensitivity analyses.

The sensitivity analysis did not show changes in the significance of the differences of differences between the Hybrid and In-person groups in the total knowledge score nor in the subgroup analyses.

Secondary outcome: Levels of satisfaction

A total of 71 Satisfaction Surveys were collected: 41 (50.6%) from the In-person group and 30 (37%) from the Hybrid group (percentages relative to the 81 enrolled participants). The means and standard deviations of the In-person and Hybrid Surveys’ domains are shown in Table 7.

Table 7. In-person and Hybrid mean and standard deviation total scores.

https://doi.org/10.1371/journal.pone.0217872.t007

Table 8 presents the total means and standard deviations of the pre-set of questions analyzed from both surveys and Fig 3 depicts box plots of satisfaction mean scores of the same pre-set of questions by type of intervention and country.

Fig 3. Box plots of satisfaction mean scores by type of intervention and country.

https://doi.org/10.1371/journal.pone.0217872.g003

Table 8. Means and standard deviations of questions analyzed.

https://doi.org/10.1371/journal.pone.0217872.t008

Total satisfaction in the In-person course was 3.81 (SD 0.25) while for the Hybrid course it was 3.58 (SD 0.35). The difference between the two interventions was 0.23 in favor of the In-person group (p = 0.0021).

Open-ended questions were analyzed individually, and some of the comments received at the end of each sub-domain are presented in Table 9. The Hybrid groups provided more observations than the participants in the In-person groups. Overall, Hybrid participants reported technological problems when trying to watch the videos and suggested using programs that allow video streaming in places with low to medium internet speed. The alternative methods implemented by the trainers to mitigate these problems, such as PDFs and screenshots, were reported to be useful.

Table 9. Some participants’ comments from the In-person and Hybrid courses.

https://doi.org/10.1371/journal.pone.0217872.t009

Discussion

Summary of results

This project demonstrated that the Hybrid Course and the In-person Course were effective in increasing basic level wheelchair service knowledge, with overall high levels of satisfaction, among a group of rehabilitation students and professionals in India and Mexico. The results of this study help build confidence in the applicability and effectiveness of the lower-cost and more scalable Hybrid approach to training wheelchair service providers. Although the In-person course had, on average, higher total test scores and levels of satisfaction, its advantage over the hybrid methodology corresponded to only a small effect size. The lessons learned from this study could help organizations improve blended learning interventions and enhance participants’ learning experiences.

Differences between the Hybrid and In-person groups

The Hybrid and In-person training interventions had a statistically significant influence on the total Basic Test score when comparing post- and pre-test scores (p<0.0001), indicating that both interventions are effective in increasing basic level wheelchair service knowledge. These results are consistent with other studies conducted in LMICs in which blended learning interventions proved to be as effective as in-person interventions [20, 24, 41–43]. However, the In-person group experienced, on average, larger effects on the total test scores and in the subgroup analysis. This finding is consistent with the results reported by Vichitvejpaisal et al. [44]. In their prospective randomized study, medical students in a traditional methodology group performed better in the short term than a group using computer-assisted instruction [44].

A possible explanation for the larger In-person effects in our study could be the technological problems faced by Hybrid learners. The technology domain of the Hybrid Satisfaction Survey showed the lowest satisfaction when compared with the other domains (Table 7). Unfortunately, the survey did not identify the problems encountered by participants, nor whether they could resolve them. Nevertheless, the comments received in the open-ended section of the survey point out that participants from the Hybrid group in India had issues accessing the modules and watching the videos due to limited internet access; some comments suggested that the actions implemented by the trainers (i.e., sending screenshots and PDFs) were effective. Previous studies have noted that the effectiveness of blended education can be diluted by technological barriers such as limited access to digital technology (e.g., inadequate computer facilities, limited access to computers) [24, 30, 45], computer illiteracy [30], and limitations in bandwidth, which often contribute to slow speed and low quality of videos or visual outputs [24, 30, 46–49]. Strategies that can combat these challenges include developing access hubs at strategic central locations to provide the required technology and internet access [30, 50]; developing offline content delivery platforms to overcome slow internet connectivity [30, 50]; and designing courses using guidelines for low-bandwidth design [51] with limited technological demands in order to be more adaptable [27, 52, 53]. Another common barrier is inadequate technological support of blended programs [45].

To some extent, our program implemented strategies to reduce these barriers, such as: 1) a repository of offline videos and PDFs distributed to trainers; 2) the use of a Hybrid Course developed with low-bandwidth internet access in mind [22]; and 3) staff available for remote technical support (Table 2: Who provided). Nevertheless, hybrid learners still seem to have faced technological problems that may have affected their learning experience. In future studies, more effort should be made to assess environmental conditions and students’ capabilities, such as self-directedness, when implementing Hybrid courses, as recommended by Atkins et al. [24]. Satisfaction surveys could be revised to include questions that capture the most frequent problems, allowing researchers to design specific strategies to mitigate them. Future studies should assess prospective students’ attitudes towards online learning, computer literacy, and skills to identify critical success factors and the participants most likely to benefit from a blended learning approach.

Another possible explanation for the larger In-person effects could be participants’ habituation to in-person education [54]. Older students, who are more familiar with traditional education systems that rely heavily on in-person training, may still prefer that methodology and may find it more difficult to adapt to an unfamiliar learning environment [55]. This could explain why participants ≥31 years old in the Hybrid group had the lowest increase in wheelchair service knowledge compared with younger participants (Table 6). Moreover, it could suggest that the Hybrid learning approach is more effective in younger populations.

Table 3 presents some training costs associated with this study’s training interventions, particularly trainers’ stipends, staff support, and food and beverages during the training days. In these categories, the Hybrid courses had an overall 40% lower cost in India and 39% lower cost in Mexico compared with the In-person courses. Although there are other costs related to facilitating these trainings that were not captured in this study, we believe that reducing the number of in-person training days decreases the cost associated with leading the training.

Generalizability of results

Although randomized clinical trials are considered the gold standard for assessing the efficacy of an intervention and the generalizability of the results [56, 57], our goal was to design a pragmatic study that included the circumstances of practice and could therefore yield more relevant, actionable, and tailored results [58], in light of the recognition that evidence-based practice should be informed by practice-based evidence and research [58, 59]. As recommended in the literature, when non-randomized designs are used to build evidence-based health practice, it is necessary to improve the quality of the reporting [57]. We used two widely recognized tools to enhance the quality of the reporting of these interventions and, ultimately, their replicability: 1) the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement [57], used to describe the methodology; and 2) the Template for Intervention Description and Replication (TIDieR) (Table 2) [33], to provide a complete, detailed report of all interventions and increase the potential impact of this study.

Contribution to literature

To our knowledge, this is the only published study to date testing and comparing the effectiveness of a hybrid and an in-person course in increasing basic level wheelchair service knowledge based on the WHO WSTP. Previous studies have tested the Hybrid Course exclusively [22, 31], and we are unaware of any studies that have tested the in-person methodology suggested by the WHO WSTP-Basic Level. In the wheelchair service provision sector, the WHO WSTP-Basic in-person methodology is commonly used as the standard of practice for delivering training. This training approach was decided through a consensus-based process and therefore lacks evidence demonstrating that it is effective. Despite being considered the standard of practice, the in-person methodology showed only a small effect size advantage over the hybrid methodology. This finding raises the question of whether the cost associated with leading an in-person training is justified by its impact when effective alternative learning methodologies are available. Our study suggests that both methodologies, a Hybrid and an In-person course, are effective and well-accepted methods to train personnel in basic level wheelchair service provision.

Limitations

Some limitations of this study are important to note when planning future research and interpreting the current results. We did not randomize participants to the intervention groups, and the lack of randomization cannot ensure that the groups’ baseline characteristics do not differ. However, in educational studies it is difficult to mask learners to their assigned group, which may result in contaminating effects that further compromise the randomization process [60]. According to Sullivan, non-randomized methods are common in education research and are considered by experts as not inferior to randomized clinical trials [60]. We strengthened the quality of our study by implementing the following factors used by Best Evidence Medical Education [60, 61]: 1) a “pragmatic trial” [62] consisting of 2 interventions compared in real-world practice; 2) a comparison group that received an active intervention; 3) training manuals or methods to ensure a detailed intervention; 4) multiple sites; 5) low dropout rates; and 6) a rigorous statistical method to confirm the findings, in this case, an intention-to-treat analysis based on multiple chained imputations, with models adjusted for potential confounders, interactions, and fixed and random effects.

The Satisfaction Surveys used in this study have not been validated in the target population. The surveys were developed using previous satisfaction surveys and an international stakeholders’ group that guided the process, and the Hybrid Satisfaction Survey has previously been used to measure levels of satisfaction among wheelchair providers [31]; nevertheless, until a formal validation process is completed, we are unsure whether the tool measures the underlying outcome of interest [63].

We acknowledge that there were slight differences in the training interventions reported in the TIDieR table, including trainers’ backgrounds, the number of recitations, the language, and the total number of wheelchair users who volunteered as role models (Table 2). The objective of this study was to compare the effectiveness of these interventions in real-world practice settings, with heterogeneous populations and the flexibility to adjust the training to the context and resources available. Despite the diverse backgrounds of trainers, local master trainers with the same qualifications across trainings coordinated the interventions. In terms of language, we combined groups to increase the sample size and to be able to generalize results regardless of whether the training was delivered in English or Spanish. Nevertheless, we also analyzed the effectiveness of the training intervention by country and learning methodology; all groups had a statistically significant increase in total test scores after the training intervention (p<0.0001) (Table 5). Furthermore, Table 3 provides some information related to the training interventions’ costs. It is important to note that we did not conduct a cost-effectiveness analysis comparing the Hybrid and In-person courses. We encourage future studies to collect a complete dataset of costs and conduct a rigorous analysis of the different training interventions.

Our sample size was estimated based on the WHO WSTP-B recommendations for training group size [34]. In addition, in planning this study, we considered the project’s budget and decided that having multiple sites (India and Mexico) was more desirable than spending the allocated resources on one site with multiple trainings. After obtaining the data from the study, using the mean differences of the Hybrid and In-person groups (15.1 and 18.7), the pooled standard deviation (9.87), the number of clusters per intervention (2, India and Mexico), the size of each intervention (38 Hybrid and 43 In-person), and the ICC (7.244×10^-18), we calculated the power of a clustered two-sided test of means to be 64%; for a one-sided test, the power would rise to 75%. Although our study did not reach the desirable minimum power of 80% [64], we consider our outcomes to be reliable based on the multiple imputations (11 imputations of the main dataset) and the results of the sensitivity analysis (no changes in significance after removing the imputations). It is important to note that we did not power the study based on the number of imputed participants; had we done so, the power of the study would be up to 90%. Most importantly, we are providing the data needed to estimate sample sizes for future studies and strengthen the quality of their designs.
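
With the values reported above, a post hoc power calculation of this kind could be sketched in Stata (the exact command options are an assumption; the cluster sizes are the group sizes divided by the 2 country clusters, rounded):

    * Two-sample means test with a cluster design: 2 clusters per arm,
    * mean changes 15.1 (Hybrid) vs. 18.7 (In-person), pooled SD 9.87, ICC ~ 0
    power twomeans 15.1 18.7, sd(9.87) cluster k1(2) k2(2) m1(19) m2(21) rho(0)

    * One-sided version of the same test
    power twomeans 15.1 18.7, sd(9.87) cluster k1(2) k2(2) m1(19) m2(21) rho(0) onesided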

Conclusion

Evidence highlights the global shortage of wheelchair service provision education and training that is related to inappropriate wheelchair provision. The results from this study suggest that both currently available learning methodologies, a hybrid and in-person course, are effective in increasing knowledge in basic level wheelchair service provision with overall high levels of satisfaction among participants.

To increase the number of training opportunities and to promote their equitable distribution in underserved areas, alternative learning methodologies need to be developed and tested in international settings. Blended learning is an attractive and sustainable learning approach that has been demonstrated to be as effective as traditional educational strategies. In resource-constrained settings, where the need is great and resources are limited, lower-cost solutions that can significantly scale interventions and overcome knowledge dissemination barriers are critical to develop, but they should not compromise the quality of learning. The lessons learned from this study could help organizations develop strategies to mitigate potential implementation barriers and limitations and help advance research in blended-learning training.

Supporting information

S1 File. ISWP In-person and Hybrid satisfaction surveys.

https://doi.org/10.1371/journal.pone.0217872.s001

(PDF)

Acknowledgments

We greatly acknowledge the efforts of our partner organizations in India, SMOI and Mobility India, and in Mexico, Fundación Teletón México. In particular, we would like to recognize the work from all local trainers Padmaja Kankipati, Naveen Gowda, Amalorpava Marie Lourdhu, Nirmala Muniraju, Yasmin García, María Elena Lerma, and Norma Jiménez. We are grateful to Krithika Kandavel for her support in coordinating the trainings and to Nancy Augustine for her critical revision of the manuscript.

References

  1. World Health Organization. World Report on Disability. Geneva: 2011.
  2. World Health Organization. Guidelines on the provision of manual wheelchairs in less resourced settings. Geneva: WHO; 2008.
  3. World Bank. Disability Inclusion 2018 [cited 2019 Feb 09]. Available from: http://www.worldbank.org/en/topic/disability.
  4. United Nations. The Universal Declaration of Human Rights [cited 2019 Feb 09]. Available from: http://www.un.org/en/universal-declaration-human-rights/.
  5. Borg J, Lindstrom A, Larsson S. Assistive technology in developing countries: national and international responsibilities to implement the Convention on the Rights of Persons with Disabilities. Lancet. 2009;374(9704):1863–5. Epub 2009/12/01. pmid:19944867.
  6. Visagie S, Scheffler E, Schneider M. Policy implementation in wheelchair service delivery in a rural South African setting. Afr J Disabil. 2013;2(1):63. Epub 2013/09/09. pmid:28729993; PubMed Central PMCID: PMC5442587.
  7. Carver J, Ganus A, Ivey JM, Plummer T, Eubank A. The impact of mobility assistive technology devices on participation for individuals with disabilities. Disabil Rehabil Assist Technol. 2016;11(6):468–77. Epub 2015/03/31. pmid:25815679.
  8. United Nations, Division for Social Policy and Development Disability. Convention on the Rights of Persons with Disabilities–Articles. Geneva: United Nations; 2006 [cited 2019 Feb 09]. Available from: https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-disabilities/convention-on-the-rights-of-persons-with-disabilities-2.html.
  9. Mukherjee G, Samanta A. Wheelchair charity: a useless benevolence in community-based rehabilitation. Disabil Rehabil. 2005;27(10):591–6. Epub 2005/07/16. pmid:16019868.
  10. Greer N, Brasure M, Wilt TJ. Wheeled mobility (wheelchair) service delivery: scope of the evidence. Ann Intern Med. 2012;156(2):141–6. Epub 2012/01/18. pmid:22250145.
  11. Economic Commission for Latin America and the Caribbean (ECLAC). Social Panorama of Latin America. Briefing paper. 2012.
  12. Banks LM, Kuper H, Polack S. Poverty and disability in low- and middle-income countries: A systematic review. PLoS One. 2017;12(12):e0189996. Epub 2017/12/22. pmid:29267388; PubMed Central PMCID: PMC5739437.
  13. McSweeney E, Gowran RJ. Wheelchair service provision education and training in low and lower middle income countries: a scoping review. Disabil Rehabil Assist Technol. 2017:1–13. Epub 2017/11/03. pmid:29092684.
  14. Elwan A. Poverty and disability: A survey of the literature. Washington, DC: Social Protection Unit, Human Development Network, 1999.
  15. Gupta N, Castillo-Laborde C, Landry MD. Health-related rehabilitation services: assessing the global supply of and need for human resources. BMC Health Serv Res. 2011;11:276. Epub 2011/10/19. pmid:22004560; PubMed Central PMCID: PMC3207892.
  16. Fung KH, Rushton PW, Gartz R, Goldberg M, Toro ML, Seymour N, et al. Wheelchair service provision education in academia. Afr J Disabil. 2017;6:340. Epub 2017/09/25. pmid:28936415; PubMed Central PMCID: PMC5594266.
  17. Gartz R, Goldberg M, Miles A, Cooper R, Pearlman J, Schmeler M, et al. Development of a contextually appropriate, reliable and valid basic Wheelchair Service Provision Test. Disabil Rehabil Assist Technol. 2016;12(4):333–40. Epub 2016/04/22. pmid:27100362.
  18. International Society of Wheelchair Professionals (ISWP). ISWP Wheelchair Service Provision Basic Test by region 2018 [cited 2019 Feb 09]. Available from: http://wheelchairnetwork.org/wp-content/uploads/2017/07/Basic-Test-takers-by-region_12June2018.pdf.
  19. World Health Organization. Wheelchair Service Training Package: Basic Level. Geneva: WHO; 2012.
  20. Marrinan H, Firth S, Hipgrave D, Jimenez-Soto E. Let's Take it to the Clouds: The Potential of Educational Innovations, Including Blended Learning, for Capacity Building in Developing Countries. Int J Health Policy Manag. 2015;4(9):571–3. Epub 2015/09/05. pmid:26340485; PubMed Central PMCID: PMC4556572.
  21. Joynes C. Distance Learning for Health: What works. A global review of accredited post-qualification training programmes for health workers in low and middle income countries. 2011.
  22. Burrola-Mendez Y, Goldberg M, Gartz R, Pearlman J. Development of a Hybrid Course on Wheelchair Service Provision for clinicians in international contexts. PLoS One. 2018;13(6):e0199251. Epub 2018/06/16. pmid:29906794; PubMed Central PMCID: PMC6003808.
  23. Maloney S, Nicklen P, Rivers G, Foo J, Ooi YY, Reeves S, et al. A Cost-Effectiveness Analysis of Blended Versus Face-to-Face Delivery of Evidence-Based Medicine to Medical Students. J Med Internet Res. 2015;17(7):e182. Epub 2015/07/23. pmid:26197801; PubMed Central PMCID: PMC4527010.
  24. Atkins S, Yan W, Meragia E, Mahomed H, Rosales-Klintz S, Skinner D, et al. Student experiences of participating in five collaborative blended learning courses in Africa and Asia: a survey. Glob Health Action. 2016;9:28145. Epub 2016/10/12. pmid:27725077; PubMed Central PMCID: PMC5056983.
  25. Motschnig-Pitrik R. Participatory Action Research in a Blended Learning Course on Project Management Soft Skills. 36th ASEE/IEEE Frontiers in Education Conference; San Diego, CA: IEEE; 2006.
  26. Lewis PA, Tutticci NF, Douglas C, Gray G, Osborne Y, Evans K, et al. Flexible learning: Evaluation of an international distance education programme designed to build the learning and teaching capacity of nurse academics in a developing country. Nurse Educ Pract. 2016;21:59–65. Epub 2016/10/19. pmid:27756057.
  27. Frehywot S, Vovides Y, Talib Z, Mikhail N, Ross H, Wohltjen H, et al. E-learning in medical education in resource constrained low- and middle-income countries. Hum Resour Health. 2013;11:4. Epub 2013/02/06. pmid:23379467; PubMed Central PMCID: PMC3584907.
  28. Ruiz JG, Mintzer MJ, Leipzig RM. The Impact of E-Learning in Medical Education. Academic Medicine. 2006;81(3):207–12. pmid:16501260.
  29. Dantas AM, Kemm RE. A blended approach to active learning in a physiology laboratory-based subject facilitated by an e-learning component. Adv Physiol Educ. 2008;32(1):65–75. Epub 2008/03/13. pmid:18334571.
  30. Liyanagunawardena TR, Aboshady OA. Massive open online courses: a resource for health education in developing countries. Glob Health Promot. 2017:1757975916680970. Epub 2017/01/31. pmid:28134014.
  31. Burrola-Mendez Y, Toro-Hernandez ML, Goldberg M, Pearlman J. Implementation of the hybrid course on basic wheelchair service provision for Colombian wheelchair service providers. PLoS One. 2018;13(10):e0204769. Epub 2018/10/05. pmid:30286127; PubMed Central PMCID: PMC6172015.
  32. World Bank. World Bank Country and Lending Groups: Country Classification 2018 [cited 2019 Feb 09]. Available from: https://datahelpdesk.worldbank.org/knowledgebase/articles/906519-world-bank-country-and-lending-groups.
  33. Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687. Epub 2014/03/13. pmid:24609605.
  34. World Health Organization. Wheelchair Service Training Package. Trainer’s Manual Basic Level. Geneva: WHO; 2012. 248 p.
  35. Azur MJ, Stuart EA, Frangakis C, Leaf PJ. Multiple imputation by chained equations: what is it and how does it work? International Journal of Methods in Psychiatric Research. 2011;20(1):40–9. pmid:21499542.
  36. Rubin DB. Multiple Imputation for Nonresponse in Surveys. John Wiley & Sons; 2004.
  37. Bolton P, Lee C, Haroz EE, Murray L, Dorsey S, Robinson C, et al. A transdiagnostic community-based mental health treatment for comorbid disorders: development and outcomes of a randomized controlled trial among Burmese refugees in Thailand. PLoS Med. 2014;11(11):e1001757. Epub 2014/11/12. pmid:25386945; PubMed Central PMCID: PMC4227644.
  38. Bonilla-Escobar FJ, Fandino-Losada A, Martinez-Buitrago DM, Santaella-Tenorio J, Tobon-Garcia D, Munoz-Morales EJ, et al. A randomized controlled trial of a transdiagnostic cognitive-behavioral intervention for Afro-descendants' survivors of systemic violence in Colombia. PLoS One. 2018;13(12):e0208483. Epub 2018/12/12. pmid:30532155; PubMed Central PMCID: PMC6287825.
  39. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
  40. Hausman JA. Specification Tests in Econometrics. Econometrica. 1978;46(6):1251–71.
  41. Atkins S, Marsden S, Diwan V, Zwarenstein M, ARCADE consortium. North-south collaboration and capacity development in global health research in low- and middle-income countries—the ARCADE projects. Glob Health Action. 2016;9:30524. Epub 2016/10/12. pmid:27725081; PubMed Central PMCID: PMC5057000.
  42. Protsiv M, Rosales-Klintz S, Bwanga F, Zwarenstein M, Atkins S. Blended learning across universities in a South-North-South collaboration: a case study. Health Res Policy Syst. 2016;14(1):67. Epub 2016/09/04. pmid:27589996; PubMed Central PMCID: PMC5010676.
  43. Mastellos N, Tran T, Dharmayat K, Cecil E, Lee HY, Wong CCP, et al. Training community healthcare workers on the use of information and communication technologies: a randomised controlled trial of traditional versus blended learning in Malawi, Africa. BMC Med Educ. 2018;18(1):61. Epub 2018/04/04. pmid:29609596; PubMed Central PMCID: PMC5879741.
  44. Vichitvejpaisal P, Sitthikongsak S, Preechakoon B, Kraiprasit K, Parakkamodom S, Manon C, et al. Does computer-assisted instruction really help to improve the learning process? Med Educ. 2001;35(10):983–9. Epub 2001/09/21. pmid:11564203.
  45. Erah PO, Dairo EA. Pharmacy students' perception of the application of a learning management system in patient-oriented pharmacy education: University of Benin experience. International Journal of Health Research. 2008;1(2):63–72.
  46. Agrawal S, Maurya AK, Shrivastava K, Kumar S, Pant M, Mishra SK. Training the trainees in radiation oncology with telemedicine as a tool in a developing country: A two-year audit. International Journal of Telemedicine and Applications. 2011;2011:1.
  47. Corrêa L, De Campos AC, Souza SC, Novelli MD. Teaching oral surgery to undergraduate students: a pilot study using a Web-based practical course. European Journal of Dental Education. 2003;7(3):111–5. pmid:12846819.
  48. Obura T, Brant WE, Miller F, Parboosingh IJ. Participating in a Community of Learners enhances resident perceptions of learning in an e-mentoring program: proof of concept. BMC Medical Education. 2011;11(1):3.
  49. Vincent DS, Berg BW, Hudson DA, Chitpatima ST. International medical education between Hawaii and Thailand over Internet2. Journal of Telemedicine and Telecare. 2003;9(2_suppl):71–2.
  50. Oyo B, Kalema BM. Massive open online courses for Africa by Africa. The International Review of Research in Open and Distributed Learning. 2014;15(6).
  51. Aptivate, The Digital Agency for International Development. Web Design Guidelines for Low Bandwidth [cited 2019 Feb 09]. Available from: http://www.aptivate.org/webguidelines/Multimedia.html.
  52. Vyas R, Albright S, Walker D, Zachariah A, Lee M. Clinical training at remote sites using mobile technology: an India–USA partnership. Distance Education. 2010;31(2):211–26.
  53. Missen C. Internet in a box: the eGranary digital library serves scholars lacking internet bandwidth. New Review of Information Networking. 2005;11(2):193–9.
  54. Miller M, Lu M-Y, Thammetar T. The residual impact of information technology exportation on Thai higher education. Educational Technology Research and Development. 2004;52(1):92–6.
  55. Lucas H, Kinsman J. Distance- and blended-learning in global health research: potentials and challenges. Glob Health Action. 2016;9:33429. Epub 2016/10/12. pmid:27725082; PubMed Central PMCID: PMC5056981.
  56. Stuart EA, Bradshaw CP, Leaf PJ. Assessing the generalizability of randomized trial results to target populations. Prev Sci. 2015;16(3):475–85. Epub 2014/10/14. pmid:25307417; PubMed Central PMCID: PMC4359056.
  57. Des Jarlais DC, Lyles C, Crepaz N, TREND Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004;94(3):361–6. Epub 2004/03/05. pmid:14998794; PubMed Central PMCID: PMC1448256.
  58. Green LW. Making research relevant: if it is an evidence-based practice, where's the practice-based evidence? Fam Pract. 2008;25 Suppl 1:i20–4. Epub 2008/09/17. pmid:18794201.
  59. Handley MA, Schillinger D, Shiboski S. Quasi-experimental designs in practice-based research settings: design and implementation considerations. The Journal of the American Board of Family Medicine. 2011;24(5):589–96. pmid:21900443.
  60. Sullivan GM. Getting off the "gold standard": randomized controlled trials and education research. J Grad Med Educ. 2011;3(3):285–9. Epub 2012/09/04. pmid:22942950; PubMed Central PMCID: PMC3179209.
  61. Harden RM, Grant J, Buckley G, Hart IR. BEME Guide No. 1: Best Evidence Medical Education. Med Teach. 1999;21(6):553–62. Epub 1999/01/01. pmid:21281174.
  62. Ware JH, Hamel MB. Pragmatic trials—guides to better patient care? N Engl J Med. 2011;364(18):1685–7. Epub 2011/05/06. pmid:21542739.
  63. Sullivan GM. A primer on the validity of assessment instruments. J Grad Med Educ. 2011;3(2):119–20. Epub 2012/06/02. pmid:22655129; PubMed Central PMCID: PMC3184912.
  64. Suresh K, Chandrashekara S. Sample size estimation and power analysis for clinical research studies. J Hum Reprod Sci. 2012;5(1):7–13. Epub 2012/08/08. pmid:22870008; PubMed Central PMCID: PMC3409926.