Abstract
The study aimed to illustrate the constructs and test the psychometric properties of an instrument of health literacy competencies (IOHLC) for health professionals. A multi-phase questionnaire development method was used to develop the scale. The categorization of the knowledge and practice domains achieved consensus through a modified Delphi process. To reduce the number of items, the 92-item IOHLC was psychometrically evaluated through internal consistency, Rasch modeling, and two-stage factor analysis. In total, 736 practitioners, including nurses, nurse practitioners, health educators, case managers, and dietitians, completed the 92-item IOHLC online from May 2012 to January 2013. The final version of the IOHLC comprised 9 knowledge items and 40 skill items forming 9 dimensions, with good model fit, explaining 72% of total variance. All domains had acceptable internal consistency and discriminant validity. The tool in this study is the first to verify health literacy competencies rigorously, and through psychometric testing the 49-item IOHLC demonstrated adequate reliability and validity. The IOHLC may serve as a reference for the theoretical and in-service training of Chinese-speaking health professionals’ health literacy competencies.
Citation: Chang L-C, Chen Y-C, Liao L-L, Wu FL, Hsieh P-L, Chen H-J (2017) Validation of the instrument of health literacy competencies for Chinese-speaking health professionals. PLoS ONE 12(3): e0172859. https://doi.org/10.1371/journal.pone.0172859
Editor: Christophe Leroyer, Universite de Bretagne Occidentale, FRANCE
Received: November 13, 2016; Accepted: February 12, 2017; Published: March 6, 2017
Copyright: © 2017 Chang et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and hosted at the Dryad Repository. Data hosted at Dryad can be found at the following DOI: 10.5061/dryad.m3d7f.
Funding: This project was funded by the Ministry of Science and Technology, Taiwan (NSC 100-2511-S-255-001-MY2) and received financial assistance from Chang Gung Memorial Hospital (BMRP 978).
Competing interests: The authors have declared that no competing interests exist.
Introduction
The World Health Organization [1] defined health literacy as the cognitive and social skills that determine the motivation and ability of individuals to gain access to, understand, and use information in ways that promote and maintain good health. The important effects of an adequate level of health literacy on the disease management process have been observed in various studies [2–4] and drew the attention of health professionals towards successful care for people with chronic disease.
Several nationwide health literacy surveys have reported moderate to high prevalence of low health literacy (LHL) among adults, which is bound to result in healthcare problems [5–6]. Many reports have highlighted that low health literacy is associated with poor communication between patients and healthcare providers and with poor health outcomes, including increased hospitalization rates [7], poor medication adherence [4], and low effective self-care behavior [2]. The landmark report of the Institute of Medicine (IOM) [8] explicitly recommends that health professionals receive training in tailored patient education programs and in communicating effectively with patients with limited health literacy. Health education efforts could enhance people’s health literacy, thereby encouraging health-promoting behaviors [9].
Unfortunately, previous studies on health professionals have revealed insufficient knowledge of health literacy and misunderstanding of the contributions of health literacy towards patient health outcomes [10]. Health professionals’ limited ability to improve patients’ health literacy skills may affect patients’ capacity to better understand their illnesses and instructions on how to manage their health conditions [11].
Identification of key health literacy competencies among health professionals is fundamental for continued professional education. The education-based framework of a validated instrument could allow us to develop curricular interventions aimed at improving patients’ capacity for health literacy directly related to important self-care outcomes [12]. However, previous studies have yielded disappointing findings relating to health professionals’ health literacy knowledge or skills; these include the inability to correctly identify patients with LHL [13], lack of awareness of issues related to health literacy, including the impact of inadequate health literacy on patient care [14], no knowledge of health literacy resources, misconception of health literacy measurement, and discordance in the estimation of patients’ literacy levels [15].
There were differences in the self-reported use of various health literacy-oriented communication techniques among physicians, nurses, and pharmacists [16]. To date, the majority of measurements of health literacy competencies have targeted physicians’ communication skills [17], with minimal emphasis on other core abilities related to the delivery of an educational program for other health professionals [18]. The first consensus study, conducted by Coleman, identified a comprehensive set of health literacy practices and measurable educational competencies (i.e., knowledge, skills, and attitudes) among students and health professionals. Further work is required to determine the suitability of all 94 items, and educational research is required to validate competencies in health literacy practice and to determine which should be taught, which healthcare professionals should receive training, which settings should be used, and which teaching methods should be adopted to improve patient-centered outcomes [12].
Conceptualizations of health literacy competence have been developed from the perspective of researchers, health professionals, and literacy experts, and using face-to-face interviews with patients. These suggest that the categories of competencies in health literacy or other health communication constructs (e.g., cross-cultural communication, general interviewing skills, written communication, and shared decision making) are incomplete [12]. A scientific process for selecting suitable recommendations to include or exclude in the instrument to assess health professionals’ health literacy competencies is a crucial foundation for ensuring instrument validity. Thus, this paper describes the psychometric testing of the IOHLC, due to the importance of incorporating health literacy skills into healthcare providers’ education practices, as proclaimed by the WHO [19].
Materials and methods
Participants
This study formed part of a larger study aiming to develop an online competence module of health literacy for health professionals. In the current study, we report on the results of instrument development, which was conducted between May 2012 and January 2013. We invited three private medical centers in northern Taiwan and three private regional hospitals in southern Taiwan to participate in this study. Participants included non-physician healthcare professionals (nurses, case managers, health educators, dietitians, and pharmacists) who had been working for at least six months. Participants who were expected to leave within a month were excluded from this study.
Procedures
A convenience sample of 812 healthcare professionals completed self-administered online questionnaires. Information packets were distributed by hospital officials at selected sites to potential participants. The packets contained an information sheet, an informed consent form, and a reply envelope addressed to the primary investigator. Participants who agreed to participate in the study and returned the written informed consent forms received a link to the online questionnaires at the e-mail addresses they provided. Completion of the online questionnaires took approximately 15–20 minutes. Online questionnaires were built into a learning platform and participants were required to use their e-mail addresses and dates of birth to set up log-in profiles. On this basis, we could ensure that participants personally responded to the questionnaire. Permission to conduct the study was obtained from the Institutional Review Board at Chang Gung Memorial Hospital (no: 100-4569B, approval date: 2012.3.16).
Measure
A comprehensive set of measurable competencies of health literacy was established by using a modified Delphi technique with a panel of 24 experts who had experience in providing training on health literacy patient education [20]. Twenty-four experts (12 academic scholars in health literacy and 12 professionals with training related to health literacy practice) were invited to participate in the second to fourth rounds of data collection in the Delphi study. Moreover, consent forms were sent to the experts and returned to us prior to the commencement of our study.
The knowledge domain comprises 27 true/false items (12 were knowledge-specific items and 15 were related to attributes of patients with LHL). An incorrect answer for each item is assigned a score of 0 and a correct answer is assigned a score of 1. Three of the items (K1, K2, and A9) are reverse-worded and reverse-scored. The total score ranges from 0 to 27. Higher scores indicate higher levels of health literacy knowledge.
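The scoring rule above can be illustrated with a minimal Python sketch. The item names and the answer key below are hypothetical, not the actual IOHLC key; the reverse-worded items simply have "false" as their keyed correct answer.

```python
# Reverse-worded items (K1, K2, A9); in the key below their keyed answer is False.
REVERSE_ITEMS = {"K1", "K2", "A9"}

def score_knowledge(responses, answer_key):
    """Return a respondent's total knowledge score (0 to 27).

    responses  -- dict mapping item id to the respondent's True/False answer
    answer_key -- dict mapping item id to the keyed correct True/False answer
    """
    total = 0
    for item, keyed in answer_key.items():
        # Unanswered items score 0; each match with the key scores 1.
        total += int(responses.get(item) == keyed)
    return total

# Hypothetical three-item example: K1 and A9 are reverse-worded, so their
# keyed answer is False.
key = {"K1": False, "K3": True, "A9": False}
print(score_knowledge({"K1": False, "K3": True, "A9": True}, key))  # 2
```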
The skills domain consisted of 65 items related to health literacy practice. Respondents were required to indicate the level of agreement with items such as, “Being able to use plain language, instead of medical jargon.” A five-point Likert-scale was used for items in this domain, ranging from highly disagree (1) to highly agree (5). Higher scores indicated better health literacy skills.
The demographic data obtained from respondents included age, education level, occupational role at the clinic, years of experience in medicine/health care, and having ever heard of the term “health literacy.” We recruited 20 nurses for participation in the pilot survey before the investigation.
Statistical analysis
Item response theory (IRT) analysis.
Rasch analysis was used to assess the psychometric properties of the knowledge domain using the Andrich rating scale model [21] with Winsteps software (version 3.74.0) [22]. Two values are used throughout the analysis: logit measures and fit statistics. The logit (log-odds unit) is the natural logarithm of the odds of a participant succeeding at a specific task; by convention, 0 logits is anchored at the mean item difficulty. Item-level chi-square fit statistics (p > .05) and item difficulty and person ability estimates between -2.0 and +2.0 logits were used as the criteria for retaining items.
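For readers unfamiliar with the logit metric, the dichotomous Rasch model expresses the probability of a correct response as a function of the difference between person ability and item difficulty, both in logits. A minimal sketch of that formula (an illustration only, not the Winsteps estimation procedure itself):

```python
import math

def rasch_probability(theta, b):
    """P(correct) under the dichotomous Rasch model, where theta is person
    ability and b is item difficulty, both in logits:
    exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty the probability is exactly 0.5; 0 logits is
# conventionally anchored at the mean item difficulty.
print(rasch_probability(0.0, 0.0))  # 0.5
```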
Exploratory factor analysis (calibration sample).
The validity of the 65 items of the skills domain was established through exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The EFA and CFA were performed using a calibration sample and a validation sample, respectively. A total of 736 participants were equally divided into two subsamples using the SPSS random case selection procedure (Version 15.0; SPSS Inc., Chicago, IL). The random selection procedure in SPSS was repeated until the homogeneity of baseline characteristics (e.g., education level, professional position, and tenure) between groups was confirmed (p > .5). The first sample, termed the “calibration sample,” was used to explore the factor structure. The second sample, termed the “validation sample,” was used to validate the factor structure constructed through the calibration sample.
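The repeated random-split procedure can be sketched as follows. This is an illustrative Python analogue, with the SPSS homogeneity test (p > .5) abstracted into a caller-supplied predicate; the function name and interface are hypothetical.

```python
import random

def split_until_homogeneous(participants, is_homogeneous, max_tries=1000, seed=None):
    """Randomly halve `participants` until `is_homogeneous(a, b)` holds.

    In the study, homogeneity of baseline characteristics was tested in SPSS
    (p > .5); here the test is injected as a predicate on the two halves.
    """
    rng = random.Random(seed)
    people = list(participants)
    half = len(people) // 2
    for _ in range(max_tries):
        rng.shuffle(people)
        a, b = people[:half], people[half:]
        if is_homogeneous(a, b):
            return a, b
    raise RuntimeError("no homogeneous split found within max_tries")

# Trivial example: accept the first split of 10 participant ids.
a, b = split_until_homogeneous(range(10), lambda a, b: True, seed=1)
print(len(a), len(b))  # 5 5
```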
With the calibration sample, principal component EFA was performed on the 65 items to extract the major contributing factors, and Varimax rotation was used to recognize relationships between items and common factors. The number of factors was determined using the eigenvalue-greater-than-one rule. Factor loadings greater than 0.50 were regarded as significantly relevant, and items with cross-loadings greater than 0.50 were deleted [23]. All item deletions were conducted one by one and the EFA model was re-specified following each deletion. The internal consistency of each construct was determined through Cronbach’s alpha; a value of 0.70 or higher was considered acceptable [24].
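Cronbach's alpha, used here as the internal-consistency criterion, can be computed directly from the item variances and the variance of the total score. A self-contained sketch:

```python
def variance(xs):
    """Sample variance (denominator n - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(data):
    """Cronbach's alpha for `data`, a list of respondents, each a list of
    item scores: alpha = k/(k-1) * (1 - sum(item variances) / total variance)."""
    k = len(data[0])
    item_vars = [variance([row[i] for row in data]) for i in range(k)]
    total_var = variance([sum(row) for row in data])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Two perfectly correlated items give alpha = 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```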
CFA (validation sample).
The validation sample was used to validate and modify the factor structure of the health literacy scale under development when the model did not fit the data. In the process of model modification, items that correlated too highly with others were deleted by examining the modification index (MI) of the additional specification of error covariance. A larger MI (e.g., >50) between two items indicated that those two items measured the same thing, thus necessitating deletion of one of the items, according to the parsimony principle [25]. The CFA model was modified until most of the model fit indices met the criteria.
The goodness of fit of the model was assessed using absolute fit indices, relative fit indices, and parsimony fit indices [23]. The absolute fit indices included the goodness of fit index (GFI), adjusted GFI index (AGFI), standardized root mean squared residual (SRMR), and the root mean square error of approximation (RMSEA). The relative fit indices were assessed through the normed fit index (NFI), non-normed fit index (NNFI), and the comparative fit index (CFI). Finally, the parsimony fit indices were determined using the parsimony normed fit index (PNFI), parsimony comparative fit index (PCFI), and the likelihood ratio (χ2/df) [26].
The convergent validity of the items and factor structures was determined through standardized factor loadings (values of 0.50 or higher were considered acceptable) and average variance extracted (AVE) (values of 0.50 or higher were considered acceptable) [23]. Convergent reliability was assessed through construct reliability (CR; values of .70 or higher were considered acceptable). The AVE and correlation matrices among the latent constructs were used to establish discriminant validity. The square root of the AVE of each construct was required to be higher than the correlation coefficients between that construct and the other constructs [27].
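The AVE, construct reliability, and Fornell–Larcker checks described above reduce to simple formulas over the standardized loadings and inter-construct correlations; a brief sketch (the threshold values follow the text):

```python
def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def construct_reliability(loadings):
    """Composite (construct) reliability, CR = (sum L)^2 / ((sum L)^2 + sum(1 - L^2))."""
    s = sum(loadings)
    return s ** 2 / (s ** 2 + sum(1 - l ** 2 for l in loadings))

def discriminant_ok(ave_value, inter_construct_rs):
    """Fornell-Larcker criterion: sqrt(AVE) must exceed every correlation
    between this construct and the others."""
    return ave_value ** 0.5 > max(abs(r) for r in inter_construct_rs)

# Illustrative two-item construct with standardized loadings of .80:
print(ave([0.8, 0.8]))                    # 0.64 (>= .50, acceptable)
print(discriminant_ok(0.64, [0.5, 0.7]))  # True (sqrt(.64) = .80 > .70)
```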
Results
Participant characteristics
Of the 746 surveys returned from the 6 hospitals, we included in the analysis 736 (99%) participants who had complete and valid data for all variables. All participants in this study were female. Table 1 describes the demographic characteristics of the study sample. The participants’ mean age was 30.71 ± 6.65 years (range = 20–60 years), with an average of 6.94 ± 6.25 years working in medicine/health care. The majority of participants (28.4%) had 1–3 years’ working experience, followed by those with 10–15 years’ experience (21.9%). Participants’ professional roles at the hospital were registered nurse (78.5%), nurse practitioner (11.4%), dietitian (4.5%), case manager (2.7%), health educator (2.4%), and health manager (0.5%). In total, 57.7% had a university education and 62.8% had never heard of the term “health literacy” (see Table 1).
IRT calibration for 27 items of the knowledge domain.
In the IRT analysis of the 27 items of the knowledge domain, 16 items identified as misfitting by Rasch fit statistics or as redundant in measurement (p < .05) were deleted from the final questionnaire (see Table 2).
The estimated item difficulty for retained items ranged from -2.00 logits (least difficult) to +2.64 logits (most difficult). Among the remaining nine items, seven were knowledge items (K2, K4, K5, K7, K8, K9, and K10) and two were LHL items (A1 and A3). The rate of correct answers for each item ranged from 37.6% to 96.8%.
EFA results for the calibration sample.
EFA was performed sequentially five times, until no item loaded below .50 and no cross-loadings remained. Across the five rounds, four items were deleted for loading below .50; no cross-loadings were found in this phase, leaving 61 items that met the specified criteria.
The results obtained for descriptive statistics, EFA, and internal consistency (Cronbach’s α coefficient) are summarized in Table 3. The nine extracted factors represented 72.0% of the variance in the 61 items. These were “design teaching plan for LHL,” “simple and concrete teaching,” “build a friendly environment,” “use easy-to-use resources,” “life-oriented teaching,” “check for understanding,” “encourage clients to ask questions,” “self-designed materials to clients,” and “interdisciplinary collaboration.” Cronbach’s alphas greater than .80 were obtained for each subscale and that of the total scale was .97.
CFA results for the validation sample.
As shown in Table 4, the result of the initial 61-item CFA revealed that none of the model fit indices met the criteria, except the PNFI and PCFI. This indicated that the CFA model had to be modified. In the process of model modification, a total of 21 items were deleted sequentially (one by one) because they measured constructs similar to those measured by other items. After the removal of these 21 items, most of the model fit indices were acceptable, except for the GFI and the AGFI. Finally, the model fit indices of the modified model were generally acceptable: χ2/df = 1.90, NNFI = .94, CFI = .94, SRMR = .048, and RMSEA = .049 (see Table 4).
As Table 5 illustrates, all standardized factor loadings exceeded the threshold of .50 and the AVE of each construct ranged from .58 to .82, which indicated excellent convergent validity. The construct reliability of all the constructs was greater than .70, which indicated acceptable convergent reliability.
As shown in Table 6, the majority of square roots of the AVE of each construct (values in the diagonal elements) were greater than the corresponding inter-construct correlations (values below the diagonal). This suggested that the result supported the discriminant validity of the IOHLC.
Discussion
Internal consistency reliability
The findings of this study showed that the IOHLC meets internal consistency reliability standards. After calibration and validation, the final 49-item version of the IOHLC also showed acceptable internal reliability, with Cronbach’s alpha > .80.
Psychometric properties of the instrument of health literacy competence
In this study, we used the Rasch model and two-stage factor analysis to verify the knowledge dimension and the skill dimension of the IOHLC. The IOHLC, which consisted of 9 knowledge items and 40 skill items making up 9 dimensions, was verified as having good convergent validity, explaining 72% of total variance. Therefore, this scale has a good theoretical basis and can be used to design courses to train professionals in health literacy.
The number of items in existing instruments measuring health literacy knowledge for health professionals ranges from 6 to 29; these items pertain to health literacy knowledge in general, and specifically to disadvantageous factors for vulnerable populations, prevalence of LHL, and consequences for various countries [27–28]. As knowledge and skills were measured in different domains, we adopted Rasch analysis, which entailed examining the adequacy of the 27 items in the knowledge domain based on difficulty and item goodness-of-fit statistics (p > .05). However, no tools for health literacy competencies have previously been verified through IRT, so there were limited opportunities to compare the current results with others.
This study validated the IOHLC with 40 skill items related to patient education. According to Schulz and Nakamoto [29], the concepts of health literacy practices and communication skills are difficult to distinguish. Literature positioning health literacy as a communication skill has treated health literacy practices as written and oral communication [30]. This study did not follow the categorization in the majority of previous studies on health literacy skills, and did not develop items based solely on the communication level.
Conversely, this study combined interview results relating to practices within patient education planning to inform categorization. This is an important contribution by this study in relation to the identification of health literacy competencies. The scale developed in this study is aimed at assessing the health literacy competencies of professionals, and professionals are expected to improve clients’ health literacy through patient education. Since health literacy is not only about communication skills, health literacy competencies should also include assessment and confirmation of comprehension [29]. Among the nine factors in this study, five were related to communication, while the other four included a supportive environment and assessment and confirmation of patient comprehension, which were also mentioned in literature [30]. In our scale, these concepts were operationalized into measurable items, to enable comprehensive representation of the health literacy competencies of health professionals.
Coleman et al. [12] initially identified a set of health literacy practices and educational competencies. The current study collected a wide range of items from the literature and interview results pertaining to the concept of health literacy. The scale consisted of 65 skill items before elimination, slightly more than the 59 items by Coleman (27 skill items + 32 practice items). The greatest difference between this study and that by Coleman et al. is that the latter divided health literacy competencies into potential educational competencies and potential practices. Although some items in this study included similar skill or practice items, the nine factors pertaining to skill items in this study were based on patient education planning. These skill items can be used as indices of learning competencies.
In terms of contributions to future application, not only can this scale be used comprehensively for regular assessment of health professionals’ health literacy competencies, but it can also be used to design health literacy training through the incorporation of contents into education goals. Unlike procedures in tools of health literacy practices, education, or competencies, this study adopted procedures that are similar to those in most scale construction research. However, the validation in this study may be used to inform inferences about the reliability and validity of tools relating to health literacy knowledge and skills mentioned in literature and used in practice.
The descriptive results of the IOHLC
The items established in this study were similar to those in other studies, although scores varied across item types. In terms of acceptable scores on knowledge items, lower scores were obtained for “assessment of health literacy” and “extra support for patients with LHL.” Higher scores were obtained on items relating to the definition of health literacy and risk factors of LHL. This is similar to the results of a study conducted by Jukkala et al. [14] on the health literacy knowledge of professionals, wherein lower scores were obtained for LHL outcomes and assistance needed.
Among the skills, lower scores were obtained on items related to designing educational materials, such as educational materials for LHL, educational materials applicable to different groups, and Internet and multimedia educational materials. This is similar to the results of Cafiero’s [18] study on nurses, whereby audio-visual and computer-aided teaching tools were applied less frequently. This is also similar to the research results obtained in Howard, Jacobson, and Kripalani’s [31] study on physicians and in Mackert’s [32] study on pharmacists, wherein the application of drawing visual aids in LHL educational materials was relatively low. Schlichting et al. [33] found that providing LHL educational materials was a skill that was not frequently applied by physicians, nurse practitioners, dentists, and dental hygienists. Since the measurement methods and goals were not completely identical to those in previous studies, we could not determine whether the different results were due to differences in health literacy competencies. To ascertain this, future studies should further measure different health professionals’ health literacy competencies, using the IOHLC-PE.
Limitations
This study continued research on the health literacy competencies of professionals, which were determined by consensus through the Delphi method. Then, IRT and two-stage factor analysis were employed to verify health literacy competence tools for professionals, which were oriented towards patient health education. There are no similar existing studies, as previous studies on health literacy competency were more dispersed and did not have a consistent framework for reference. Although this study developed the first structural tool for health literacy competency, it still has limitations relating to professional groups used, the sampling method, and language.
First, owing to time and resource constraints, this study sampled non-physician health professionals in Taiwan using hospitals as the sampling unit, and most of the questionnaires were returned by nursing professionals. Thus, the health literacy competencies reflected in the results may be limited in their application to other professional groups. Secondly, due to convenience sampling, only hospitals in the southern and northern regions of Taiwan were included, and remote areas were not covered. Thus, the results cannot be generalized to hospitals of other levels or in other regions. Finally, the original scale developed in this study is in Chinese, and the items on health literacy competencies were developed based on practice content. Although the scale was validated through factor analysis and fit the hypothesized structure after being translated into English, it may only be applicable to Chinese-speaking professionals.
Conclusions
The tool in this study is the first to validate health literacy competencies in a rigorous way. Through factor analysis and IRT, 92 items were simplified into the 49-item IOHLC, which is oriented toward patient education and has construct validity, including the knowledge level and 9 sub-dimensional skill domains. It provides future Chinese-speaking professionals with a framework and guidance for designing health literacy courses.
Practice implications
Chinese society only recently embarked on health literacy research. Because LHL has an increasingly important influence on health, the education and training of professionals has to keep abreast of developments. The reliable and valid IOHLC may serve as a reference for theoretical and in-service training on health literacy competencies, with courses designed based on the scale’s framework. Directors of medical institutions can also use this tool to regularly assess the health literacy competencies of professionals, so as to ensure the quality of health care provided. Health literacy is closely related to patient education. Because actual practices of different healthcare professionals may differ, we recommend that future studies investigate different groups, in order to analyze the performance of this scale when used with different patient-education professionals.
Acknowledgments
We would like to thank Chang Gung Medical Education Research Centre, CG-MERC for editing of the manuscript.
Author Contributions
- Conceptualization: LCC.
- Data curation: FLW PLH.
- Formal analysis: LCC.
- Funding acquisition: HJC.
- Investigation: YCC LLL.
- Methodology: HJC LCC YCC LLL FLW PLH.
- Project administration: YCC LLL.
- Resources: FLW PLH.
- Software: LCC HJC.
- Supervision: LCC.
- Validation: YCC LLL.
- Visualization: FLW PLH.
- Writing – original draft: LCC.
- Writing – review & editing: HJC.
References
- 1. World Health Organization: Division of Health Promotion, Education and Communications, Health Education and Health Promotion Unit. Health promotion glossary. Geneva: World Health Organization; 1998.
- 2. Heijmans M, Waverijn G, Rademakers J, van der Vaart R, Rijken M. Health literacy: functional, communicative and critical health literacy of chronic disease patients and their importance for self-management. Patient Educ Couns. 2015;98: 41–48. pmid:25455794
- 3. Mantwill S, Schulz PJ. Low health literacy associated with higher medication costs in patients with type 2 diabetes mellitus: evidence from matched survey and health insurance data. Patient Educ Couns. 2015;98(15): 1625–1630.
- 4. Zhang NJ, Terry A, McHorney CA. Impact of health literacy on medication adherence: a systematic review and meta-analysis. Ann Pharmacother. 2014;48(6): 741–751. pmid:24619949
- 5. Carmona RH. Health literacy: a national priority. J Gen Intern Med. 2006;21(8): 803.
- 6. Kutner M, Greenberg E, Jin Y, Paulsen C. The health literacy of America’s adults: results from the 2003 National Assessment of Adult Literacy (NCES 2006–483). Washington, DC: National Center for Education Statistics, US Department of Education; 2006.
- 7. Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, Crotty K. Low health literacy and health outcomes: an updated systematic review. Ann Intern Med. 2011;155(2): 97–107. pmid:21768583
- 8. Institute of Medicine. Health literacy: a prescription to end confusion. Washington, DC: National Academic Press; 2004.
- 9. Nutbeam D. Health literacy as a public health goal: a challenge for contemporary health education and communication strategies into the 21st century. Health Promot Int. 2010;15(3), 259–267.
- 10. Macabasco-O’Connell A, Fry-Bowers EK. Knowledge and perceptions of health literacy among nursing professionals. J Health Commun. 2011;16(3): 295–307.
- 11. Lambert M, Luke J, Downey B, Crengle S, Kelaher M, Reid S, et al. Health literacy: health professionals’ understandings and their perceptions of barriers that Indigenous patients encounter. BMC Health Serv Res. 2014;14(1): 614.
- 12. Coleman CA, Hudson S, Maine LL. Health literacy practices and educational competencies for health professionals: a consensus study. J Health Commun. 2013;18(Suppl 1): 82–102.
- 13. Ali NK, Ferguson RP, Mitha S. Do medical trainees feel confident communicating with low health literacy patients? J Community Hosp Intern Med. 2014;4(2). doi: 10.3402/jchimp.v4.22893.
- 14. Jukkala A, Deupree JP, Graham S. Knowledge of limited health literacy at an academic health center. J Contin Educ Nurs. 2009;40(7): 298–302; quiz 303–294, 336. pmid:19639850
- 15. Turner T, Cull WL, Bayldon B, Klass P, Sanders LM, Frintner MP, et al. Pediatricians and health literacy: descriptive results from a national survey. Pediatrics. 2009;124(Suppl 3): S299–S305.
- 16. Schwartzberg JG, Cowett A, VanGeest J, Wolf MS. Communication techniques for patients with low health literacy: a survey of physicians, nurses, and pharmacists. Am J Health Behav. 2007;31(Suppl 1): S96–S104.
- 17. Coleman C. Teaching health care professionals about health literacy: a review of the literature. Nurs Outlook. 2011;59(2): 70–78. pmid:21402202
- 18. Cafiero M. Nurse practitioners’ knowledge, experience, and intention to use health literacy strategies in clinical practice. J Health Commun. 2013;18(Suppl 1): 70–81.
- 19. Parnell TA. Health literacy in nursing: providing person-centered care. New York: Springer Publishing Co Inc; 2015.
- 20. Chang LC, Chen YC, Wu FL, Liao LL. Exploring health literacy competencies toward health education program for healthcare professionals: a Delphi study. BMJ Open. 2016;10: e3120.
- 21. Andrich D. A rating formulation for ordered response categories. Psychometrika. 1978;43(4): 561–573.
- 22. Linacre JM. A user’s guide to WINSTEPS/MINISTEP: Rasch-model computer programs. 2009. Available from: http://www.winsteps.com/a/Winsteps-ManualPDF.zip.
- 23. Hair JF, Black WC, Babin BJ, Anderson RE, Tatham RL. Multivariate data analysis. 6th ed. New Jersey: Prentice-Hall; 2006.
- 24. DeVellis RF. Scale development: theory and applications. Newbury Park, CA: Sage; 1998.
- 25. Chang WH. How to write and submit an academic paper with SEM. Taichung, Taiwan: Tingmao; 2011.
- 26. Schumacker RE, Lomax RG. A beginner’s guide to structural equation modeling. 2nd ed. Mahwah, New Jersey: Lawrence Erlbaum Associates; 2004.
- 27. Fornell C, Larcker D. Evaluating structural equation models with unobservable variables and measurement error. J Mark Res. 1981;18(1): 39–50.
- 28. Pagels P, Kindratt T, Arnold D, Brandt J, Woodfin G, Gimpel N. Training family medicine residents in effective communication skills while utilizing promotoras as standardized patients in OSCEs: a health literacy curriculum. Int J Family Med. 2015;129187. pmid:26491565
- 29. Schulz PJ, Nakamoto K. Health literacy and patient empowerment in health communication: the importance of separating conjoined twins. Patient Educ Couns. 2013;90(1): 4–11. pmid:23063359
- 30. Peng J, Valpreda M, Lechelt K. Health literacy: improving communication between health care practitioners and older adults. Canadian Geriatrics Society Journal of CME. 2015;5(2): 1–30.
- 31. Howard T, Jacobson KL, Kripalani S. Doctor talk: physicians’ use of clear verbal communication. J Health Commun. 2013;18(8): 991–1001. pmid:23577746
- 32. Mackert M. Health literacy knowledge among direct-to-consumer pharmaceutical advertising professionals. Health Commun. 2011;26(6): 523–533.
- 33. Schlichting JA, Quinn MT, Heuer LJ, Schaefer CT, Drum ML, Chin MH. Provider perceptions of limited health literacy in community health centers. Patient Educ Couns. 2007;69(1–3): 114–120. pmid:17889494