Validation of the instrument of health literacy competencies for Chinese-speaking health professionals

The study aimed to illustrate the constructs and test the psychometric properties of an instrument of health literacy competencies (IOHLC) for health professionals. A multi-phase questionnaire development method was used to develop the scale. The categorization of the knowledge and practice domains achieved consensus through a modified Delphi process. To reduce the number of items, the 92-item IOHLC was psychometrically evaluated through internal consistency, Rasch modeling, and two-stage factor analysis. In total, 736 practitioners, including nurses, nurse practitioners, health educators, case managers, and dietitians, completed the 92-item IOHLC online from May 2012 to January 2013. The final version of the IOHLC comprised 9 knowledge items and 40 skill items organized into 9 dimensions, with good model fit, explaining 72% of total variance. All domains had acceptable internal consistency and discriminant validity. The tool in this study is the first to verify health literacy competencies rigorously, and through psychometric testing, the 49-item IOHLC demonstrated adequate reliability and validity. The IOHLC may serve as a reference for the theoretical and in-service training of Chinese-speaking health professionals' health literacy competencies.


Introduction
The World Health Organization [1] defined health literacy as the cognitive and social skills that determine the motivation and ability of individuals to gain access to, understand, and use information in ways that promote and maintain good health. The important effects of an adequate level of health literacy on the disease management process have been observed in various studies [2][3][4] and have drawn the attention of health professionals toward successful care for people with chronic disease.
Several nationwide health literacy surveys have reported a moderate to high prevalence of low health literacy (LHL) among adults, which is bound to result in healthcare problems [5][6].

Participants
We invited three private medical centers in northern Taiwan and three private regional hospitals in southern Taiwan to participate in this study. Participants included non-physician healthcare professionals, case managers, health educators, dietitians, and pharmacists who had been working for at least six months. Participants who were expected to leave within a month were excluded from this study.

Procedures
A convenience sample of 812 healthcare professionals completed self-administered online questionnaires. Information packets were distributed by hospital officials at selected sites to potential participants. The packets contained an information sheet, an informed consent form, and a reply envelope addressed to the primary investigator. Participants who agreed to participate in the study and returned the written informed consent forms received a link to the online questionnaires at the e-mail addresses that they provided. Completion of the online questionnaires took approximately 15-20 minutes. The online questionnaires were built into a learning platform, and participants were required to use their e-mail addresses and dates of birth to set up their log-in profiles. On this basis, we could ensure that participants personally responded to the questionnaire. Permission to conduct the study was obtained from the Institutional Review Board at Chang Gung Memorial Hospital (no: 100-4569B, approval date: 2012.3.16).

Measure
A comprehensive set of measurable competencies of health literacy was established using a modified Delphi technique with a panel of 24 experts who had experience in providing training on health literacy in patient education [20]. The 24 experts (12 academic scholars in health literacy and 12 professionals with training related to health literacy practice) were invited to participate in the second to fourth rounds of data collection in the Delphi study. Consent forms were sent to the experts and returned to us prior to the commencement of the study.
The knowledge domain comprises 27 true/false items (12 were knowledge-specific items and 15 were related to attributes of patients with LHL). An incorrect answer for each item is assigned a score of 0 and a correct answer is assigned a score of 1. Three of the items (K1, K2, and A9) are reverse-worded and reverse-scored. The total score ranges from 0 to 27. Higher scores indicate higher levels of health literacy knowledge.
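The scoring rule above can be sketched as follows. This is a minimal illustration with hypothetical item identifiers and keyed answers; the actual IOHLC answer key is not reproduced here.

```python
# Minimal sketch of the knowledge-domain scoring described above.
# Item ids and keyed answers are hypothetical; per the text, items
# K1, K2, and A9 are reverse-worded and therefore reverse-scored.
def score_knowledge(responses, key, reverse_items=("K1", "K2", "A9")):
    """responses/key: dicts mapping item id -> True/False.

    In this sketch `key` holds the literal truth value of each
    statement; for reverse-worded items the respondent earns the
    point by marking the opposite of the statement's literal value.
    """
    total = 0
    for item, answer in responses.items():
        match = (answer == key[item])
        # Reverse-worded items: credit is given for the opposite response.
        if item in reverse_items:
            match = not match
        total += int(match)
    return total  # 0 to 27 for the full 27-item domain
```

A respondent's total is simply the count of credited items, so higher scores indicate higher levels of health literacy knowledge, as stated above.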
The skills domain consisted of 65 items related to health literacy practice. Respondents were required to indicate their level of agreement with items such as, "Being able to use plain language, instead of medical jargon." A five-point Likert scale was used for items in this domain, ranging from highly disagree (1) to highly agree (5). Higher scores indicated better health literacy skills.
The demographic data obtained from respondents included age, education level, occupational role at the clinic, years of experience in medicine/health care, and having ever heard of the term "health literacy." We recruited 20 nurses for participation in the pilot survey before the investigation.

Statistical analysis
Item response theory (IRT) analysis. Rasch analysis was used to assess the psychometric properties of the knowledge domain using the Andrich rating scale model [21] with Winsteps software (version 3.74.0) [22]. Two values are used throughout the analysis: logit measures and fit statistics. The logit (or log-odds unit) is the natural logarithm of the odds of a participant succeeding at a specific task. Conventionally, 0 logits is set at the mean item difficulty. Item-level chi-square fit statistics (p > .05) and item difficulty and person ability within the range of -2.0 to +2.0 logits were used as the criteria for item selection.
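To illustrate the logit scale used here (a generic sketch, not the authors' Winsteps estimation), the dichotomous Rasch model expresses the probability of a correct response through the difference between person ability θ and item difficulty b, both in logits:

```python
import math

def rasch_probability(theta, b):
    """P(correct) under the dichotomous Rasch model:
    exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def logit(p):
    """Natural logarithm of the odds of success."""
    return math.log(p / (1.0 - p))

# When ability equals item difficulty, the success probability is .5,
# i.e., the log-odds (logit) of success are 0.
p = rasch_probability(theta=1.2, b=1.2)  # -> 0.5
```

A positive logit difference (ability above difficulty) raises the success probability above .5, which is why item difficulty and person ability can be compared on the same scale.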
Exploratory factor analysis (calibration sample). The validity of the 65 items of the skills domain was established through exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The EFA and CFA were performed on a calibration sample and a validation sample, respectively. The 736 participants were divided equally into two subsamples using the SPSS random case selection procedure (Version 15.0; SPSS Inc., Chicago, IL). The random selection procedure was repeated until the homogeneity of baseline characteristics (e.g., education level, professional position, and tenure) between the subsamples was ensured (p > .5). The first subsample, termed the "calibration sample," was used to explore the factor structure; the second, termed the "validation sample," was used to validate the factor structure constructed from the calibration sample.
With the calibration sample, principal component EFA was performed on the 65 items to extract the major contributing factors, and Varimax rotation was used to clarify the relationships between items and common factors. The number of factors was determined using the eigenvalue-greater-than-one rule. Factor loadings greater than 0.50 were regarded as significant, and items with cross-loadings greater than 0.50 were deleted [23]. Items were deleted one at a time, and the EFA model was re-specified after each deletion. The internal consistency of each construct was determined through Cronbach's alpha; a value of 0.70 or higher was considered acceptable [24].
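Cronbach's alpha, the internal-consistency criterion above, can be computed directly from item scores. The following is a generic sketch of the standard formula, not the authors' SPSS procedure:

```python
from statistics import variance

def cronbach_alpha(rows):
    """rows: list of per-respondent item-score lists
    (each inner list is one respondent's answers)."""
    k = len(rows[0])                        # number of items
    items = list(zip(*rows))                # transpose to per-item columns
    sum_item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])  # variance of scale totals
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Perfectly consistent (duplicated) items give alpha = 1.0.
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])  # -> 1.0
```

Alpha rises as the items covary more strongly relative to their individual variances, which is why it serves as an index of how coherently a construct's items measure the same thing.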
CFA (validation sample). The validation sample was used to validate and modify the factor structure of the health literacy scale under development when the model did not fit the data.
In the process of model modification, items that correlated too highly with others were deleted by examining the modification index (MI) of the additional specification of error covariance. A large MI (e.g., >50) between two items indicated that the two items measured the same thing, necessitating deletion of one of them, according to the parsimony principle [25]. The CFA model was modified iteratively until most of the model fit indices met the criteria.
The goodness of fit of the model was assessed using absolute fit indices, relative fit indices, and parsimony fit indices [23]. The absolute fit indices included the goodness of fit index (GFI), adjusted GFI (AGFI), standardized root mean squared residual (SRMR), and root mean square error of approximation (RMSEA). The relative fit indices were the normed fit index (NFI), non-normed fit index (NNFI), and comparative fit index (CFI). Finally, the parsimony fit indices were the parsimony normed fit index (PNFI), parsimony comparative fit index (PCFI), and the normed chi-square (χ²/df) [26].
The convergent validity of the items and factor structures was determined through standardized factor loadings (values of 0.50 or higher were considered acceptable) and the average variance extracted (AVE; values of 0.50 or higher were considered acceptable) [23]. Convergent reliability was assessed through construct reliability (CR; values of .70 or higher were considered acceptable). The AVE and the correlation matrix among the latent constructs were used to establish discriminant validity: the square root of the AVE of each construct was required to be higher than the correlation coefficients between that construct and the other constructs [27].
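These criteria follow standard formulas computed from standardized loadings. A minimal sketch, using hypothetical loadings rather than values from this study:

```python
def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

def construct_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error
    variances), where the standardized error variance of an item is
    1 - loading^2."""
    s = sum(loadings)
    err = sum(1 - l * l for l in loadings)
    return s * s / (s * s + err)

def discriminant_ok(ave_a, ave_b, corr_ab):
    """Fornell-Larcker criterion: the square root of each construct's
    AVE must exceed its correlation with the other construct."""
    return min(ave_a, ave_b) ** 0.5 > abs(corr_ab)

loadings = [0.8, 0.8, 0.7]           # hypothetical standardized loadings
a = ave(loadings)                     # mean of .64, .64, .49 = .59
cr = construct_reliability(loadings)
ok = discriminant_ok(a, 0.60, 0.55)   # sqrt(.59) ≈ .77 > .55
```

AVE summarizes how much item variance a construct captures, CR summarizes how reliably its items hang together, and the Fornell-Larcker check verifies that each construct shares more variance with its own items than with any other construct.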

Participant characteristics
Among the 746 surveys returned from the 6 hospitals, we included in the analysis 736 (99%) participants who had complete and valid data for all variables. All participants in this study were female. Table 1 describes the demographic characteristics of the study sample. The participants' mean age was 30.71 ± 6.65 years (range = 20-60 years), with an average of 6.94 ± 6.25 years working in medicine/health care. The majority of participants (28.4%) had 1-3 years' working experience, followed by those with 10-15 years' experience (21.9%). Participants' professional roles at the hospital were registered nurse (78.5%), nurse practitioner (11.4%), dietitian (4.5%), case manager (2.7%), health educator (2.4%), and health manager (0.5%). In total, 57.7% had a university education and 62.8% had never heard of the term "health literacy" (see Table 1).
IRT calibration for 27 items of the knowledge domain. In the IRT analysis of the 27 items of the knowledge domain, 16 misfitting items, identified through Rasch fit statistics and measurement redundancy (p < .05), were deleted from the final questionnaire (see Table 2).
The estimated item difficulty for retained items ranged from -2.00 logits (least difficult) to +2.64 logits (most difficult). Among the remaining nine items, seven were knowledge items (K2, K4, K5, K7, K8, K9, and K10) and two were LHL items (A1 and A3). The rate of correct answers for each item ranged from 37.6% to 96.8%.
EFA results for the calibration sample. EFA was performed sequentially, five times in total, until no item loaded below .50 and no cross-loadings remained. Across these runs, four items were deleted, each due to a loading below .50; no cross-loadings were found in this phase, leaving 61 items that met the specified criteria.
The results obtained for descriptive statistics, EFA, and internal consistency (Cronbach's α coefficient) are summarized in Table 3. The nine extracted factors represented 72.0% of the variance in the 61 items. These were "design teaching plan for LHL," "simple and concrete teaching," "build a friendly environment," "use easy-to-use resources," "life-oriented teaching," "check for understanding," "encourage clients to ask questions," "self-designed materials to clients," and "interdisciplinary collaboration." Cronbach's alphas greater than .80 were obtained for each subscale and that of the total scale was .97.
CFA results for the validation sample. As shown in Table 4, the result of the initial 61-item CFA revealed that none of the model fit indices met the criteria, with two exceptions, PNFI and PCFI. This indicated that the CFA model had to be modified. In the process of model modification, a total of 21 items were deleted sequentially (one by one) because they measured constructs similar to those measured by other items. After the removal of the 21 items, most of the model fit indices were acceptable, except for the GFI and the AGFI. Finally, the fit indices of the modified model were generally acceptable: χ²/df = 1.90, NNFI = .94, CFI = .94, SRMR = .048, and RMSEA = .049 (see Table 4).
As Table 5 illustrates, all standardized factor loadings exceeded the threshold of .50 and the AVE of each construct ranged from .58 to .82, indicating excellent convergent validity. The construct reliability of all the constructs was greater than .70, indicating acceptable convergent reliability.
As shown in Table 6, the majority of the square roots of the AVE of each construct (values in the diagonal) were greater than the corresponding inter-construct correlations (values below the diagonal). These results supported the discriminant validity of the IOHLC.

Internal consistency reliability
The findings of this study showed that the IOHLC meets internal consistency reliability standards. After calibration and validation, the final 49-item version of the health literacy competencies instrument also showed acceptable internal reliability, with Cronbach's alpha > .80.

Psychometric properties of the instrument of health literacy competence
In this study, we used the Rasch model and two-stage factor analysis to verify the knowledge dimension and the skill dimension of the IOHLC. The IOHLC, which consisted of 9 knowledge items and 40 skill items making up 9 dimensions, was verified as having good convergent validity, explaining 72% of total variance. Therefore, this scale has a good theoretical basis and can be used to design courses to train professionals in health literacy.
The number of items in existing instruments measuring health literacy knowledge for health professionals ranges from 6 to 29; these items pertain to health literacy knowledge in general, and specifically to disadvantageous factors for vulnerable populations, the prevalence of LHL, and its consequences in various countries [27][28]. As knowledge and skills were measured in different domains, we adopted Rasch analysis, examining the adequacy of the 27 items in the knowledge domain based on difficulty and item goodness-of-fit statistics (p > .05). However, no tools for health literacy competencies have previously been verified through IRT, so opportunities for comparing the current results with others were limited.
This study validated the IOHLC with 40 skill items related to patient education. According to Schulz and Nakamoto [29], the concepts of health literacy practices and communication skills are difficult to distinguish. Literature that positions health literacy as communication skills treats health literacy practices as written and oral communication [30]. This study did not follow the categorization used in the majority of previous studies on health literacy skills and did not develop items based solely on the communication level. Instead, this study combined interview results relating to practices within patient education planning to inform categorization. This is an important contribution of this study to the identification of health literacy competencies. The scale developed in this study is aimed at assessing the health literacy competencies of professionals, and professionals are expected to improve clients' health literacy through patient education. Since health literacy is not only about communication skills, health literacy competencies should also include assessment and confirmation of comprehension [29]. Among the nine factors in this study, five were related to communication, while the other four included a supportive environment and assessment and confirmation of patient comprehension, which have also been mentioned in the literature [30]. In our scale, these concepts were operationalized into measurable items to enable comprehensive representation of the health literacy competencies of health professionals. Coleman et al. [12] initially identified a set of health literacy practices and educational competencies. The current study collected a wide range of items from the literature and interview results pertaining to the concept of health literacy. The scale consisted of 65 skill items before elimination, slightly more than the 59 items (27 skill items + 32 practice items) by Coleman et al.
The greatest difference between this study and that by Coleman et al. is that the latter divided health literacy competencies into potential educational competencies and potential practices. Although some items in this study included similar skill or practice items, the nine factors pertaining to skill items in this study were based on patient education planning. These skill items can be used as indices of learning competencies.
In terms of contributions to future application, not only can this scale be used for regular, comprehensive assessment of health professionals' health literacy competencies, but it can also be used to design health literacy training through the incorporation of its contents into education goals. Unlike the development procedures of existing tools of health literacy practices, education, or competencies, this study adopted procedures similar to those in most scale construction research. The validation in this study may therefore be used to inform inferences about the reliability and validity of tools relating to health literacy knowledge and skills mentioned in the literature and used in practice.

The descriptive results of the IOHLC
The items established in this study were similar to those in other studies. However, items that were scored differently yielded different results. In terms of acceptable scores on knowledge items, lower scores were obtained for "assessment of health literacy" and "extra support for patients with LHL." Higher scores were obtained on items relating to the definition of health literacy and risk factors of LHL. This is similar to the results of a study conducted by Jukkala et al. [14] on the health literacy knowledge of professionals, wherein lower scores were obtained for LHL outcomes and assistance needed. Among the skills, lower scores were obtained on items related to designing educational materials, such as educational materials for LHL, educational materials applicable to different groups, and Internet and multimedia educational materials. This is similar to the results of Cafiero's [18] study on nurses, whereby audio-visual and computer-aided teaching tools were applied less frequently. This is also similar to the research results obtained in Howard, Jacobson, and Kripalani's [31] study on physicians and in Mackert's [32] study on pharmacists, wherein the application of drawing visual aids in LHL educational materials was relatively low. Schlichting et al. [33] found that providing LHL educational materials was a skill that was not frequently applied by physicians, nurse practitioners, dentists, and dental hygienists. Since the measurement methods and goals were not completely identical to those in previous studies, we could not determine whether the different results were due to differences in health literacy competencies. To ascertain this, future studies should further measure different health professionals' health literacy competencies, using the IOHLC-PE.

Limitations
This study continued research on the health literacy competencies of professionals, which were determined by consensus through the Delphi method. IRT and two-stage factor analysis were then employed to verify a health literacy competence tool for professionals, oriented towards patient health education. There are no similar existing studies, as previous studies on health literacy competency were more dispersed and did not have a consistent framework for reference. Although this study developed the first structural tool for health literacy competency, it still has limitations relating to the professional groups used, the sampling method, and language. First, owing to limitations of time and resources, this study selected health professionals who are not medical doctors in Taiwan, used hospitals as the sampling unit, and most questionnaires were returned by nursing professionals. Thus, the health literacy competencies reflected in the results may be limited in their application to other professional groups. Secondly, due to convenience sampling, only hospitals in the southern and northern regions of Taiwan were included, and remote areas were not covered. Thus, the results cannot be generalized to hospitals of different levels or in other regions. Finally, the original scale developed in this study is in Chinese, and the items on health literacy competencies were developed based on practice content. Although it was validated through factor analysis after being translated into English and was found to fit the hypothesis, this scale may only be applicable to Chinese-speaking professionals.

Conclusions
The tool in this study is the first to validate health literacy competencies in a rigorous way. Through factor analysis and IRT, 92 items were simplified into the 49-item IOHLC, which is oriented toward patient education and has construct validity, including the knowledge level and 9 sub-dimensional skill domains. It provides future Chinese-speaking professionals with a framework and guidance for designing health literacy courses.

Practice implications
Chinese society only recently embarked on health literacy research. Because LHL has an increasingly important influence on health, the education and training of professionals must keep abreast of developments. The reliable and valid IOHLC may serve as a reference for theoretical and in-service training on health literacy competencies, with courses designed based on the scale's framework. Directors of medical institutions can also use this tool to regularly assess the health literacy competencies of professionals, so as to ensure the quality of health care provided. Health literacy is closely related to patient education. Because the actual practices of different healthcare professionals may differ, we recommend that future studies investigate different groups in order to analyze the performance of this scale when used with different patient education professionals.