
Biases and Power for Groups Comparison on Subjective Health Measurements

  • Jean-François Hamel ,

    jean-francois.hamel@etu.univ-nantes.fr

    Affiliations EA 4275: Biostatistics, Clinical Research and Subjective Measurements in Health Sciences, Faculty of Pharmaceutical Sciences, University of Nantes, Nantes, France, Methodology and Biostatistics Unit, University of Angers, Angers, France

  • Jean-Benoit Hardouin,

    Affiliation EA 4275: Biostatistics, Clinical Research and Subjective Measurements in Health Sciences, Faculty of Pharmaceutical Sciences, University of Nantes, Nantes, France

  • Tanguy Le Neel,

    Affiliation EA 4275: Biostatistics, Clinical Research and Subjective Measurements in Health Sciences, Faculty of Pharmaceutical Sciences, University of Nantes, Nantes, France

  • Gildas Kubis,

    Affiliation EA 4275: Biostatistics, Clinical Research and Subjective Measurements in Health Sciences, Faculty of Pharmaceutical Sciences, University of Nantes, Nantes, France

  • Yves Roquelaure,

    Affiliation Laboratory of Ergonomics and Epidemiology in Health at Work, University of Angers, Angers, France

  • Véronique Sébille

    Affiliation EA 4275: Biostatistics, Clinical Research and Subjective Measurements in Health Sciences, Faculty of Pharmaceutical Sciences, University of Nantes, Nantes, France

Abstract

Subjective health measurements are increasingly used in clinical research, particularly for comparing groups of patients. Two main types of analytical strategies can be used for such data: the so-called classical test theory (CTT), relying on observed scores, and models coming from item response theory (IRT), relying on a response model relating the item responses to a latent parameter, often called the latent trait. Whether IRT or CTT is the most appropriate method to compare two independent groups of patients on a patient-reported outcomes measurement remains unknown and was investigated using simulations. For CTT-based analyses, group comparison was performed using a t-test on the scores. For IRT-based analyses, several methods were compared, according to whether the Rasch model was considered with random effects or with fixed effects, and whether the group effect was included as a covariate or not. Individual latent trait values were estimated using either a deterministic method or stochastic approaches, and the latent traits were then compared with a t-test. Finally, a two-step method was performed to compare the latent trait distributions, and a Wald test was performed to test the group effect in the Rasch model including a group covariate. The only unbiased IRT-based method was the Wald test of the group covariate, performed on the random effects Rasch model. This method displayed the highest observed power, which was similar to the power of the score t-test. These results need to be extended to the case, frequently encountered in practice, where data are missing and possibly informative.

Introduction

Subjective health measurements are increasingly used in clinical studies to assess patients’ perception of their own health [1], [2]. For example, they allow assessing phenomena such as quality of life, tiredness, depression or anxiety. These phenomena are called latent variables because they can neither be directly observed nor measured. However, their effects are accessible through the analysis of other variables that are directly observable.

Assessing these subjective measurements is usually done using self-assessment questionnaires called patient-reported outcomes (PRO), which consist of a set of questions often called items. Two strategies have been developed to analyse such questionnaires: classical test theory (CTT) and item response theory (IRT). These theories provide different conceptual frameworks for the analysis of PRO, each being based on several hypotheses that have to be tested before analysis. CTT is based on the assumption of a linear model explaining the individual observed score by a theoretical individual score plus a stochastic error term. Such a hypothesis can be tested using Cronbach’s alpha [3]. On the other hand, IRT is based on the assumption of a logit model explaining the individual item responses by a latent parameter, often called the latent trait. Such a hypothesis can be tested using R1m global tests of item fit [4].

With CTT, the item responses are combined to provide scores allowing analysis of the data. In most cases, these scores should be considered as ordinal qualitative measurements of the latent variables studied, and cannot be considered as interval measurements [5], [6]: a unit difference in score does not necessarily correspond to the same amount of latent trait when measured from different initial levels on the latent trait scale. Therefore, a given score variation cannot be associated with a given latent variable variation, and one should not rely on CTT to quantify an expected effect or a clinical significance threshold [7], [8].

With IRT, the latent variable is quantified by measuring the latent trait. The latent trait, estimated by modelling the probability of an observed response to an item, can always be considered as a quantitative variable with interval measurement properties [9]. IRT therefore allows both quantifying an expected effect or the clinical relevance of an observed difference, and highlighting latent trait differences between compared groups.

A simple and widely used IRT model, adapted to the analysis of dichotomous items, is the Rasch model [9]. In this model, the probability of a specific response (e.g. positive or negative answer) is modelled as a function of person and item parameters. Person parameters pertain to the latent trait level of people who are evaluated while item parameters pertain to the difficulty of the items (in a Rasch model, the difficulty of an item is equal to the latent trait of an individual who would have an equal probability of responding positively or negatively to this item). Person parameters can then be interpreted as a propensity to respond positively to each item.

This model can be specified in two ways: all the individual latent traits can be considered as a set of fixed effects (this is known as the fixed effects Rasch model), or as realizations of a random variable assumed to be normally distributed (this is known as the random effects Rasch model). With a fixed effects Rasch model, the purpose is to assess for each individual the value of his or her individual latent trait. In contrast, with a random effects Rasch model, the purpose is to directly estimate the parameters of the overall distribution of the latent trait: in the case of a normal distribution, two parameters are estimated, the mean and the variance of the latent trait. Finally, if the sample consists of individuals coming from potentially distinct populations, a group covariate can be added to the random effects model.

Several methodologies can be used to compare two samples of patients on PRO data coming from an IRT-based or a CTT-based validated questionnaire. These methodologies depend on the use of CTT or IRT and, if IRT is used, on the model chosen to estimate the latent traits. Whether one approach is more suitable than another remains under debate.

The aim of our study is to evaluate and to compare different group-comparison methods from IRT-based and CTT-based models. The statistical properties of the different methods either based on CTT or IRT were assessed and compared by simulations regarding the type I error, power, and bias in parameter estimates.

Methods

Simulation Study

One of the most relevant strategies to explore the empirical properties of comparison methodologies is to perform them in perfectly known contexts. Then, the “true” statistical conclusion is known, and can be compared with the observed conclusion. For example, to study the type I error of a group comparison test, it should be performed on two samples both drawn from the same population. The proportion of rejections of the null hypothesis should actually correspond to the probability of finding a difference that does not exist in reality. In contrast, this test should be performed on two samples drawn from different populations to study its power.

An appropriate strategy to know a priori the origin of the analysed samples is to generate them using Monte Carlo simulations. Unlike a real data study, data resulting from Monte Carlo simulations allow differentiating whether a statistically significant difference is linked to a real difference or to the type I error of the considered test.

In our study, we generated the data using Monte Carlo simulations with a Rasch model. Doing so allowed us to assume that the simulated questionnaires had previously been validated for analysis either with a Rasch model or with CTT: the assumptions needed to analyse data with a CTT-based model are necessarily fulfilled by data satisfying the assumptions of a Rasch model [10].

Several parameter combinations were considered to generate the simulated data.

  • For each simulation, we simulated two samples A and B of equal size $n$. The sample size per group ranged from 50 to 400 subjects to reflect sample sizes commonly encountered in clinical research studies.
  • The latent trait distribution was defined as normal. The normal distribution was chosen to respect the hypothesis underlying the implementation of a random effects Rasch model.
  • The latent trait distribution variances were set to 1, so as to work with standardized data and thus overcome the problem of the measurement scale. The differences in latent traits and in difficulties were therefore expressed only in fractions of a standard deviation.
  • The simulated differences $d$ between the means of the latent traits were set at 0, 0.2, 0.5 and 0.8; the latent trait means for groups A and B were therefore equal to 0 and to $d$, respectively. A difference set at 0 corresponded to a lack of effect, and allowed estimating the type I error of the tests by computing the proportion of rejections of the null hypothesis. A difference set at 0.2, 0.5 or 0.8 corresponded respectively to a small, medium or large effect size [11], and allowed estimating power by computing the proportion of rejections of the null hypothesis.
  • The items were defined as dichotomous, so that they could be analysed with a Rasch model. Each positive response was coded as 1 and each negative response as 0. The number of items was set at 5 or 10, in accordance with the size of the subscales of the questionnaires most commonly used to measure PRO. For example, the NHP consists of 6 subscales composed of 3 to 9 dichotomous items [12]. Similarly, the SF-36 consists of 8 subscales composed of 2 to 10 items, 2 subscales being composed only of dichotomous items (Emotional Role Limitation and Physical Role Limitation), the others of polytomous items [13].
  • The item difficulties were defined as the percentiles of a standard normal distribution or as the percentiles of an equiprobable mixture of two Gaussian distributions. These two possibilities allowed considering two different situations that can be encountered in practice. The normal distribution reflected the situation where the questionnaire was perfectly adapted to a population with normally distributed latent traits; evenly distributed item difficulties allowed considering the score as an interval measurement. The bimodal mixture corresponded to a more irregular, and probably more realistic, distribution of item difficulties. The Gaussian parameters of this mixture were chosen to distinguish two groups of items within the scale: a first group of items whose difficulty values were very close, and a second whose difficulty values were farther apart. Such a distribution involved a poorer match to the latent trait distribution, and thus floor or ceiling effects, and did not allow considering the score as an interval measurement.
  • The individual item responses were generated by Bernoulli trials, after computing for each individual the probability of responding to each item with a Rasch model (see the sketch below).
  • Each parameter combination of the simulations was replicated 1000 times.

The details of the chosen simulation parameters are presented in table 1.
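To make the data-generating process concrete, the following minimal Python sketch (hypothetical code; the study itself used Stata with the Gllamm package) simulates two groups of dichotomous Rasch responses. The function names and the illustrative bimodal difficulty values are our own assumptions, not the exact values of table 1.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2012)

def rasch_prob(theta, delta):
    """Probability of a positive response under the Rasch model (eq. 1)."""
    return 1.0 / (1.0 + np.exp(-(theta - delta)))

def simulate_groups(n_per_group=200, n_items=5, d=0.5, bimodal=False):
    """Simulate item responses for groups A ~ N(0, 1) and B ~ N(d, 1)."""
    if bimodal:
        # Two clusters of difficulties, one tight and one spread out
        # (illustrative values, not the paper's exact mixture percentiles).
        n_lo = n_items // 2 + n_items % 2
        delta = np.concatenate([np.linspace(-0.6, -0.4, n_lo),
                                np.linspace(0.5, 2.0, n_items - n_lo)])
    else:
        # Evenly spaced percentiles of a standard normal distribution.
        delta = norm.ppf(np.arange(1, n_items + 1) / (n_items + 1))
    theta = np.concatenate([rng.normal(0.0, 1.0, n_per_group),
                            rng.normal(d, 1.0, n_per_group)])
    group = np.repeat([0, 1], n_per_group)
    # Bernoulli trials with Rasch response probabilities (one per person x item).
    X = rng.binomial(1, rasch_prob(theta[:, None], delta[None, :]))
    return X, group, delta

X, group, delta = simulate_groups()
scores = X.sum(axis=1)   # CTT scores: sum of the positive responses
```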

Table 1. Possible values of the different simulation parameters.

https://doi.org/10.1371/journal.pone.0044695.t001

Statistical Analysis

For each simulation of each parameter combination, the individual score $S_i$ of person $i$ ($i = 1, \ldots, n$) was defined as the sum of the positive item responses. The latent trait analysis (IRT) was performed with fixed effects and random effects Rasch models. These analyses were conducted assuming three distinct cases:

  • One could consider the difficulty parameters as unknown, which required estimating them during the IRT analysis,
  • One could assume these parameters to be already known (e.g. estimated in previous studies, or coming from item banks such as the PROMIS quality of life item bank [14]). In this case, they were not estimated during the analysis. Knowledge of these parameters was then envisaged in two ways:
    1. The difficulty parameters were considered as well known: the fixed values of the difficulty parameters used during the analysis were equal to the simulated difficulties.
    2. The difficulty parameters were considered as imperfectly known, or known with error: the fixed difficulty parameter values used during the analysis were randomly drawn from uniform distributions centred on the simulated difficulties.

The Rasch Model

One of the most commonly used IRT models adapted to the analysis of dichotomous items is the Rasch model [9]. Let $X_{ij}$ be the dichotomous variable representing the response of person $i$ ($i = 1, \ldots, n$) to item $j$ ($j = 1, \ldots, k$). For a questionnaire containing $k$ dichotomous items, the model can be written as follows (eq. 1):

$$P(X_{ij} = x \mid \theta_i, \delta_j) = \frac{\exp\bigl(x(\theta_i - \delta_j)\bigr)}{1 + \exp(\theta_i - \delta_j)} \qquad (1)$$

where $x = 0$ for a negative response and $x = 1$ for a positive response, $\delta_j$ is the difficulty associated with item $j$, and $\theta_i$ is the individual value of the latent trait for patient $i$.

When all the individual latent traits are considered as a set of fixed effects, the Rasch model is known as a fixed effects Rasch model, while when the individual latent traits are considered as realizations of a random variable assumed to be normally distributed, the Rasch model is known as a random effects Rasch model.

The Fixed Effects Rasch Model

The estimates of the fixed effects Rasch model parameters were obtained using a two-step procedure providing consistent estimators [15]–[17]. The estimates of the item difficulty parameters were obtained with conditional maximum likelihood, given the individual scores (eq. 2). The estimates of the individual latent traits were then obtained with weighted maximum likelihood (WML) (eq. 3). This entire procedure is known as the CML procedure. By extension, in this study, a fixed effects Rasch model will be called a CML-model.

Let $\boldsymbol{\delta}$ be the $k$-vector of item difficulty parameters $\delta_j$, $\boldsymbol{\theta}$ the $n$-vector of individual latent traits $\theta_i$, $\mathbf{S}$ the $n$-vector of individual scores $S_i$, $\mathbf{x}_i$ the $k$-vector of the item responses of individual $i$, and $\mathbf{x}$ the $(n \times k)$-vector of the item responses of all $n$ individuals.

The $\delta_j$ parameters are consistently estimated by maximizing the conditional likelihood (eq. 2):

$$L_C(\boldsymbol{\delta} \mid \mathbf{x}, \mathbf{S}) = \prod_{i=1}^{n} P(\mathbf{x}_i \mid S_i; \boldsymbol{\delta}) \qquad (2)$$

where $L_C(\boldsymbol{\delta} \mid \mathbf{x}, \mathbf{S})$ is the likelihood of the item responses conditional on the subjects’ scores $S_i$.

The $\theta_i$ parameters are then estimated without bias by maximizing the weighted likelihood (eq. 3):

$$\hat{\theta}_i = \arg\max_{\theta} \; L(\theta \mid \mathbf{x}_i, \hat{\boldsymbol{\delta}}) \sqrt{I(\theta)} \qquad (3)$$

where $I(\theta) = \sum_{j=1}^{k} P_j(\theta)\bigl(1 - P_j(\theta)\bigr)$ is the Fisher information function of the questionnaire. As with any maximum likelihood estimation procedure, the parameters estimated with the CML procedure are asymptotically normally distributed, with mean equal to their maximum likelihood estimate. To assign each individual his or her own latent trait value, a decision rule based on this estimated distribution must be defined. It is given in the section “Different possible estimates of the individual latent traits”.
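As an illustration of this second step, the following Python sketch computes Warm’s WML estimate of one person’s latent trait, assuming the item difficulties are already available; the difficulties and responses shown are arbitrary illustrative values.

```python
import numpy as np
from scipy.optimize import minimize_scalar

delta = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # difficulties, assumed already estimated
x_i = np.array([1, 1, 0, 1, 0])                # one person's item responses

def neg_weighted_loglik(theta):
    p = 1.0 / (1.0 + np.exp(-(theta - delta)))
    loglik = np.sum(x_i * np.log(p) + (1 - x_i) * np.log(1 - p))
    info = np.sum(p * (1 - p))             # Fisher information I(theta) for the Rasch model
    return -(loglik + 0.5 * np.log(info))  # equivalent to maximizing L(theta) * sqrt(I(theta))

res = minimize_scalar(neg_weighted_loglik, bounds=(-6.0, 6.0), method="bounded")
print("WML estimate of the latent trait:", round(res.x, 3))
```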

The Random Effects Rasch Model

The estimates of the random effects Rasch model parameters were obtained with marginal maximum likelihood (eq. 4), known as the MML procedure [16]. The latent trait was considered normally distributed with mean $\mu$ and variance $\sigma^2$. By extension, a random effects Rasch model will be called an MML-model in this study.

The $\boldsymbol{\delta}$, $\mu$ and $\sigma^2$ parameters can be consistently estimated by maximizing the marginal likelihood (eq. 4):

$$L_M(\boldsymbol{\delta}, \mu, \sigma^2 \mid \mathbf{x}) = \prod_{i=1}^{n} \int \prod_{j=1}^{k} P(X_{ij} = x_{ij} \mid \theta, \delta_j) \, dG(\theta) \qquad (4)$$

where $G$ is the cumulative distribution function of the latent trait $\theta$ in the studied population, assumed to follow a normal distribution with parameters $(\mu; \sigma^2)$.

The estimators of each individual latent trait, assumed to be normally distributed, could be obtained afterwards with expected a posteriori Bayesian (EAP) estimates [17], [18]. EAP estimates are obtained by taking the expectation of the posterior density function of $\theta$, conditional on $\mathbf{x}_i$ and $\hat{\boldsymbol{\delta}}$ (eqs. 5 & 6):

$$\hat{\theta}_i^{EAP} = E\bigl(\theta \mid \mathbf{x}_i, \hat{\boldsymbol{\delta}}\bigr) = \int \theta \, g\bigl(\theta \mid \mathbf{x}_i, \hat{\boldsymbol{\delta}}\bigr) \, d\theta \qquad (5)$$

$$g\bigl(\theta \mid \mathbf{x}_i, \hat{\boldsymbol{\delta}}\bigr) = \frac{\prod_{j=1}^{k} P(X_{ij} = x_{ij} \mid \theta, \hat{\delta}_j) \, \phi(\theta; \hat{\mu}, \hat{\sigma}^2)}{\int \prod_{j=1}^{k} P(X_{ij} = x_{ij} \mid t, \hat{\delta}_j) \, \phi(t; \hat{\mu}, \hat{\sigma}^2) \, dt} \qquad (6)$$

where $g(\theta \mid \mathbf{x}_i, \hat{\boldsymbol{\delta}})$ is the posterior density function of $\theta$, conditional on $\mathbf{x}_i$ and $\hat{\boldsymbol{\delta}}$, and $\phi(\cdot; \hat{\mu}, \hat{\sigma}^2)$ is the density of the estimated latent trait distribution.
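The EAP computation of eqs. 5 and 6 can be sketched in Python as follows, replacing the integrals by evaluation on a fine grid; all numerical values are illustrative assumptions.

```python
import numpy as np

delta = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # difficulties (illustrative)
x_i = np.array([1, 1, 0, 1, 0])                # one person's item responses
mu, sigma = 0.0, 1.0                           # estimated latent trait distribution

grid = np.linspace(-6, 6, 601)                 # integration grid for theta
p = 1.0 / (1.0 + np.exp(-(grid[:, None] - delta[None, :])))
lik = np.prod(np.where(x_i == 1, p, 1 - p), axis=1)   # P(x_i | theta), eq. 6 numerator
prior = np.exp(-0.5 * ((grid - mu) / sigma) ** 2)     # normal prior, up to a constant
post = lik * prior
post /= np.trapz(post, grid)                   # normalized posterior g(theta | x_i)
eap = np.trapz(grid * post, grid)              # posterior mean = EAP estimate (eq. 5)
print("EAP estimate:", round(float(eap), 3))
```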

Including a Group Effect in a Rasch Model

The group effect can be represented by a covariate in the formulation of the Rasch model [19]. The individual latent trait is then decomposed into a part related to the group ($\beta g_i$) and a part related to the individual ($u_i$). The model is then written as (eq. 7):

$$P(X_{ij} = x \mid u_i, \delta_j) = \frac{\exp\bigl(x(\beta g_i + u_i - \delta_j)\bigr)}{1 + \exp(\beta g_i + u_i - \delta_j)} \qquad (7)$$

where $g_i = 0$ if individual $i$ is in the first group and $g_i = 1$ if individual $i$ is in the second group, and $u_i$ is normally distributed with mean $\mu$ and variance $\sigma^2$. The average latent trait is equal to $\mu$ in the first group and to $\mu + \beta$ in the second group. The individual latent traits can then be computed as $\theta_i = \beta g_i + u_i$.

We did not perform any fixed effects Rasch model with group covariates: such a model would be unidentifiable, since the estimates of the fixed effects Rasch model are computed conditionally on the individual scores. It was only possible to include a group covariate within a random effects Rasch model. This model has been called MML-Cov.
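A minimal sketch of the MML-Cov approach, assuming known item difficulties for brevity (the study also considered estimating them): the marginal likelihood of eq. 7 is approximated by Gauss–Hermite quadrature and the group effect $\beta$ is tested with a Wald statistic. The standard error taken from the BFGS inverse Hessian is a rough stand-in for the model-based standard error a dedicated package such as gllamm would report.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(42)

# Simulate a small data set (difficulties taken as known, for brevity).
delta = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
n, beta_true = 200, 0.5
g = np.repeat([0, 1], n)                                 # group covariate
theta = rng.normal(0.0, 1.0, 2 * n) + beta_true * g      # theta_i = beta*g_i + u_i
X = rng.binomial(1, 1.0 / (1.0 + np.exp(-(theta[:, None] - delta[None, :]))))

nodes, weights = hermgauss(30)                           # Gauss-Hermite rule

def neg_marginal_loglik(par):
    mu, log_sigma, beta = par
    sigma = np.exp(log_sigma)
    u = mu + np.sqrt(2.0) * sigma * nodes                # quadrature points for N(mu, sigma^2)
    th = beta * g[:, None] + u[None, :]                  # persons x quadrature points
    p = 1.0 / (1.0 + np.exp(-(th[:, :, None] - delta[None, None, :])))
    lik_items = np.where(X[:, None, :] == 1, p, 1 - p).prod(axis=2)
    marg = lik_items @ weights / np.sqrt(np.pi)          # marginal likelihood per person
    return -np.sum(np.log(marg))

res = minimize(neg_marginal_loglik, x0=np.zeros(3), method="BFGS")
beta_hat = res.x[2]
se_beta = np.sqrt(res.hess_inv[2, 2])     # rough SE from the BFGS inverse Hessian
z = beta_hat / se_beta                    # Wald statistic for the group effect
print(f"beta_hat = {beta_hat:.3f}, Wald z = {z:.2f}, p = {2 * norm.sf(abs(z)):.4f}")
```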

Different Possible Estimates of the Individual Latent Traits

Two different ways of estimating the individual latent traits can be proposed.

  • The most intuitive choice for an estimate of an individual latent trait value obtained with a CML, MML or MML-Cov model is probably the estimated mean of the individual latent trait distribution. For the CML model, these are the WML-CML estimates; for the MML and MML-Cov models, these are the EAP-MML and EAP-MML-Cov estimates (EAP-MML-Cov being computed as the sum of $\hat{\beta} g_i$ and $\hat{u}_i$).
  • As the true values of the individual latent traits cannot be known, but only their distributions, the individual latent trait values can alternatively be defined as plausible values (PV) coming from these distributions [20], [21]. The latent trait of each individual is then assigned from a draw from his or her estimated latent trait distribution (see the sketch below). For the CML model, these are the PV-CML estimates; for the MML model, the PV-MML estimates; and for the MML-Cov model, the PV-MML-Cov estimates.
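A plausible value is simply one random draw from each individual’s estimated latent trait distribution. A sketch, assuming each posterior has been summarized by a mean and a standard deviation (illustrative values):

```python
import numpy as np

rng = np.random.default_rng(7)

# Estimated posterior mean and standard deviation of each person's latent
# trait (illustrative values; in the study these come from the fitted model).
post_mean = np.array([-0.3, 0.1, 0.8])
post_sd = np.array([0.50, 0.45, 0.55])

pv = rng.normal(post_mean, post_sd)   # one plausible value per individual
print(pv)
```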

Different Methods to Compare Two Groups on PRO

Different methodologies have been proposed for comparing two groups of subjects A and B on PRO data.

  • When using CTT, the groups are compared with a t-test using mean scores. In our study, this method has been called score t-test.
  • When using IRT, groups can be compared using several tests.
  1. Individual latent trait values can be compared with a t-test, whether these are defined as the estimated means of the individual latent trait distributions (WML-CML, EAP-MML and EAP-MML-Cov methodologies) or as plausible values coming from these distributions (PV-CML, PV-MML and PV-MML-Cov methodologies). For example, this is how RUMM [22], one of the most widely used software packages for Rasch analysis, compares groups of individuals: the individual latent traits, estimated using the WML-CML methodology, are compared with a t-test.
  2. Using the MML-Cov model, it is possible to perform a group comparison by testing the nullity of the parameter $\beta$ associated with the group covariate with a Wald test. In our study, this method has been called “Wald-test”.
  3. Mislevy [23] noted that estimating the variance of the latent traits within a group by computing the variance of their individual estimates is biased, because it only corresponds to the between-individual variance estimate and ignores the within-individual variance [24], [25]. With multiple imputations of plausible values (MI method), it is possible to estimate the distribution parameters of the latent traits of each group, taking into account both the between-individual and the within-individual variance. One can then compare the groups with a t-test. In our study, these methods have been called MI-CML, MI-MML or MI-MML-Cov according to the model used (CML, MML or MML-Cov model).

This methodology was developed for large-scale surveys used in educational sciences (e.g. the PISA, TIMSS and NAEP studies), where the number of imputations used is typically between 3 and 5. Rubin recommends making between 2 and 10 imputations [24]. In our study, we performed five imputations to be comparable to studies using this methodology. A sketch of the corresponding combining rules is given below.
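The following Python sketch shows how such multiply imputed plausible values can be combined with Rubin’s rules for a group comparison (illustrative posterior summaries; in the study, plausible values were drawn from the fitted Rasch models):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Illustrative posterior summaries for 100 individuals per group
# (in the study, these come from the fitted CML/MML/MML-Cov models).
mean_A, sd_A = rng.normal(0.0, 1.0, 100), np.full(100, 0.5)
mean_B, sd_B = rng.normal(0.5, 1.0, 100), np.full(100, 0.5)

M = 5                                   # number of imputations, as in the study
diffs, within = [], []
for _ in range(M):
    pv_A = rng.normal(mean_A, sd_A)     # one completed data set per imputation
    pv_B = rng.normal(mean_B, sd_B)
    diffs.append(pv_B.mean() - pv_A.mean())
    within.append(pv_A.var(ddof=1) / pv_A.size + pv_B.var(ddof=1) / pv_B.size)

d_bar = np.mean(diffs)                  # pooled estimate of the group difference
W = np.mean(within)                     # within-imputation variance
B = np.var(diffs, ddof=1)               # between-imputation variance
T = W + (1 + 1 / M) * B                 # Rubin's total variance
t_stat = d_bar / np.sqrt(T)
df = (M - 1) * (1 + W / ((1 + 1 / M) * B)) ** 2   # Rubin's degrees of freedom
p_value = 2 * stats.t.sf(abs(t_stat), df)
print(f"pooled difference = {d_bar:.3f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```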

  4. Finally, it has been proposed to perform group comparisons with a two-step procedure (called the 2-Steps method [26]). The first step is to estimate the difficulty parameters with the MML method; the second is to separately estimate the latent trait distribution parameters of each group by fitting a random effects Rasch model in each group, with the difficulty parameters set to the estimated values obtained during the first step. Since this method provides estimates of the mean and variance of the latent traits for each group, the groups can then be compared by performing a t-test.

All these methodologies are summarized in figure 1. All the tests were performed at the 0.05 threshold.

Figure 1. Different methods to compare two groups of patients on subjective measurements.

CTT: Classical Test Theory, IRT: Item Response Theory, CML: conditional maximum likelihood, MML: marginal maximum likelihood, MML-Cov: MML with group covariate, WML: weighted maximum likelihood, EAP: expected a posteriori, PV: plausible values, MI: multiple imputations of PV.

https://doi.org/10.1371/journal.pone.0044695.g001

Comparison of Methods

To compare the methods to analyse PRO data, four criteria were studied: the type I error, the power, the position bias and the dispersion bias.

  • The type I error was classically obtained by computing the proportion of rejections of the null hypothesis among the 1000 replications of the same parameter combination when $d$ was set to 0. A test of equality between the observed type I error and 0.05 was then performed with a t-test.
  • The power was obtained by computing the proportion of rejections of the null hypothesis among the 1000 replications of the same parameter combination when $d$ was different from 0. A power variation of less than 0.05 was considered not relevant in practice.
  • When the methodology was based on IRT:
    1. We estimated the difference between the latent trait means of the two groups by computing the average, over the 1000 replicated simulations, of the differences between the estimated mean latent traits of groups A and B, noted $\bar{\hat{d}}$. This average was then compared with the simulated difference $d$ with a t-test. When $\bar{\hat{d}}$ was significantly different from $d$, we concluded that there was a statistically significant position bias. A position bias of small absolute magnitude when $d$ was equal to 0, or of less than 10% of $d$ when $d$ was different from 0, was considered not relevant in practice.
    2. We assumed that the variances of the two groups were equal: $\sigma_A^2 = \sigma_B^2 = \sigma^2$. We estimated the latent trait variance of each group by computing the average of the estimated latent trait variances over the 1000 replicated simulations, noted $\bar{\hat{\sigma}}^2$. This average was then compared with the simulated common variance $\sigma^2$ with a t-test. When $\bar{\hat{\sigma}}^2$ was significantly different from $\sigma^2$, we concluded that there was a statistically significant dispersion bias. A bias of less than 10% of $\sigma^2$ was considered not relevant in practice.
  • When the methodology was based on CTT:
    1. We estimated the difference between the mean scores of the two groups by computing the average, over the 1000 replicated simulations, of the differences between the mean scores of groups A and B, noted $\bar{\hat{\Delta}}_S$. This average was then compared with the true value $\Delta_S$ of the group effect on the score scale with a t-test. When $\bar{\hat{\Delta}}_S$ was significantly different from $\Delta_S$, we concluded that there was a statistically significant position bias. A position bias of less than 10% of $\Delta_S$ was considered not relevant in practice.

The true value $\Delta_S$ of the group effect on the score scale was not known and was approximated using the difference of the expected scores in each group (eq. 8):

$$\Delta_S = E(S_B) - E(S_A) \qquad (8)$$

The expected score in each group was computed as follows (eq. 9):

$$E(S) = \int \sum_{j=1}^{k} P(X_j = 1 \mid \theta, \delta_j) \, \phi(\theta; \mu, \sigma^2) \, d\theta \qquad (9)$$

with $\phi(\cdot; \mu, \sigma^2)$ the density of the normal distribution with mean $\mu$ and variance $\sigma^2$. These integrals can be estimated using Gauss–Hermite quadrature, as sketched below.
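A sketch of this quadrature computation in Python (the difficulty and distribution values are illustrative):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

delta = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # item difficulties (illustrative)
mu, sigma = 0.5, 1.0                           # latent trait distribution of one group

nodes, weights = hermgauss(30)
theta = mu + np.sqrt(2.0) * sigma * nodes      # change of variable for N(mu, sigma^2)
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - delta[None, :])))
expected_score = (weights[:, None] * p).sum() / np.sqrt(np.pi)   # eq. 9
print("Expected score E(S) =", round(float(expected_score), 3))
```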

    2. We did not estimate the dispersion bias when the methodology was based on CTT.

Simulations and statistical analyses were performed with the Stata 11.0 software and the Gllamm package [27].
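For readers who prefer a language-agnostic view of the replication logic, here is a condensed Python sketch of one evaluation loop (shown for the score t-test; the study ran 1000 replications per parameter combination in Stata):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def score_ttest_rejects(n=100, n_items=5, d=0.0, alpha=0.05):
    """One replication: simulate two-group Rasch data, test the CTT scores."""
    delta = stats.norm.ppf(np.arange(1, n_items + 1) / (n_items + 1))
    theta = np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(d, 1.0, n)])
    X = rng.binomial(1, 1.0 / (1.0 + np.exp(-(theta[:, None] - delta[None, :]))))
    s = X.sum(axis=1)
    return stats.ttest_ind(s[:n], s[n:]).pvalue < alpha

# Proportion of rejections over replications: type I error when d = 0,
# power when d > 0 (200 replications here for speed; 1000 in the study).
rejections = [score_ttest_rejects(d=0.0) for _ in range(200)]
print("empirical type I error ~", np.mean(rejections))
```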

Results

Type I Error

The type I error level was similar whether the item difficulties were considered unknown, well known or imperfectly known. We will only present the observed type I errors for unknown difficulties that had to be estimated (table 2).

Table 2. Type I errors of the different methodologies for comparing groups on subjective measurements, for different simulation parameters; the difficulties are considered unknown.

https://doi.org/10.1371/journal.pone.0044695.t002

The type I errors observed for the score t-test, WML-CML, PV-CML, EAP-MML, PV-MML and Wald-test methods were not significantly different from 0.05. The MI-CML and MI-MML methodologies yielded type I errors below 0.05, while the EAP-MML-Cov, PV-MML-Cov, MI-MML-Cov and 2-Steps methodologies inflated the type I error, whatever the values of the simulation parameters.

Power

The methods for which the observed type I errors were significantly greater than 0.05 were excluded from the power analysis. We therefore excluded the EAP-MML-Cov, PV-MML-Cov, MI-MML-Cov and 2-Steps methods.

The knowledge of the item difficulties (unknown, well known or imperfectly known) did not affect the power of the comparison methodologies. We will only present the observed powers for unknown difficulties (table 3 and figure 2).

Figure 2. Evolution of the estimated power for the different methodologies controlling the type I error.

Evolution of the estimated power depending on the sample size, the number of items and the difficulties distribution. $d$ is set at 0.5 and the item difficulties are considered unknown. CML: conditional maximum likelihood, MML: marginal maximum likelihood, MML-Cov: MML with group covariate, WML: weighted maximum likelihood, EAP: expected a posteriori, PV: plausible values, MI: multiple imputations of PV.

https://doi.org/10.1371/journal.pone.0044695.g002

Table 3. Power of the different methodologies for comparing groups on subjective measurements controlling the type I error, for different simulation parameters; the difficulties are considered unknown, the latent traits are normally distributed.

https://doi.org/10.1371/journal.pone.0044695.t003

The methods respecting the type I error could be gathered into three groups according to their power: (i) the tests with low power, i.e. the methods based on multiple imputation (MI-MML and MI-CML methods); (ii) the tests with moderate power, i.e. the methods based on single imputations of plausible values (PV-MML and PV-CML methods); and (iii) the tests with high power, i.e. the methods based on the comparison of the individual latent traits defined as the means of their estimated distributions (EAP-MML and WML-CML methods), the Wald-test method and the score t-test.

A global increase of the sample size resulted in an increase of the observed power. In 67% of the cases, this increase was relevant in practice, whatever the values of the other parameters (figure 2). Cases where the difference was not relevant corresponded to observed powers greater than 0.9, resulting in a ceiling effect.

Increasing the number of items resulted in an increase of the observed power. In 55% of the cases, the power increase resulting from the transition from 5 to 10 items was relevant in practice, whatever the values of the other parameters. Cases where this increase was not relevant corresponded to observed powers greater than 0.9.

Finally, the items difficulties distribution did not affect the comparison methods power.

Bias

Position bias.

The knowledge of the item difficulties (unknown, well known or imperfectly known) did not affect the position bias estimate (the difference between $\bar{\hat{d}}$ and $d$). We will only present the estimated position biases for unknown difficulties (table 4).

Table 4. Position biases of the different IRT methodologies for comparing groups, for different simulation parameters; the difficulties are considered unknown, the latent traits are normally distributed.

https://doi.org/10.1371/journal.pone.0044695.t004

The score t-test, WML-CML, PV-CML, MI-CML, EAP-MML-Cov, PV-MML-Cov, MI-MML-Cov, 2-Steps and Wald-test methodologies did not present any position bias relevant in practice, whatever the values of the simulation parameters.

Methods based on a random effects Rasch model without covariates (EAP-MML, PV-MML and MI-MML methods) did not present a relevant position bias when the simulated difference $d$ was equal to 0, but systematically presented a position bias relevant in practice when $d$ was greater than 0. This bias was then greater than 30% of $d$ in all cases.

For the methods with a position bias relevant in practice (EAP-MML, PV-MML and MI-MML):

  • Neither the difficulties distribution nor the sample size affected the position bias, whatever the values of the other parameters.
  • Increasing the number of items resulted in a decrease of the position bias relevant in practice: the transition from 5 to 10 items resulted in an average decrease of the position bias of 15% of $d$, whatever the values of the other parameters.

Dispersion biases.

The dispersion bias estimates (the difference between $\bar{\hat{\sigma}}^2$ and $\sigma^2$) were similar when the item difficulties were considered unknown or well known. However, the dispersion bias estimates increased when the item difficulties were considered imperfectly known: these estimated dispersion biases were greater than those estimated when considering the difficulties as unknown or perfectly known, by an average of 15% of $\sigma^2$, whatever the values of the other parameters. However, the knowledge of the item difficulties did not affect the effect of the other simulation parameters on the observed dispersion biases. We will only present the dispersion biases estimated for unknown difficulties (table 5).

Table 5. Dispersion biases of the different IRT methodologies for comparing groups, for different simulation parameters; the difficulties are considered unknown, the latent traits are normally distributed.

https://doi.org/10.1371/journal.pone.0044695.t005

The 2-Steps, Wald-test and PV-MML-Cov methods were the only methodologies which did not present any dispersion bias relevant in practice, whatever the values of the other parameters. The methods for which $\bar{\hat{\sigma}}^2$ was biased are presented in table 6.

Table 6. Dispersion biases of the different methodologies considered for comparing groups on subjective measurements.

https://doi.org/10.1371/journal.pone.0044695.t006

Increasing the number of items resulted in a decrease of the dispersion biases: the transition from 5 to 10 items reduced the dispersion biases by an average of 18%, whatever the values of the other parameters.

The difficulties distribution affected the dispersion biases only for the methods based on a fixed effects Rasch model (WML-CML, PV-CML and MI-CML methods). The transition from a normal distribution to a bimodal Gaussian mixture distribution resulted in an increase of the estimated variance of 14% on average, whatever the values of the other parameters. For the other methods (EAP-MML, PV-MML, MI-MML, EAP-MML-Cov, PV-MML-Cov and MI-MML-Cov methodologies), the difficulties distribution did not affect the dispersion biases.

Whatever the method considered, neither the sample size nor the simulated difference $d$ affected the dispersion biases, whatever the values of the other parameters.

Example

We illustrate the results of this simulation study using data coming from the surveillance program for upper-extremity musculoskeletal disorders (UE-MSDs) in the working population of the French Loire Valley region [28]. One of the objectives of this study was to compare the quality of life of workers according to their occupational category.

In this example, we focused on comparing the physical role level of blue-collar workers to that of other workers. The physical role was estimated using the RP (Role Physical) subscale of the SF-36 questionnaire [13], which includes four dichotomous items. We only included individuals aged between 21 and 50 years to take into account the potential confounding effect of age. In total, 591 blue-collar workers and 828 other workers aged from 21 to 50 years completed the SF-36 questionnaire. The observed item non-response rate was very low (1.2% in blue-collar workers and 1.0% in other workers).

We used all the methods which did not result in an observed type I error significantly greater than 0.05 to compare the physical role across the workers’ occupational categories. The methods used were either based on CTT (the score t-test) or based on IRT (methods based on fixed effects Rasch models: WML-CML, PV-CML and MI-CML; methods based on random effects Rasch models: EAP-MML, PV-MML and MI-MML; and a method based on a random effects Rasch model including a group covariate: the Wald-test method). The score used for the t-test method was calculated as recommended by the SF-36 manual, imputing missing responses by the average of the observed responses for each individual who responded to at least half of the items [13]. The results of all these comparisons are presented in table 7.

Table 7. Measurement of the physical role difference between blue-collar workers and workers from other occupational categories.

https://doi.org/10.1371/journal.pone.0044695.t007

Only four methods highlighted a significant physical role difference according to the occupational category: the score t-test, WML-CML, EAP-MML and Wald-test methods. These were the methods presenting the highest powers in our simulation study, and in this example their powers were substantially identical. Finally, the estimation of the latent trait difference varied across methodologies: EAP-MML and WML-CML provided the lowest estimates of the latent trait difference. Extrapolating from the simulation study, only the score t-test and Wald-test methods were unbiased.

In a second step, we randomly generated missing data and compared once again the physical role of blue-collar workers to that of other workers, in order to study the effect of missing data on these group comparison methods. The simulated probability of an item non-response was set to 20%. We simulated whether an individual responded to an item using Bernoulli trials. Such a method for generating missing data ensured the non-informativity of the missing data. We used the same comparison methods as previously. The results of these comparisons are presented in table 8.

Table 8. Measurement of the physical role difference between blue-collar workers and workers from other occupational categories after simulating an item non-response rate of 20%.

https://doi.org/10.1371/journal.pone.0044695.t008

The estimation of the score difference between groups using the t-test method varied by more than 20% depending on whether data were complete or missing. Although the missing data were fully non-informative, the estimated score difference between groups was lower in the presence of missing data. On the other hand, the estimation of the latent trait difference between groups using non-stochastic IRT methods (WML-CML, EAP-MML and Wald-test methods) did not seem to be affected by the presence of missing data: for these methods, the latent trait difference estimates varied by less than 5%. Finally, when data were missing, only two methods highlighted a significant physical role difference according to the occupational category: the EAP-MML and Wald-test methods. The score t-test method no longer highlighted such a difference.

Discussion

Choice of the Most Efficient Methods for Comparing Two Groups of Individuals on PRO Data

The preferred comparison methods are those for which the type I error is not significantly greater than 5%. Among them, those with the greatest power are preferred and, among these, those with the smallest biases are the ones to consider.

Type I error.

The methods based on the analysis of individual latent traits estimated by a Rasch model with a group covariate (EAP-MML-Cov, PV-MML-Cov and MI-MML-Cov methods) and the 2-Steps method resulted in an unacceptable rate of type I error. These methods were therefore unsuitable for latent trait comparison.

Power.

Among the methods controlling the type I error, the methods based on multiple imputations of plausible values (MI-CML and MI-MML methods) had the lowest power. This power loss can be associated with their dispersion biases: their estimated variances were all biased and greater than the simulated variance $\sigma^2$. These biases were related to the addition of the within-subject variance component to the latent trait variance estimate. This within-subject variance in fact reflects the imprecision of the individual latent trait estimates, and is not related to the individual latent trait variability [20]. Indeed, in the framework of cross-sectional studies, each individual latent trait is measured only once, which does not make it possible to assess individual latent trait variability. Therefore, if one focuses on the latent trait dispersion parameters within a population at a given time (as in cross-sectional studies), only the between-subject variance should be taken into account.

Methods based on plausible values (PV-CML and PV-MML methods) presented a moderate power. For the PV-CML method, this limited power can be linked to the increase of the dispersion biases associated with the use of plausible values: methods based on conditional likelihood for estimating individual latent traits are known to result in a biased, inflated variance estimate [29], and the addition of a between-subject variance component with plausible values methodologies can only increase this bias. For the PV-MML method, this limited power can be linked to the dispersion biases due to the use of Bayesian expected a posteriori estimates of the individual latent traits [30]. These expected a posteriori estimates are indeed shrunk towards their a priori value; thus, the estimated variances are decreased compared to the simulated $\sigma^2$.

The WML-CML, EAP-MML, Wald-test and score t-test methods presented the highest powers, which were almost identical.

Biases.

As expected, the WML-CML method did not lead to any position bias relevant in practice, but it led to dispersion biases when estimating the latent trait distribution parameters [31]: the estimated variance was greater than the simulated variance $\sigma^2$. The EAP-MML method led to both position and dispersion biases when estimating the latent trait distribution parameters: the estimated differences were reduced compared to the simulated $d$, and the estimated variance was smaller than the simulated $\sigma^2$. These biases were related to the shrinkage phenomenon associated with the Bayesian posterior estimates of the individual latent traits [29].

The Wald-test and score t-test methods did not lead to any position or dispersion bias when estimating the parameters of the latent trait distributions.

Influence of the simulation parameters.

For all the considered methods, an increase in the sample size involved an increase of the tests’ power. However, no link was found between the sample size and the magnitude of the observed biases.

An increase in the number of items involved a reduction of the position and dispersion biases, and an increase of the tests’ power. This phenomenon is known [32], and some authors recommend estimating the variances and means of latent traits with a Rasch model only if the questionnaire comprises a minimum of 10 items [17]. Since the Wald-test method provided unbiased estimates even with fewer than 10 items, this recommendation need not necessarily be followed when performing group comparisons with this method. The increase in power with the number of items is due to the subjective nature of the latent traits: latent variables being not directly observable, the accuracy of their estimates largely depends on the tool used to obtain them. Increasing the number of items of a questionnaire increases the accuracy of the latent trait estimation, and thus the power of the tests performed with this questionnaire [33].

Finally, a change in the distribution of the item difficulties did not affect the tests’ power, nor their position biases. However, such a change involved a variation of the dispersion biases for the methods based on a Rasch model, and a variation of the score variance for the methods based on the score analysis. In addition, a ceiling effect was observed when the item difficulties resulted from a mixture of Gaussian distributions.

Influence of the knowledge of the item difficulties.

Several scenarios were considered, the item difficulty parameters being considered as unknown, well known or imperfectly known.

The parameters chosen to simulate imperfectly known difficulties corresponded to a rather poor precision that might rarely be encountered in real situations. However, the impact of the knowledge of the item difficulties remained negligible on the power estimates of the different comparison methods, as well as on the estimated position biases [33]. Only the variance estimate of the latent traits was slightly increased when the item difficulties were imperfectly known.

It is therefore possible to use difficulty parameters previously estimated during an IRT-based questionnaire validation to perform group comparisons with IRT-based methods on PRO measurements in clinical trials or epidemiological studies. Moreover, using such difficulty parameters allows comparing patients coming from different studies that used the same questionnaire.

Influence of missing data and limitations of the study.

A limitation of this study is that it does not take into account the possible presence of missing data. An illustrative real data example was used for this purpose. This example shows some very important changes in the properties of the considered comparison methods according to whether data are missing or not. Even when the missing data are not informative, which is the most favourable case, the CTT-based method seems to be strongly disturbed by such missing data. In contrast, the IRT-based methods seem less affected by the presence of missing data, in view of the example presented in this article. These differences can be explained by the fact that with IRT, an individual latent trait is directly estimated by analysing the items the individual has answered, without taking account of the missing item responses. With Rasch family models, such estimations are consistent because of the specific objectivity property of these models. With CTT, on the other hand, the measurements are performed by calculating scores; when data are missing, the score calculation is only possible by imputing the missing data, which potentially generates biases. It seems important to continue this study by comparing these different group comparison methods in the presence of missing data, considering different missing data processes leading to informative or non-informative missing data (missing completely at random or not).

Even though more and more questionnaires are validated with IRT methods, the Rasch models investigated in this study may seem too restrictive to be applied to all situations of clinical research studies (in this study, the items were necessarily dichotomous, and the item difficulties had to be independent of the patient groups studied). It appears necessary to pursue this study by analysing extensions of the Rasch model allowing for the analysis of polytomous items (such as the partial credit model or the rating scale model), and for the analysis of items with difficulties that depend on the patient groups studied (by integrating the differential item functioning phenomenon into the studied models).

Conclusion

If data follow both a Rasch model and a CTT-based model, the most appropriate methods to compare two groups of patients on PRO measurements are the comparison of scores by t-test when analysing such variables with CTT, and the covariate Wald test, performed with a random effects Rasch model including a group covariate, when analysing such variables with IRT. These two methods displayed very similar powers and unbiased estimates.

Author Contributions

Conceived and designed the experiments: JFH JBH VS. Performed the experiments: JFH. Analyzed the data: JFH. Contributed reagents/materials/analysis tools: JFH JBH GK TLN VS YR. Wrote the paper: JFH JBH VS.

References

  1. Lipscomb J, Gotay CC, Snyder CF (2007) Patient-reported outcomes in cancer: a review of recent research and policy initiatives. CA: A Cancer Journal for Clinicians 57: 278–300.
  2. Willke J, Burke LB, Erickson P (2004) Measuring treatment impact: a review of patient-reported outcomes and other efficacy endpoints in approved product labels. Controlled Clinical Trials 25: 535–552.
  3. Cronbach LJ (1951) Coefficient alpha and the internal structure of tests. Psychometrika 16(3): 297–334.
  4. Glas C (1988) The derivation of some tests for the Rasch model from the multinomial distribution. Psychometrika 53: 525–546.
  5. Walters SJ, Campbell MJ, Lall R (2001) Design and analysis of trials with quality of life as an outcome: a practical guide. Journal of Biopharmaceutical Statistics 11: 155–176.
  6. Hambleton RK, Jones RW (1993) Comparison of classical test theory and item response theory and their applications to test development. Educational Measurement: Issues and Practice 12(3): 38–47.
  7. Wyrwich KW (2004) Minimal important difference thresholds and the standard error of measurement: is there a connection? Journal of Biopharmaceutical Statistics 14: 97–110.
  8. Tubach F, Ravaud P, Baron G, Falissard B, Logeart I, et al. (2005) Evaluation of clinically relevant changes in patient reported outcomes in knee and hip osteoarthritis: the minimal clinically important improvement. Annals of the Rheumatic Diseases 64: 29–33.
  9. Rasch G (1960) Probabilistic models for some intelligence and attainment tests. MESA Press.
  10. Holland P (2003) Classical test theory as a first-order item response theory: application to true-score prediction from a possibly nonparallel test. Psychometrika 68(1): 123–149.
  11. Cohen J (1988) Statistical Power Analysis for the Behavioral Sciences (second ed.). Lawrence Erlbaum Associates.
  12. Hunt S, McKenna S, McEwen J, Williams J, Papp E (1981) The Nottingham Health Profile: subjective health status and medical consultations. Social Science and Medicine 15(3): 221–229.
  13. Ware J, Sherbourne C (1992) The MOS 36-item Short-Form Health Survey (SF-36). I. Conceptual framework and item selection. Medical Care 30(6): 473–483.
  14. DeWalt D, Rothrock N, Yount S, Stone A (2007) Evaluation of item candidates: the PROMIS qualitative item review. Medical Care 45(5): 12–21.
  15. Andersen EB (1970) Asymptotic properties of conditional maximum-likelihood estimators. Journal of the Royal Statistical Society, Series B 32: 283–301.
  16. Molenaar IW (1995) Estimation of item parameters. In: Fischer GH, Molenaar IW, editors. Rasch models: foundations, recent developments and applications. New York: Springer. 39–52.
  17. Hoijtink H, Boomsma A (1995) On person parameter estimation in the dichotomous Rasch model. In: Fischer GH, Molenaar IW, editors. Rasch models: foundations, recent developments and applications. New York: Springer. 53–68.
  18. Rabe-Hesketh S, Skrondal A, Pickles A (2004) GLLAMM Manual. University of California-Berkeley, Division of Biostatistics, Working Paper Series. Available: http://www.bepress.com/ucbbiostat/paper160/.
  19. Christensen K, Bjorner J, Kreiner S, Petersen J (2004) Latent regression in loglinear Rasch models. Communications in Statistics 33(6): 1295–1313.
  20. Glas CAW, Geerlings H, van de Laar MAFJ, Taal E (2009) Analysis of longitudinal randomized clinical trials using item response models. Contemporary Clinical Trials 30: 158–170.
  21. Wu M (2005) The role of plausible values in large-scale surveys. Studies in Educational Evaluation 31: 114–128.
  22. Andrich D, Lyne A, Sheridan B (2006) RUMM 2020. Perth: RUMM Laboratory.
  23. Mislevy RJ (1991) Randomization-based inference about latent variables from complex samples. Psychometrika 56(2): 177–196.
  24. Rubin DB (1987) Multiple imputation for nonresponse in surveys. New Jersey: John Wiley & Sons.
  25. Tang L, Song J, Belin TR, Unützer J (2005) A comparison of imputation methods in a longitudinal randomized clinical trial. Statistics in Medicine 24: 2111–2128.
  26. Hardouin JB (2007) Rasch analysis: estimation and tests with raschtest. The Stata Journal 7(1): 22–44.
  27. Rabe-Hesketh S, Pickles A, Taylor C (2000) Generalised, linear, latent and mixed models. Stata Technical Bulletin 53: 47–57.
  28. Ha C, Roquelaure Y, Leclerc A, Touranchet A, Goldberg M, et al. (2009) The French musculoskeletal disorders surveillance program: Pays de la Loire network. Occupational and Environmental Medicine 66(7): 471–479.
  29. Eggen T (2000) On the loss of information in conditional maximum likelihood estimation of item parameters. Psychometrika 65(3): 337–362.
  30. Kim JK, Nicewander WA (1993) Ability estimation for conventional tests. Psychometrika 58(4): 587–599.
  31. Mislevy RJ (1984) Estimating latent distributions. Psychometrika 49(3): 359–381.
  32. Lord FM (1969) Estimating true-score distributions in psychological testing (an empirical Bayes estimation problem). Psychometrika 34(3): 259–299.
  33. Sébille V, Hardouin J, Le Néel T, Kubis G, Boyer F, et al. (2010) Methodological issues regarding power of classical test theory (CTT) and item response theory (IRT)-based approaches for the comparison of patient-reported outcomes in two groups of patients: a simulation study. BMC Medical Research Methodology 10: 24.