
Progression of the Radiologic Severity Index predicts mortality in patients with parainfluenza virus-associated lower respiratory infections

  • Ajay Sheshadri ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Writing – original draft, Writing – review & editing

    asheshadri@mdanderson.org

    Affiliation Department of Pulmonary Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas, United States of America

  • Dimpy P. Shah,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Validation, Writing – original draft, Writing – review & editing

    Current address: Department of Epidemiology and Biostatistics, University of Texas Health Science Center at San Antonio, San Antonio, Texas, United States of America

    Affiliation Department of Infectious Diseases, Infection Control and Employee Health, The University of Texas MD Anderson Cancer Center, Houston, Texas, United States of America

  • Myrna Godoy,

    Roles Data curation, Formal analysis, Investigation, Methodology, Validation, Writing – original draft, Writing – review & editing

    Affiliation Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas, United States of America

  • Jeremy J. Erasmus,

    Roles Data curation, Formal analysis, Methodology, Resources, Validation, Writing – original draft, Writing – review & editing

    Affiliation Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas, United States of America

  • Juhee Song,

    Roles Data curation, Formal analysis, Methodology, Validation, Writing – original draft, Writing – review & editing

    Affiliation Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, United States of America

  • Liang Li,

    Roles Data curation, Formal analysis, Methodology, Project administration, Resources, Validation, Writing – original draft, Writing – review & editing

    Affiliation Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, United States of America

  • Scott E. Evans,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Department of Pulmonary Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas, United States of America

  • Roy F. Chemaly,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Validation, Writing – original draft, Writing – review & editing

    Affiliation Department of Infectious Diseases, Infection Control and Employee Health, The University of Texas MD Anderson Cancer Center, Houston, Texas, United States of America

  • Burton F. Dickey,

    Roles Conceptualization, Investigation, Methodology, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Department of Pulmonary Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas, United States of America

  • David E. Ost

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Writing – original draft, Writing – review & editing

    Affiliation Department of Pulmonary Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas, United States of America


Abstract

Background

Radiologic severity may predict adverse outcomes after lower respiratory tract infection (LRI). However, few studies have quantified radiologic severity of LRIs. We sought to evaluate whether a semi-quantitative scoring tool, the Radiologic Severity Index (RSI), predicted mortality after parainfluenza virus (PIV)-associated LRI.

Methods

We conducted a retrospective review of consecutively-enrolled adult patients with hematologic malignancy or hematopoietic stem cell transplantation and with PIV detected in nasal wash who subsequently developed radiologically-confirmed LRI. We measured RSI (range 0–72) in each chest radiograph during the first 30 days after LRI diagnosis. We used extended Cox proportional hazards models to identify factors associated with mortality after onset of LRI with all-cause mortality as our failure event.

Results

After adjustment for patient characteristics, each 1-point increase in RSI was associated with an increased hazard of death (HR 1.13, 95% confidence interval [CI] 1.05–1.21, p = 0.0008). Baseline RSI was not predictive of death, but both peak RSI and the change from baseline to peak RSI (delta-RSI) predicted mortality (odds ratio for mortality, peak: 1.11 [95%CI 1.04–1.18], delta-RSI: 1.14 [95%CI 1.06–1.22]). A delta-RSI of ≥19.5 was 89% sensitive and 91% specific in predicting 30-day mortality.

Conclusions

We conclude that the RSI offers precise, informative and reliable assessments of LRI severity. Progression of RSI predicts 30-day mortality after LRI, but baseline RSI does not. Our results were derived from a cohort of patients with PIV-associated LRI and should be validated in other populations of patients with LRI.

Introduction

The use of mortality as a primary endpoint in clinical trials of antimicrobials has been hotly debated due to difficulty in the determination of attribution [1, 2]. The FDA recommended in 2009 that investigators consider clinical response or failure as an endpoint in trials of pneumonia [3]. Radiologic progression is considered a component of clinical failure [4, 5]. However, while some studies of pneumonia have found that radiologic progression or delayed radiologic resolution may be associated with adverse outcomes [6], other studies have failed to demonstrate this relationship [7, 8]. In other words, equipoise exists as to whether radiologic progression is clinically meaningful. The use of radiologic progression in clinical trials is limited by the fact that qualitative chest radiographic interpretations do not precisely capture degrees of change in radiologic severity [9, 10]. Moreover, qualitative chest radiograph interpretations have significant inter-observer variability (i.e., low reliability) [11, 12]. This lack of precision and reliability reduces statistical power and thereby reduces the effectiveness of qualitative interpretations of radiologic severity as an outcome measure in pneumonia trials. Currently, no validated scoring tool exists for assessing severity of radiographic infiltrates in pneumonia.

Systematic quantification of radiological severity could allow for a valid and reliable score that can serve as a biomarker of mortality. While thoracic computed tomography (CT) and chest X-ray (CXR) are important clinical tools for evaluating pneumonia [13], few studies have sought to quantify pneumonia severity radiologically [14–16]. Precise, quantitative image-based assessments of pneumonia may add prognostic value to existing clinical risk assessment tools [17], and changes in radiologic severity may predict adverse outcomes after pneumonia [6]. The development of a validated quantitative assessment of radiological severity would allow investigators to measure pneumonia progression or resolution with greater precision and reliability than a dichotomous assessment (e.g. “worsening” vs. “stable”), which would improve power. Such a tool would be well suited for inclusion as part of a composite endpoint of clinical treatment failure along with mortality.

To explore the value of quantitative assessments of radiologic severity, we sought to determine whether radiologic progression of pneumonia as quantified by a systematic scoring tool, the Radiologic Severity Index (RSI) [14], would predict mortality after lower respiratory tract infection (LRI). We tested this tool in a well-characterized cohort of patients enrolled early in the course of parainfluenza virus (PIV)-associated LRI, an infection in which mortality can approach 55% in patients with hematologic malignancies [18–20]. We further hypothesized that this scoring tool would have high reliability between readers. The primary objective of this study was to determine whether progression of LRI, as quantified by RSI, in a high-risk population of patients with LRI could predict mortality.

Methods

Patients

We conducted a retrospective chart review of a previously-established cohort of consecutively-enrolled adult patients with hematologic malignancies or those who had undergone hematopoietic stem cell transplantation (HSCT) who developed symptomatic PIV upper respiratory tract infection (URI) between October 1, 2002 and November 30, 2007 [18]. PIV was detected by direct immunofluorescence assay (DFA) or shell vial culture from nasal wash. We included patients with PIV URI who presented with or subsequently developed radiologic infiltrates within 28 days of PIV diagnosis in nasal wash. We excluded by consensus (A.S., D.O.) those with clinical volume overload as assessed by echocardiography, right heart catheterization, or response to diuretics, and those who had resolving infiltrates from a known prior LRI at the time of PIV detection in nasal wash. Our final cohort consisted of patients with newly-developed LRI preceded by PIV URI. This study was approved by our Institutional Review Board 4 (Federalwide Assurance #00000363) in accordance with the Helsinki Declaration of the World Medical Association with a waiver of informed consent (PA15-0892).

Definitions

PIV-associated LRI was defined as the detection of PIV from nasal wash with symptoms of a clinical pneumonia syndrome (fever >38.3°C, cough, dyspnea and/or hypoxemia in addition to new or progressive pulmonary infiltrates seen on CT or CXR) in accordance with guidelines [21]. We chose a clinical definition of LRI since not all patients underwent bronchoscopy to detect PIV or bacterial superinfection in bronchoalveolar lavage (BAL) [22]. Onset of PIV-associated LRI was defined as the first detection of radiologic infiltrates on CXR or CT. Two investigators with experience in pulmonary medicine (A.S, D.O.) determined which patients had LRI.

Data collection

All clinical and oncologic data were collected prospectively from a database of viral infections used in infection control. Data collected included demographic information (age, gender, race), underlying malignancy, cancer status (remission, relapse, or refractory), PIV detection in BAL (if performed), presence of BAL co-pathogens, cytotoxic chemotherapy, and administration of antiviral therapy with ribavirin and/or intravenous immunoglobulin therapy. For patients who underwent HSCT, we collected type of HSCT (autologous vs. allogeneic), the presence of acute or chronic graft-versus-host disease (GVHD) prior to LRI, and corticosteroid administration within 30 days of infection.

Radiologic scoring

We applied a semi-quantitative scoring tool for LRI severity initially developed in patients with H7N9 influenza LRI [14] to our study cohort due to its applicability to both CT and CXR images and ability to predict mortality. Per Fleischner Society guidelines [23], we defined consolidation as pulmonary infiltrates that obscure the margins of vessels and airway walls and ground glass opacities (GGOs) as pulmonary infiltrates that do not obscure bronchial and vascular margins. We assessed pulmonary infiltrates on CXR and CT scans in three zones: upper (above carina), middle (below carina, above inferior pulmonary vein), and lower (below inferior pulmonary vein) in both lungs.

Table 1 shows how RSI scores were calculated. Pulmonary infiltrates were scored on a three-point scale based on the predominant pattern in that zone: normal attenuation: 1, GGOs: 2, consolidation: 3. In general, CXR infiltrates were considered to be consolidation unless there was clear evidence of interstitial infiltrates, which were scored as GGOs. We multiplied this score by a factor based on extent of volumetric involvement: normal: 0, 1–24%: 1, 25–50%: 2, 51–75%: 3, >75%: 4. Scores from each zone were added to give the final score, called the Radiologic Severity Index (RSI), which ranges from 0–72. Dense clusters of nodules on CXR or CT were considered to be consolidative infiltrates for the purpose of RSI scoring. Volumetric assessment of the extent of infiltrates on CXR was done by the best estimate on planar anteroposterior or posteroanterior views as lateral views were not always available. Fig 1 shows representative RSI scoring for both CXR and CT images.
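As a concrete illustration, the scoring rules above can be expressed in a few lines of code. This is a hypothetical sketch for clarity, not software used in the study; all function and variable names are our own.

```python
# Illustrative sketch of the RSI calculation described above (not the authors' code).
# Each of the six lung zones receives a pattern score (1 = normal attenuation,
# 2 = GGO, 3 = consolidation) multiplied by an extent factor based on the
# percent volumetric involvement of that zone (0, 1, 2, 3, or 4).

PATTERN_SCORES = {"normal": 1, "ggo": 2, "consolidation": 3}

def extent_factor(percent_involved: float) -> int:
    """Map percent volumetric involvement of a zone to the 0-4 multiplier:
    normal: 0, 1-24%: 1, 25-50%: 2, 51-75%: 3, >75%: 4."""
    if percent_involved == 0:
        return 0
    if percent_involved < 25:
        return 1
    if percent_involved <= 50:
        return 2
    if percent_involved <= 75:
        return 3
    return 4

def radiologic_severity_index(zones) -> int:
    """zones: list of six (pattern, percent_involved) tuples, one per lung zone
    (upper, middle, lower in each lung). Returns the RSI, which ranges from 0
    (all zones normal) to 72 (six zones x pattern 3 x extent factor 4)."""
    assert len(zones) == 6, "RSI is defined over six lung zones"
    return sum(PATTERN_SCORES[pattern] * extent_factor(pct) for pattern, pct in zones)
```

For example, six zones of dense consolidation at >75% involvement yield the maximum score of 6 × 3 × 4 = 72, while a single zone of GGOs occupying 30% of its volume, with the remaining zones normal, scores 2 × 2 = 4.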

Fig 1. Representative images of RSI scoring for CXR and CT images.

RSI scores are labelled within each panel. Panels (A)-(C) show CXR images from an individual patient in order of increasing severity. Panels (D)-(F) show CT images from a different individual patient in order of increasing severity.

https://doi.org/10.1371/journal.pone.0197418.g001

Two senior thoracic radiologists (M.G., J.E.) reviewed the RSI literature with two pulmonary investigators (A.S., D.O.) and scored sample CXRs and CTs from a training set to establish a consistent rule-based system for scoring images. This training set consisted of patients without PIV-associated lower respiratory tract infection (LRI) but with a broad range of severity of infiltrates. After training, the radiologists independently scored all CXR and CT images performed within 60 days of LRI in sequential order, and we used the mean of the two RSI scores for analyses. The radiologists were blinded to the primary outcome of death and to each other’s scoring. We defined baseline RSI as the RSI score on the day of LRI onset. We defined peak RSI as the highest RSI score on CXR or CT after LRI onset and delta-RSI as the change from baseline to peak RSI.

Statistical analysis

We used chi-square or Fisher’s exact test to analyze categorical data. We used a Student’s t-test or Wilcoxon rank-sum test to analyze continuous data. We used an extended Cox proportional hazards model to analyze the factors associated with time to death after onset of LRI [24]. We incorporated RSI into the model as a time-varying covariate. In our primary analysis, we used RSI scores from CXR only (RSI-CXR) for the first 30 days after admission and censored patients who were alive at 30 days after LRI. In secondary analyses, we added RSI scores from CT (RSI-CT) to RSI-CXR scores in separate extended Cox models and censored all RSI scores and survival at 14, 30, or 60 days. We used the last-observation-carried-forward method to impute RSI on days in which imaging was not performed. We assumed that subjects who only had imaging performed at baseline had RSI values that remained stable throughout the study period or until they died. Variables associated with mortality with p-values <0.2 in univariate analysis were candidate variables in multivariate extended Cox models, and we used backward elimination to only include variables with p<0.05. In secondary analyses, we replaced RSI with the number of involved zones (0–6). We used an extended Cox model to assess the association between the number of involved zones, expressed as a time-varying covariate, and time to death. We then compared models using RSI with models using the number of involved zones by using the generalized R2 method to determine which model was best [25]. Generalized R2 estimates in Cox models are often low due to censored data and do not reflect the robustness of models; however, this technique can compare different models.
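The last-observation-carried-forward step can be illustrated as follows. This is a simplified sketch of the imputation logic only (the study analyses were performed in SAS), with hypothetical names.

```python
# Sketch of last-observation-carried-forward (LOCF) imputation of daily RSI
# (illustrative only; not the authors' code). RSI scores observed on imaging
# days are carried forward to fill days on which no imaging was performed.

def locf_impute(observed: dict, n_days: int) -> list:
    """observed: {day_index: rsi_score} for days with imaging, where day 0 is
    LRI onset and must be present (the baseline RSI). Returns a list of daily
    RSI values of length n_days, carrying the last observed score forward."""
    if 0 not in observed:
        raise ValueError("baseline (day 0) RSI is required")
    daily, last = [], None
    for day in range(n_days):
        if day in observed:
            last = observed[day]  # new imaging updates the carried value
        daily.append(last)
    return daily
```

For example, `locf_impute({0: 10, 3: 22}, 6)` returns `[10, 10, 10, 22, 22, 22]`: the baseline score of 10 is carried through day 2, and the day-3 score of 22 through day 5. A patient imaged only at baseline keeps a constant RSI, matching the stability assumption stated above.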

We used univariate logistic regression models with baseline, peak and delta-RSI as separate predictors of 30-day mortality. We used Bland-Altman plots [26], paired t-tests, Pitman-Morgan tests and intra-class correlation (ICC) to assess agreement, bias, and variance between readers and diagnostic modalities using all data up to 60 days post-LRI. P-values <0.05 were considered statistically significant and all tests were two-sided. We used SAS v9.4 (SAS Institute Inc., Cary, NC) for statistical analyses.
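The Bland-Altman quantities used here reduce to the bias (mean paired difference) and the 95% limits of agreement (bias ± 1.96 standard deviations of the paired differences). A minimal sketch with hypothetical names (the study used SAS):

```python
# Illustrative Bland-Altman computation for paired scores from two readers
# (or two modalities): bias = mean paired difference; 95% limits of
# agreement = bias +/- 1.96 * SD of the differences.

from statistics import mean, stdev

def bland_altman(scores_a, scores_b):
    """scores_a, scores_b: paired RSI scores for the same images.
    Returns (bias, lower_limit_of_agreement, upper_limit_of_agreement)."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A systematic offset between readers appears as a nonzero bias; reader-to-reader scatter widens the limits of agreement.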

Results

Baseline characteristics

Fig 2 shows our selection of the study cohort. We excluded five patients who had resolving LRI at the time of PIV diagnosis and did not exclude any patients due to volume overload at the time of PIV diagnosis. Table 2 compares baseline characteristics between survivors and non-survivors at 30 days after LRI onset. Survivors were more likely to be female (survivors: 50%, non-survivors 11%, p = 0.04). Otherwise, there were no significant differences in age, race, underlying malignancy, remission status, or exposure to cytotoxic chemotherapy within 30 days between survivors and non-survivors. In patients who underwent bronchoscopy (n = 32), detection of PIV in BAL fluid was not significantly associated with mortality (non-survivors: 2/3, 67% detection, survivors: 6/29, 21% detection, p = 0.15), possibly due to the small number of patients who underwent BAL. Only two patients had co-infection with other pathogens detected in BAL fluid; cytomegalovirus was detected in one patient, and herpes simplex virus in the other. There was no significant difference in mortality between HSCT and non-HSCT patients (HSCT: 11.5%, non-HSCT: 16.2%, p = 0.60). Table 3 shows baseline characteristics for the subgroup of patients who had undergone HSCT. No significant differences existed between survivors and non-survivors in time from HSCT to LRI, HSCT type (autologous vs. allogeneic), transplant source (bone marrow, cord blood, or peripheral blood), administration of antiviral therapy, or baseline corticosteroid dose.

Fig 2. Enrollment flow chart.

Enrollment flow chart for patients with parainfluenza virus (PIV)-associated lower respiratory tract infection (LRI) included in this study for analysis. BAL = bronchoalveolar lavage.

https://doi.org/10.1371/journal.pone.0197418.g002

Table 3. Characteristics of patients with hematopoietic stem cell transplants.

https://doi.org/10.1371/journal.pone.0197418.t003

Change in Radiologic Severity Index as a predictor of mortality

In our primary extended Cox model, we incorporated 241 measurable RSI-CXR scores. Non-survivors underwent more radiologic tests within the study period (median number of total radiologic tests—non-survivors: 6, survivors: 3). In univariate Cox regression models, each 1-point increase in RSI-CXR measured as a time-varying covariate was associated with a 13% increase in the hazard for death (HR 1.13, 95% confidence interval [CI] 1.06–1.19, p<0.0001). The proportional hazards assumption was met. AML was associated with mortality in our univariate Cox regression model (HR 8.1, 95%CI 1.7–39.1, p = 0.01). After adjusting for age, presence of AML and gender, only the progression of RSI remained in the multivariate model and predicted mortality (multivariate HR 1.13, 95%CI 1.05–1.21, p = 0.0008, Table 4). Fig 3 shows longitudinal changes in mean RSI between survivors and non-survivors. Non-survivors had increases in RSI temporally associated with time of death, whereas RSI remained stable in survivors.

Fig 3. Trends in mean Radiologic Severity Index (RSI) scores over time in non-survivors (A) and survivors (B).

(A) Trends in mean RSI scores are shown for patients who died within 14 days (solid circles, n = 4), patients who died between days 15 and 28 (solid squares, n = 5), and patients who died after day 28 (solid triangles, n = 2). (B) Trends in mean RSI scores are shown for survivors.

https://doi.org/10.1371/journal.pone.0197418.g003

Table 4. Univariate and multivariate Cox regression models for the prediction of 30-day mortality.

https://doi.org/10.1371/journal.pone.0197418.t004

In separate univariate Cox models using all RSI-CXR and RSI-CT scores for 14-, 30- or 60-day outcomes, RSI remained predictive of mortality (14-day mortality: HR 1.13, 95%CI 1.04–1.22, p = 0.005; 30-day mortality: HR 1.12, 95% CI 1.05–1.20, p = 0.0005; 60-day mortality: HR 1.11, 95%CI 1.06–1.17, p<0.0001). In secondary univariate analyses, extended Cox models using the number of involved zones predicted mortality (HR 2.24 per involved lobe, 95%CI 1.17–4.30, p = 0.02), but RSI was better at predicting mortality than the number of involved zones (RSI: R2 = 0.11, number of zones: R2 = 0.04) [25].

Prediction of mortality using baseline, peak and delta-RSI

Using univariate logistic regression models, we found that longitudinal assessments of RSI were informative, whereas baseline assessments were not. Mean delta-RSI scores were highly predictive of mortality (non-survivors: 33.6±5.6, survivors: 5.6±33.6, p<0.0001), as were peak RSI scores (non-survivors: 53.2±13.2, survivors: 18.9±17.6, p<0.0001) (Table 5). However, baseline RSI scores were not predictive of mortality (non-survivors: 19.6±8.8, survivors: 13.3±14.3, p = 0.22). Adding RSI-CT scores to the logistic regression models improved the predictive power of delta-RSI but not peak RSI (Table 6). Delta-RSI using only RSI-CXR scores had good discrimination (area under the ROC curve: 0.92) but this discrimination improved with the addition of RSI-CT scores (area under the ROC curve: 0.97).
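To make the delta-RSI cutoff concrete: classifying patients as predicted deaths when delta-RSI meets or exceeds a threshold yields sensitivity and specificity as sketched below. The data and names are hypothetical; the cohort's actual values are reported in Tables 5 and 6.

```python
# Illustrative computation of sensitivity and specificity for a delta-RSI
# cutoff (peak RSI minus baseline RSI) predicting 30-day mortality.
# The inputs below are hypothetical, not the study cohort.

def cutoff_performance(delta_rsi, died, cutoff):
    """delta_rsi: per-patient delta-RSI values; died: parallel list of booleans.
    Patients with delta_rsi >= cutoff are classified as predicted deaths.
    Returns (sensitivity, specificity)."""
    tp = sum(1 for d, y in zip(delta_rsi, died) if d >= cutoff and y)
    fn = sum(1 for d, y in zip(delta_rsi, died) if d < cutoff and y)
    tn = sum(1 for d, y in zip(delta_rsi, died) if d < cutoff and not y)
    fp = sum(1 for d, y in zip(delta_rsi, died) if d >= cutoff and not y)
    return tp / (tp + fn), tn / (tn + fp)
```

For instance, with hypothetical delta-RSI values [30, 12, 5, 22, 2] and deaths [True, True, False, False, False], a cutoff of 19.5 catches one of two deaths (sensitivity 0.5) and correctly rules out two of three survivors (specificity 2/3).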

Table 5. Univariate logistic regression models for the prediction of 30-day mortality with baseline RSI, peak RSI and delta-RSI with RSI-CXR scores.

https://doi.org/10.1371/journal.pone.0197418.t005

Table 6. Univariate logistic regression models for the prediction of 30-day mortality with baseline RSI, peak RSI and delta-RSI with any RSI score.

https://doi.org/10.1371/journal.pone.0197418.t006

Reliability of RSI between readers

Reliability between readers was excellent for all RSI scores (ICC: 0.99). Table 7 shows the distribution of radiologic patterns in CT and CXR as scored by each reader. We included all available images that were scored for all patients within 60 days of LRI onset (n = 338). Fig 4 shows Bland-Altman plots for agreement between readers in RSI, RSI-CT and RSI-CXR. There was no difference in variance between readers; reader 1 systematically assigned higher RSI scores to images as compared with reader 2, but the difference was <1 unit RSI (Table 8).

Fig 4. Bland-Altman plots for agreement between expert radiologists.

Bland-Altman plots for agreement between expert radiologists in (A) RSI, (B) RSI scores restricted to computed tomography (CT) only (RSI-CT), and (C) RSI scores restricted to chest x-ray (CXR) measurements only (RSI-CXR). Upper and lower gray bars show 95% confidence intervals for upper and lower limits of agreement. Center gray bar shows 95% confidence intervals for bias. Solid line represents the slope of the bias (p = NS).

https://doi.org/10.1371/journal.pone.0197418.g004

Table 7. Distribution of radiologic patterns in chest X-rays and chest computed tomography scans.

https://doi.org/10.1371/journal.pone.0197418.t007

Reliability of RSI between radiologic modalities

Fig 5 shows a Bland-Altman plot comparing the first available RSI-CT with the closest RSI-CXR obtained within 2 days of RSI-CT. We identified 41 CXR-CT dyads that were performed within 48 hours of each other in the same patient. The reliability between RSI-CT and RSI-CXR was good (intra-class correlation: 0.76). There was no fixed bias between RSI-CXR and RSI-CT (Table 9); however, there was bias which changed with the severity of LRI (Fig 5). When LRI was mild, RSI-CXR was systematically lower than RSI-CT (i.e. RSI-CXR underestimated RSI-CT). When LRI was severe, RSI-CXR was systematically higher than RSI-CT (i.e. RSI-CXR overestimated RSI-CT). RSI-CXR demonstrated higher variance than RSI-CT (Pitman-Morgan p = 0.0008).

Fig 5. Bland-Altman plots for agreement between RSI-CXR and RSI-CT.

Upper and lower gray bars show 95% confidence intervals for upper and lower limits of agreement. Center gray bar shows 95% confidence intervals for bias. Solid line represents the slope of the bias (slope = 0.38; p = 0.0008). RSI-CXR systematically underestimates RSI-CT at low RSI scores and overestimates RSI-CT at high RSI scores.

https://doi.org/10.1371/journal.pone.0197418.g005

Discussion

We presented the Radiologic Severity Index (RSI), a systematic radiologic scoring tool for LRI severity previously used in H7N9 influenza LRI [14], and tested its utility in patients with clinical PIV-associated LRI. Progression of RSI after adjusting for baseline factors predicted mortality. Delta-RSI and peak RSI were accurate predictors of 30-day mortality. However, baseline RSI was not predictive of mortality. RSI performed better than traditional qualitative radiologic assessments of LRI severity. A major advantage of RSI over qualitative radiologic assessments is high inter-observer reliability. We propose that the RSI is a valid and reliable tool for the assessment of radiologic severity of LRI and would serve well as a biomarker of mortality in clinical trials as part of a composite endpoint of clinical treatment failure.

Few studies have attempted to systematically quantify LRI severity radiologically. Grieser et al. developed a semi-quantitative CT-scoring tool for the acute respiratory distress syndrome after H1N1 influenza LRI in 23 patients [15]. This tool scored infiltrates (normal/GGO/consolidation) identically to RSI but estimated extent of involvement to the nearest 10% on CT instead of 25% as with RSI. A CT-score of >100 (range 0–600) at H1N1 diagnosis was associated with higher mortality and requirement for extracorporeal membrane oxygenation. Feng et al. modified this CT-score by estimating the extent of involvement to the nearest 25% and applying the score to both CXRs and CTs performed at H7N9 LRI diagnosis in 22 patients [14]. A score of >19 on CXR or >22 on CT was predictive of mortality, with CT having more predictive value. We modified Feng et al.’s simplified quantitative tool by adding further scoring guidelines (enumerated in the Methods) in order to improve reliability, especially regarding the scoring of CXRs. Our modifications retain the granularity of the above iterations but allow for easier replication of RSI scoring.

Our study of RSI adds to the existing literature by showing that progression of RSI on longitudinal assessment predicts mortality after PIV-associated LRI. Moreover, delta-RSI had excellent discriminatory ability to predict 30-day mortality, while baseline RSI scores were not predictive of mortality after PIV-associated LRI. This demonstrates that it is the change in radiologic severity that is informative. Commonly used clinical risk assessment tools for community acquired pneumonia (CAP), such as the CURB-65, were primarily developed to predict outcomes using baseline data and therefore cannot serve as outcome measures of pneumonia [27–32]. In addition, these clinical risk assessment tools are only valid when measured on admission, but are not valid when used as serial longitudinal assessments. In contrast, our data show that longitudinal assessments of RSI, unlike baseline assessments of RSI, are highly associated with mortality and therefore can be used as an outcome in trials of pneumonia.

Our findings build on those of other investigators. A substantial proportion of patients have persistent infiltrates weeks after LRI onset. Mittl et al. found that in 81 patients hospitalized with CAP, 33% had delayed resolution of pneumonia [7], but this was not associated with mortality. Bruns et al. found that in 288 patients hospitalized for CAP, 47% had infiltrates 28 days after onset of CAP and delayed resolution of CAP was associated with more severe disease [8], although progression of infiltrates was not associated with mortality. Lisboa et al. found that in 457 ICU patients, rapid progression of CAP, as defined by >50% increase in CXR infiltrates from baseline to 48 hours, was a stronger predictor of shock and mortality than bacteremia [6]. While all of these prior investigators found that radiographic changes are common during the course of LRI, their conclusions regarding the relationship between radiographic changes and mortality vary. We believe that the reason for this variability in conclusions between prior reports is due at least in part to differences in patient selection and analytic methodology (S1 Discussion). Our population was ideal for testing the validity of RSI as we focused on a well-defined population of patients who had a high baseline risk for mortality and who had early LRI, similar to Lisboa et al [6]. Mortality in pneumonia can range from 7% in non-immunocompromised inpatients with CAP [33] to >50% in immunocompromised patients with pneumonia [34–36]. By studying RSI in a population of patients with a high baseline risk of mortality after PIV-associated LRI [18–20], we ensured that we had a sufficiently high event rate to measure the effect of LRI progression on mortality. In addition, our focus on early LRI preserved our ability to capture the full magnitude of worsening radiologic severity.
Our use of longitudinal assessments with extended Cox models employing time-varying covariates gave us greater power to measure the effects of LRI progression on mortality due to the ability to precisely measure the progression of radiologic severity while accounting for changing severity and without introducing bias.

In addition, precise quantification of radiographic results likely improved our ability to detect associations of radiologic severity with mortality. Others have found that multilobar pneumonia, as determined on qualitative chest radiograph interpretation, is associated with death [37–41], treatment failure [5] and delayed resolution of pneumonia [42]. Our study adds to these observations by showing that mortality increases in proportion to the number of zones involved through the use of time-varying covariates. However, RSI was a more robust estimator of LRI severity, likely due to RSI’s provision of an estimate of the extent of involvement of each lobe in addition to the number of lobes involved, allowing greater precision in severity estimates. For example, the use of number of involved lobes would classify multilobar scattered GGOs as more severe than a dense unilobar consolidation. More precisely quantifying radiologic progression using RSI improves the ability to detect meaningful radiologic changes. Furthermore, using RSI as a defined outcome has the potential to improve statistical power to detect differences over conventional qualitative assessments.

The high reliability of RSI among readers improves precision in the assessment of radiologic severity. In studies using qualitative interpretations of radiologic severity, inter-observer variability between readers of chest radiographs may have impacted the ability to find an association between radiologic severity and mortality. For example, El Solh et al. found that when determining whether chest radiographs had consolidation, interstitial infiltrates, or a mixed pattern, the kappa statistic between two radiologists was 0.6 [42]. Taylor et al. found a kappa statistic of 0.75–0.83 between two radiologists when using a 5-point scale for pneumonia severity [16]. In comparison, we observed an ICC of 0.99 between two expert radiologists scoring RSI on a 72-point scale, which is excellent and compares favorably to the reliability reported by other investigators. We propose that the high inter-observer reliability of RSI is due to the granularity of scoring and the ability to capture the entire continuum of radiologic progression from minimal to extensive. This granularity and high reliability are necessary characteristics for imaging biomarkers [43].

Others have highlighted the discrepancy between CXR and CT when diagnosing LRI, with discordance between imaging modalities in the diagnosis of LRI in 27–48% of patients [13, 44, 45]. CXR may underestimate or overestimate the presence of LRI as compared with CT, and the agreement between the two modalities is generally poor [13, 46]. Our study adds to the existing literature by showing that RSI-CXR systematically underestimates RSI-CT when infiltrates are sparse and systematically overestimates RSI-CT when infiltrates are dense. This is likely because sparse infiltrates are more readily detected by the cross-sectional imaging of CT, whereas dense infiltrates lead CXR to overestimate severity through superimposition of opacities over unaffected lung. Due to the retrospective nature of our study, the temporal dissociation between CT and CXR assessments, and the relative scarcity of RSI-CT measurements, we cannot accurately estimate the gain in accuracy with the use of RSI-CT over RSI-CXR. However, our study suggests that the addition of RSI-CT scores to RSI-CXR scores improves the predictive power of RSI when using delta-RSI. Further work is necessary to define optimal imaging strategies for assessing radiologic severity.
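This pattern of modality disagreement is the kind typically examined with a Bland–Altman analysis [26]. A minimal sketch follows, using hypothetical paired scores rather than data from this study; a positive slope of the difference against the pair mean captures the proportional bias described above:

```python
import numpy as np

# Hypothetical paired severity scores on the same patients (0-72 scale);
# illustrative values only, not data from this study.
rsi_cxr = np.array([3.0, 6.0, 12.0, 20.0, 30.0, 45.0, 55.0, 65.0])
rsi_ct = np.array([6.0, 9.0, 14.0, 21.0, 29.0, 41.0, 50.0, 58.0])

diff = rsi_cxr - rsi_ct            # CXR minus CT
mean = (rsi_cxr + rsi_ct) / 2.0    # average severity of each pair
bias = diff.mean()                 # systematic offset between modalities
sd = diff.std(ddof=1)              # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

# A positive slope of difference vs. mean indicates proportional bias:
# CXR reads lower than CT at low severity and higher at high severity.
slope = np.polyfit(mean, diff, 1)[0]
```

When such proportional bias is present, a single mean offset understates the disagreement, which is one reason CXR- and CT-derived severity scores should not be treated as interchangeable.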

Not all patients in this study had proven PIV LRI, which by well-established criteria is defined as detection of PIV from a lower respiratory tract sample [19, 47, 48]. However, mortality after any PIV infection remains high in patients with hematologic malignancy and in HCT recipients [18]. Furthermore, bronchoscopy is not routinely performed at our institution in all patients with PIV infection detected by other means, so the diagnosis of proven PIV LRI may have been biased by selection for bronchoscopy. We therefore chose to study RSI in a well-characterized cohort of patients with newly developed LRI following PIV URI who had a high baseline risk of mortality after LRI.

Our study must be interpreted in the context of certain limitations. Since we tested RSI only in a cohort of patients with PIV-associated LRI, further validation is necessary in other LRI cohorts, such as patients with CAP or RSV-associated LRI. The study was retrospective, and measurements were not obtained systematically. Measurement bias may exist because sicker patients were likely to have more assessments, and patients who died may not have had radiologic assessment at peak severity. However, the lack of follow-up studies in healthier patients would likely bias our results towards the null hypothesis, since we would not capture resolution of pneumonia. Similarly, the lack of radiologic studies in patients who died before follow-up studies could be obtained would also bias our results towards the null hypothesis, because we would miss the progression of severe pneumonia. Our method of imputation ensured that our model likely captured progression of clinical disease, since imaging is typically ordered in response to changes in clinical status and not when a patient’s clinical status is stable (S2 Discussion). Radiologic tests were scored sequentially and not in random order, which may have introduced bias if knowledge of prior scores influenced the scores of later studies. Our study sample was small, though we had adequate power due to longitudinal assessments and time-to-event analyses. Our cohort was enrolled over 10 years ago, but treatment of PIV infection remains limited to supportive care in the modern era. In addition, as in more modern cohorts, the mortality rate in this study was high, which was necessary to test the validity of RSI [19]. However, we used all-cause mortality as an endpoint instead of mortality attributed to LRI; therefore, it is possible that some patients died of unidentified causes not directly related to PIV-associated LRI. Because we detected PIV infection using DFA and not nucleic acid amplification testing (NAAT), we may have missed some patients with PIV infection. However, patients with PIV infection detected by NAAT but not DFA have a similar risk of death [19]. Because of these concerns, RSI should be validated in a more contemporary cohort. Despite exclusion of baseline volume overload, some patients may have developed volume overload after enrollment; however, this would likely bias results toward the null hypothesis if volume overload were associated with lower mortality than LRI. Finally, because RSI is intended to capture diffuse radiologic infiltrates, it may not be appropriate for scoring infections that tend to present with scattered nodular infiltrates on thoracic imaging, such as fungal infections.

Conclusions

In summary, we show that progression of radiologic infiltrates, as assessed by the RSI, predicts mortality in a high-risk cohort of patients with LRI. Longitudinal assessments of RSI were predictive of mortality, while baseline assessments were not. RSI was superior to previous methods of quantifying LRI severity due to improved precision and reliability. Substantial discrepancies exist between CT and CXR assessments of LRI severity. Because RSI was tested in a cohort of patients with PIV-associated LRI, future studies are needed to validate RSI in other LRI populations and to correlate RSI with other important outcomes such as health care utilization and cost. RSI is a promising tool that offers granular, reliable, precise and informative assessments of LRI severity and is well suited for consideration as part of a composite endpoint of clinical treatment failure in therapeutic trials of pneumonia.

Supporting information

S1 Discussion. Selection of study cohort when using RSI to predict mortality.

This section offers a brief discussion on the utility of RSI depending upon a study population’s baseline risk of death and based on the type of statistical analysis employed.

https://doi.org/10.1371/journal.pone.0197418.s001

(DOCX)

S2 Discussion. Measurement bias and data sampling in retrospective and prospective cohort studies.

This section offers a brief discussion on pitfalls of data sampling in retrospective and prospective cohort studies, with suggestions on optimal data sampling techniques for potential prospective studies using RSI as a biomarker of radiological pneumonia severity.

https://doi.org/10.1371/journal.pone.0197418.s002

(DOCX)

S1 Dataset. Deidentified dataset in Excel (.xls) format.

https://doi.org/10.1371/journal.pone.0197418.s003

(XLS)

References

1. Musher DM. Clinical and microbiological end points in the treatment of pneumonia. Clin Infect Dis. 2008;47 Suppl 3:S207–9. pmid:18986291.
2. Muscedere JG, Day A, Heyland DK. Mortality, attributable mortality, and clinical events as end points for clinical trials of ventilator-associated pneumonia and hospital-acquired pneumonia. Clin Infect Dis. 2010;51 Suppl 1:S120–5. pmid:20597661.
3. Laessig KA. End points in hospital-acquired pneumonia and/or ventilator-associated pneumonia clinical trials: food and drug administration perspective. Clin Infect Dis. 2010;51 Suppl 1:S117–9. pmid:20597659.
4. Menendez R, Torres A, Zalacain R, Aspa J, Martin Villasclaras JJ, Borderias L, et al. Risk factors of treatment failure in community acquired pneumonia: implications for disease outcome. Thorax. 2004;59(11):960–5. pmid:15516472.
5. Roson B, Carratala J, Fernandez-Sabe N, Tubau F, Manresa F, Gudiol F. Causes and factors associated with early failure in hospitalized patients with community-acquired pneumonia. Arch Intern Med. 2004;164(5):502–8. pmid:15006826.
6. Lisboa T, Blot S, Waterer GW, Canalis E, de Mendoza D, Rodriguez A, et al. Radiologic progression of pulmonary infiltrates predicts a worse prognosis in severe community-acquired pneumonia than bacteremia. Chest. 2009;135(1):165–72. pmid:18689575.
7. Mittl RL Jr., Schwab RJ, Duchin JS, Goin JE, Albeida SM, Miller WT. Radiographic resolution of community-acquired pneumonia. Am J Respir Crit Care Med. 1994;149(3 Pt 1):630–5. pmid:8118630.
8. Bruns AH, Oosterheert JJ, Prokop M, Lammers JW, Hak E, Hoepelman AI. Patterns of resolution of chest radiograph abnormalities in adults hospitalized with severe community-acquired pneumonia. Clin Infect Dis. 2007;45(8):983–91. pmid:17879912.
9. Neuman MI, Lee EY, Bixby S, Diperna S, Hellinger J, Markowitz R, et al. Variability in the interpretation of chest radiographs for the diagnosis of pneumonia in children. J Hosp Med. 2012;7(4):294–8.
10. Williams GJ, Macaskill P, Kerr M, Fitzgerald DA, Isaacs D, Codarini M, et al. Variability and accuracy in interpretation of consolidation on chest radiography for diagnosing pneumonia in children under 5 years of age. Pediatr Pulmonol. 2013;48(12):1195–200. pmid:23997040.
11. Hopstaken RM, Witbraad T, van Engelshoven JM, Dinant GJ. Inter-observer variation in the interpretation of chest radiographs for pneumonia in community-acquired lower respiratory tract infections. Clin Radiol. 2004;59(8):743–52. pmid:15262550.
12. Elemraid MA, Muller M, Spencer DA, Rushton SP, Gorton R, Thomas MF, et al. Accuracy of the interpretation of chest radiographs for the diagnosis of paediatric pneumonia. PLoS One. 2014;9(8):e106051. pmid:25148361.
13. Claessens YE, Debray MP, Tubach F, Brun AL, Rammaert B, Hausfater P, et al. Early chest computed tomography scan to assist diagnosis and guide treatment decision for suspected community-acquired pneumonia. Am J Respir Crit Care Med. 2015;192(8):974–82. pmid:26168322.
14. Feng F, Jiang Y, Yuan M, Shen J, Yin H, Geng D, et al. Association of radiologic findings with mortality in patients with avian influenza H7N9 pneumonia. PLoS One. 2014;9(4):e93885. pmid:24705783.
15. Grieser C, Goldmann A, Steffen IG, Kastrup M, Fernandez CM, Engert U, et al. Computed tomography findings from patients with ARDS due to Influenza A (H1N1) virus-associated pneumonia. Eur J Radiol. 2012;81(2):389–94. pmid:21306851.
16. Taylor E, Haven K, Reed P, Bissielo A, Harvey D, McArthur C, et al. A chest radiograph scoring system in patients with severe acute respiratory infection: a validation study. BMC Med Imaging. 2015;15:61. pmid:26714630.
17. Brown SM, Jones JP, Aronsky D, Jones BE, Lanspa MJ, Dean NC. Relationships among initial hospital triage, disease progression and mortality in community-acquired pneumonia. Respirology. 2012;17(8):1207–13. pmid:22805170.
18. Chemaly RF, Hanmod SS, Rathod DB, Ghantoji SS, Jiang Y, Doshi A, et al. The characteristics and outcomes of parainfluenza virus infections in 200 patients with leukemia or recipients of hematopoietic stem cell transplantation. Blood. 2012;119(12):2738–45. pmid:22246027.
19. Seo S, Xie H, Campbell AP, Kuypers JM, Leisenring WM, Englund JA, et al. Parainfluenza virus lower respiratory tract disease after hematopoietic cell transplant: viral detection in the lung predicts outcome. Clin Infect Dis. 2014;58(10):1357–68. pmid:24599766.
20. Shah DP, Shah PK, Azzi JM, Chemaly RF. Parainfluenza virus infections in hematopoietic cell transplant recipients and hematologic malignancy patients: A systematic review. Cancer Lett. 2016;370(2):358–64. pmid:26582658.
21. Mandell LA, Wunderink RG, Anzueto A, Bartlett JG, Campbell GD, Dean NC, et al. Infectious Diseases Society of America/American Thoracic Society consensus guidelines on the management of community-acquired pneumonia in adults. Clin Infect Dis. 2007;44 Suppl 2:S27–72. pmid:17278083.
22. Sampsonas F, Kontoyiannis DP, Dickey BF, Evans SE. Performance of a standardized bronchoalveolar lavage protocol in a comprehensive cancer center: a prospective 2-year study. Cancer. 2011;117(15):3424–33. pmid:21246537.
23. Hansell DM, Bankier AA, MacMahon H, McLoud TC, Muller NL, Remy J. Fleischner Society: glossary of terms for thoracic imaging. Radiology. 2008;246(3):697–722. pmid:18195376.
24. Cox DR. Regression models and life-tables. J R Stat Soc B. 1972;34(2):187–220.
25. Allison P. Survival Analysis Using the SAS System: A Practical Guide. Cary, NC: SAS Institute Inc.; 1995.
26. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;1(8476):307–10. pmid:2868172.
27. Aujesky D, Auble TE, Yealy DM, Stone RA, Obrosky DS, Meehan TP, et al. Prospective comparison of three validated prediction rules for prognosis in community-acquired pneumonia. Am J Med. 2005;118(4):384–92. pmid:15808136.
28. Buising KL, Thursky KA, Black JF, MacGregor L, Street AC, Kennedy MP, et al. A prospective comparison of severity scores for identifying patients with severe community acquired pneumonia: reconsidering what is meant by severe pneumonia. Thorax. 2006;61(5):419–24. pmid:16449258.
29. Fine MJ, Auble TE, Yealy DM, Hanusa BH, Weissfeld LA, Singer DE, et al. A prediction rule to identify low-risk patients with community-acquired pneumonia. N Engl J Med. 1997;336(4):243–50. pmid:8995086.
30. Lim WS, van der Eerden MM, Laing R, Boersma WG, Karalus N, Town GI, et al. Defining community acquired pneumonia severity on presentation to hospital: an international derivation and validation study. Thorax. 2003;58(5):377–82. pmid:12728155.
31. Neill AM, Martin IR, Weir R, Anderson R, Chereshsky A, Epton MJ, et al. Community acquired pneumonia: aetiology and usefulness of severity criteria on admission. Thorax. 1996;51(10):1010–6. pmid:8977602.
32. Shindo Y, Sato S, Maruyama E, Ohashi T, Ogawa M, Imaizumi K, et al. Comparison of severity scoring systems A-DROP and CURB-65 for community-acquired pneumonia. Respirology. 2008;13(5):731–5. pmid:18713094.
33. Metersky ML, Waterer G, Nsa W, Bratzler DW. Predictors of in-hospital vs postdischarge mortality in pneumonia. Chest. 2012;142(2):476–81. pmid:22383662.
34. Specchia G, Pastore D, Carluccio P, Mele G, Montagna MT, Liso A, et al. Pneumonia in acute leukemia patients during induction therapy: experience in a single institution. Leuk Lymphoma. 2003;44(1):97–101. pmid:12691147.
35. Puig N, De La Rubia J, Jarque I, Salavert M, Moscardo F, Sanz J, et al. Characteristics of and risk factors for pneumonia in patients with hematological malignancies developing fever after autologous blood stem cell transplantation. Leuk Lymphoma. 2007;48(12):2367–74. pmid:18067012.
36. Hardak E, Avivi I, Berkun L, Raz-Pasteur A, Lavi N, Geffen Y, et al. Polymicrobial pulmonary infection in patients with hematological malignancies: prevalence, co-pathogens, course and outcome. Infection. 2016. pmid:26792011.
37. Chalmers JD, Taylor JK, Mandal P, Choudhury G, Singanayagam A, Akram AR, et al. Validation of the Infectious Diseases Society of America/American Thoracic Society minor criteria for intensive care unit admission in community-acquired pneumonia patients without major criteria or contraindications to intensive care unit care. Clin Infect Dis. 2011;53(6):503–11. pmid:21865188.
38. Fine MJ, Smith MA, Carson CA, Mutha SS, Sankey SS, Weissfeld LA, et al. Prognosis and outcomes of patients with community-acquired pneumonia. A meta-analysis. JAMA. 1996;275(2):134–41. pmid:8531309.
39. Garcia-Vidal C, Fernandez-Sabe N, Carratala J, Diaz V, Verdaguer R, Dorca J, et al. Early mortality in patients with community-acquired pneumonia: causes and risk factors. Eur Respir J. 2008;32(3):733–9. pmid:18508820.
40. Mannu GS, Loke YK, Curtain JP, Pelpola KN, Myint PK. Prognosis of multi-lobar pneumonia in community-acquired pneumonia: a systematic review and meta-analysis. Eur J Intern Med. 2013;24(8):857–63. pmid:23747042.
41. Sibila O, Meduri GU, Mortensen EM, Anzueto A, Laserna E, Fernandez JF, et al. Improving the 2007 Infectious Disease Society of America/American Thoracic Society severe community-acquired pneumonia criteria to predict intensive care unit admission. J Crit Care. 2013;28(3):284–90. pmid:23265290.
42. El Solh AA, Aquilina AT, Gunen H, Ramadan F. Radiographic resolution of community-acquired bacterial pneumonia in the elderly. J Am Geriatr Soc. 2004;52(2):224–9. pmid:14728631.
43. Abi-Dargham A, Horga G. The search for imaging biomarkers in psychiatric disorders. Nat Med. 2016;22(11):1248–55. pmid:27783066.
44. Hayden GE, Wrenn KW. Chest radiograph vs. computed tomography scan in the evaluation for pneumonia. J Emerg Med. 2009;36(3):266–70. pmid:18571356.
45. Heussel CP, Kauczor HU, Heussel G, Fischer B, Mildenberger P, Thelen M. Early detection of pneumonia in febrile neutropenic patients: use of thin-section CT. AJR Am J Roentgenol. 1997;169(5):1347–53. pmid:9353456.
46. Self WH, Courtney DM, McNaughton CD, Wunderink RG, Kline JA. High discordance of chest x-ray and computed tomography for detection of pulmonary opacities in ED patients: implications for diagnosing pneumonia. Am J Emerg Med. 2013;31(2):401–5. pmid:23083885.
47. Dignan FL, Clark A, Aitken C, Gilleece M, Jayakar V, Krishnamurthy P, et al. BCSH/BSBMT/UK clinical virology network guideline: diagnosis and management of common respiratory viral infections in patients undergoing treatment for haematological malignancies or stem cell transplantation. Br J Haematol. 2016;173(3):380–93. pmid:27060988.
48. Hirsch HH, Martino R, Ward KN, Boeckh M, Einsele H, Ljungman P. Fourth European Conference on Infections in Leukaemia (ECIL-4): guidelines for diagnosis and treatment of human respiratory syncytial virus, parainfluenza virus, metapneumovirus, rhinovirus, and coronavirus. Clin Infect Dis. 2013;56(2):258–66. pmid:23024295.