
Using a patient-reported outcome to improve detection of cognitive impairment and dementia: The patient version of the Quick Dementia Rating System (QDRS)

  • James E. Galvin ,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Methodology, Resources, Writing – original draft, Writing – review & editing

    jeg200@miami.edu

    Affiliation Comprehensive Center for Brain Health, Department of Neurology, University of Miami Miller School of Medicine, Miami, Florida, United States of America

  • Magdalena I. Tolea,

    Roles Data curation, Formal analysis, Writing – review & editing

    Affiliation Comprehensive Center for Brain Health, Department of Neurology, University of Miami Miller School of Medicine, Miami, Florida, United States of America

  • Stephanie Chrisphonte

    Roles Data curation, Project administration, Writing – review & editing

    Affiliation Comprehensive Center for Brain Health, Department of Neurology, University of Miami Miller School of Medicine, Miami, Florida, United States of America

Abstract

Introduction

Community detection of mild cognitive impairment (MCI) and Alzheimer’s disease and related disorders (ADRD) is a challenge. While Gold Standard assessments are commonly used in research centers, these methods are time consuming, require extensive training, and are not practical in most clinical settings or in community-based research projects. Many of these methods require an informant (e.g., spouse, adult child) to provide ratings of the patients’ cognitive and functional abilities. A patient-reported outcome that captures the presence of cognitive impairment and corresponds to Gold Standard assessments could improve case ascertainment, clinical care, and recruitment into clinical research. We tested the patient version of the Quick Dementia Rating System (QDRS) as a patient-reported outcome to detect MCI and ADRD.

Methods

The patient QDRS was validated in a sample of 261 consecutive patient-caregiver dyads compared with the informant version of the QDRS, the Clinical Dementia Rating (CDR), neuropsychological tests, and Gold Standard measures of function, behavior, and mood. Psychometric properties including item variability, floor and ceiling effects, construct, concurrent, and known-groups validity, and internal consistency were determined.

Results

The patient QDRS strongly correlated with Gold Standard measures of cognition, function, mood, behavior, and global staging methods (p-values < .001) and had strong psychometric properties with excellent data quality and internal consistency (Cronbach alpha = 0.923, 95% CI: 0.91–0.94). The patient QDRS had excellent agreement with the informant QDRS and with the CDR and its sum of boxes (Intraclass Correlation Coefficients: 0.781–0.876). Receiver operating characteristic curves showed excellent discrimination between normal controls and CDR 0.5 (AUC: 0.820; 95% CI: 0.74–0.90) and between normal controls and any cognitive impairment (AUC: 0.885; 95% CI: 0.83–0.94).

Discussion

The patient QDRS validly and reliably differentiates individuals with and without cognitive impairment and can be completed by patients through all stages of dementia. The patient QDRS is highly correlated with Gold Standard measures of cognition, function, behavior, and global staging. The patient QDRS provides a rapid method to screen patients for MCI and ADRD in clinical practice, determine study eligibility, and improve case ascertainment in community studies.

Background

Alzheimer’s disease and related dementias (ADRD) currently affect over 5.7 million Americans and over 35 million people worldwide [1]. The number of ADRD cases is expected to increase as the number of people over age 65 grows by 62% and the number over age 85 grows by 84% [1, 3]. More than one in eight adults over age 65 has dementia, and current projections indicate a three-fold increase by 2050 [1]. Community detection of mild cognitive impairment (MCI) [4] and early Alzheimer’s disease (AD) [5] and related disorders may be limited by the lack of screening tests that characterize the earliest signs of impairment, monitor response to interventions, and correspond to biomarkers [6, 7], and by uncertainty about the potential benefits versus harms of screening [3]. The inability to detect MCI and ADRD may affect eligibility determination for care and services and impede case ascertainment and recruitment into clinical research. Primary care providers are often responsible for the detection, diagnosis, and treatment of ADRD, as the number of dementia specialists (neurologists, psychiatrists, and geriatricians) and specialty centers is not sufficient to meet the growing demand [2].

Gold Standard evaluations such as the Clinical Dementia Rating (CDR) [8] are used in many research projects but require a trained clinician to administer, interpret, and score, and require an extended period of time with both an informant and the patient. While feasible in a research setting such as a clinical trial or longitudinal observational study, the CDR is not practical in primary care settings or for use in epidemiologic case-ascertainment projects. Briefer evaluation tools are often used in these settings. These tools can be grouped into performance-based assessments, such as the Mini-Mental State Exam [9] or Mini-Cog [10], and interview-based assessments, usually with an informant, such as the AD8 [11, 12], Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) [13], or Quick Dementia Rating System (QDRS) [14].

There are limitations to both brief approaches. Performance measures can be biased by education, language, and culture, which can lower their accuracy in underrepresented groups. Brief tests may not provide a sense of change or functional impairment if prior testing has not been done [2]. It has been reported that up to 38% of patients refuse cognitive screening tests in primary care offices [15, 16]. Furthermore, ADRD can be insidious in onset, with symptoms fluctuating over time [17]. Informant-based measures depend on the patient being able to identify a reliable, observant informant, and in many cases patients may not be accompanied by an informant in a clinical setting.

Patient Reported Outcome (PRO) approaches may be able to overcome the above barriers to early detection of ADRD in primary care practices [18, 19]. While performance tests carry biases associated with age, education, race, language, and culture, PROs are intraindividual assessments based on what the patient believes is occurring over a defined time period, or in comparison to a prior time [2]. PROs may provide valid information on patient functional status and well-being, can be used to enhance care quality, and have been proposed for use in assessing performance. They could be beneficial in detecting cognitive impairment if they adequately capture cognitive symptoms and functional impairment and correlate with the Gold Standard assessments commonly used to establish diagnoses [2]. The QDRS was initially tested as an informant rating; here, we examined its utility as a self-rated PRO scale to detect MCI and ADRD compared with the informant version of the QDRS, the CDR, and neuropsychological testing.

Methods

Study participants

This study was conducted in 270 consecutive patient-caregiver dyads attending our center for clinical care or participation in cognitive aging research. During the visit, the patient and caregiver underwent a comprehensive evaluation including the Clinical Dementia Rating (CDR) and its sum of boxes (CDR-SB) [8], the Global Deterioration Scale (GDS) [20], mood assessment, neuropsychological testing, caregiver ratings of patient behavior and function, and a caregiver psychosocial and needs assessment. All components of the assessment are part of standard of care at our center [21], and protocols in the clinic and research projects are identical. A waiver of consent was obtained for clinic patients, and research participants provided written informed consent. This study was approved by the University of Miami Institutional Review Board.

Administration of QDRS

Prior to the in-person visit, a welcome packet was mailed to the patient and caregiver to collect demographics and medical history and included both the caregiver [21] and patient versions of the QDRS. The respondents were given directions to complete the questionnaires independently of each other. The packets including the QDRS were returned prior to the appointment. The QDRS was not considered in the clinical evaluation, staging or diagnosis of the patient.

Clinical assessment

The in-person clinical assessments are modelled on the Uniform Data Set (UDS) 3.0 from the NIA Alzheimer Disease Center program [22, 23]. The CDR [8] was used to determine the presence or absence of dementia and to stage its severity: a global CDR 0 indicates no dementia; CDR 0.5 represents MCI or very mild dementia; and CDR 1, 2, or 3 corresponds to mild, moderate, or severe dementia. The CDR-SB was calculated by adding the individual CDR category scores, giving a score from 0–18 with higher scores indicating more severe stages. The GDS [20] was determined to provide a global cognitive and functional stage: GDS 1 indicates no cognitive impairment; GDS 2 indicates subjective cognitive impairment; GDS 3 corresponds to mild cognitive impairment; and GDS 4–7 corresponds to mild, moderate, moderately severe, or severe dementia [20]. Diagnoses were determined using standard criteria for MCI [4], AD [5], dementia with Lewy bodies (DLB) [24], vascular dementia (VaD) [25], and frontotemporal degeneration (FTD) [26]. Extrapyramidal features were assessed with the Movement Disorders Society-Unified Parkinson’s Disease Rating Scale, motor subscale part III (UPDRS) [27]. The Charlson Comorbidity Index [28] was used to measure overall health and medical comorbidities. The risk of vascular contributions to dementia was assessed with the modified Hachinski scale [29]. The presence of physical frailty was assessed with the Fried Frailty Scale [30].
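The CDR-SB arithmetic described above is simple to express in code. A minimal sketch (the function and domain names are illustrative, not from the paper; note also that the real CDR restricts some domains, e.g., Personal Care has no 0.5 level, which this sketch does not enforce):

```python
# Sketch of the CDR sum-of-boxes (CDR-SB) calculation: each of the six
# CDR domains is rated 0, 0.5, 1, 2, or 3, and the ratings are summed
# to give a 0-18 score, with higher scores indicating greater severity.

CDR_DOMAINS = [
    "memory", "orientation", "judgment_problem_solving",
    "community_affairs", "home_hobbies", "personal_care",
]

VALID_RATINGS = {0, 0.5, 1, 2, 3}

def cdr_sum_of_boxes(ratings: dict) -> float:
    """Sum the six CDR domain ratings into a CDR-SB score (0-18)."""
    total = 0.0
    for domain in CDR_DOMAINS:
        score = ratings[domain]
        if score not in VALID_RATINGS:
            raise ValueError(f"invalid rating {score!r} for {domain}")
        total += score
    return total
```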

Cognitive assessment

Each patient was administered a 30-minute test battery at the time of the office visit to assess their cognitive status. The psychometrician was unaware of the diagnosis, CDR global score, or QDRS scores. The Montreal Cognitive Assessment [31] was used for a global screen. The rest of the battery was modeled after the UDS battery used in the NIA Alzheimer Disease Centers [23] supplemented with additional measures: 15-item Multilingual Naming Test (naming) [23]; Animal naming and Letter fluency (verbal fluency) [23]; Hopkins Verbal Learning Task (episodic memory for word lists–immediate, delayed, and cued recall) [32]; Number forward/backward and Months backwards tests (working memory) [23]; Trailmaking A and B (processing and visuospatial abilities) [33]; and a novel Number-Symbol Coding Test (executive function). Mood was assessed with the Hospital Anxiety Depression Scale [34] providing subscale scores for depression (HADS-D) and anxiety (HADS-A).

Caregiver ratings of patient cognition, function, and behavior

Standardized scales were administered to the caregivers to provide ratings of cognition, function, and behavior. In addition to the caregiver version of the QDRS, activities of daily living were captured with the Functional Activities Questionnaire (FAQ) [35]. Dementia-related behaviors and psychological features were measured with the Neuropsychiatric Inventory (NPI) [36]. Patient daytime sleepiness was assessed with the Epworth Sleepiness Scale (ESS) [37], while daytime alertness was rated on a 0–10 Likert scale (“Rate the patient’s general level of alertness for the past 3 weeks on a scale from 0 to 10”) anchored by “Fully and normally awake” (scored 10) and “Sleep all day” (scored 0) [38]. Caregiver burden was captured with the 12-item Zarit Burden Inventory [39].

Statistical analyses

Analyses were conducted with IBM SPSS Statistics v26 (Armonk, NY). Descriptive statistics were used to examine patient and caregiver demographic characteristics, informant rating scales, dementia staging, and neuropsychological testing. One-way analysis of variance (ANOVA) with Tukey-Kramer post-hoc tests was used for continuous data, and Chi-square analyses were used for categorical data. Data completeness was assessed by calculating the rate of missing data for each QDRS item. To assess item variability, item frequency distributions, ranges, and standard deviations were calculated. Patient and informant QDRS and CDR-SB scores were examined for floor and ceiling effects. Factor analysis using principal components with a Varimax rotation was performed, revealing a one-factor solution. Total QDRS scores and individual items were examined for their psychometric properties and compared with patient and caregiver characteristics, rating scales, and neuropsychological test performance. QDRS-derived CDR and CDR-SB scores were computed using the first six QDRS domains. The Toileting and Personal Hygiene QDRS domain has a 0.5 category that the CDR Personal Care domain does not; to compare these domains, Toileting and Personal Hygiene scores of 0 or 0.5 were recoded as 0 [14].
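The QDRS-to-CDR mapping just described (sum the first six QDRS domain scores after recoding a Toileting and Personal Hygiene score of 0.5 to 0) can be sketched as follows; the function name, argument order, and the position of the hygiene domain are illustrative assumptions:

```python
# Sketch of the QDRS-derived CDR-SB computation: sum the first six QDRS
# domain scores after recoding a 0.5 in Toileting and Personal Hygiene
# to 0, since the CDR Personal Care domain has no 0.5 category.

def qdrs_derived_cdr_sb(first_six_scores: list, hygiene_index: int = 5) -> float:
    """Derive a CDR-SB-style score (0-18) from the first six QDRS domains.

    `hygiene_index` marks the Toileting and Personal Hygiene domain; its
    position here is an assumption for illustration.
    """
    scores = list(first_six_scores)  # copy; do not mutate the caller's list
    if scores[hygiene_index] == 0.5:
        scores[hygiene_index] = 0
    return sum(scores)
```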

Concurrent (criterion) validity was assessed by comparing mean performance on each Gold Standard measure of cognition (e.g., CDR, GDS, neuropsychological testing), function (i.e., FAQ), behavior (e.g., NPI, HADS), and caregiver ratings (e.g., ESS, ZBI) with the patient version of the QDRS using Pearson correlation coefficients [14, 40, 41]. Internal consistency was examined as the proportion of the variability in the responses that results from differences in the respondents, reported as the Cronbach alpha reliability coefficient; coefficients greater than 0.7 indicate good internal consistency [14, 40, 41]. The intraclass correlation coefficient (ICC) assessed inter-scale reliability, comparing the patient and informant versions of the QDRS individual questions, total score, and QDRS-derived CDR and CDR-SB with the independently determined CDR global score and CDR-SB, correcting for chance agreement [14, 40, 41]. Simple agreement (i.e., the proportion of responses in which two observations agree, such as a Pearson or Spearman correlation coefficient) is strongly influenced by the distribution of positive and negative responses and by agreement due to chance alone. The ICC instead examines the proportion of responses in agreement relative to the agreement expected by chance [14, 40, 41]. An ICC between 0.55 and 0.75 is considered good agreement, whereas an ICC greater than 0.76 is considered excellent [42]. Receiver operating characteristic (ROC) curves were used to assess discrimination between CDR stages by the patient QDRS. Three analyses were performed: the first discriminated CDR 0 vs CDR 0.5, which is generally the most difficult staging distinction; the second discriminated CDR 0 from CDR >0; the third examined the discrimination properties of the patient QDRS, the MoCA, and the two combined. Results are reported as area under the curve (AUC) with 95% confidence intervals (CIs). Known-groups validity was assessed by examining QDRS scores by CDR staging and dementia etiology [14, 40, 41]. Multiple comparisons were addressed using the Bonferroni correction.
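The Cronbach alpha used in these analyses follows the standard formula, alpha = k/(k−1) · (1 − Σ item variances / total-score variance). A small NumPy sketch of that formula (an illustration, not the authors' SPSS procedure):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

As noted above, values greater than 0.7 are conventionally taken as good internal consistency.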

Results

Sample characteristics

The mean age of the patients was 75.7±8.9 years, with 15.4±2.7 years of education. The mean age of the caregivers was 55.5±15.1 years, with 16.0±2.6 years of education. The sample was 96.5% White, 3.1% African American, and 0.4% Asian, with 10.4% reporting Hispanic ethnicity. The cognitively impaired group (CDR >0) had a higher proportion of White patients than the CDR 0 healthy controls (94.9% vs 81.0%), while African American patients made up a higher proportion of the healthy controls (11.9% vs 1.4%, χ2 = 14.1, p = .001). The patients had a mean CDR-SB of 4.5±4.7, a mean informant QDRS score of 6.4±6.3, a mean patient QDRS score of 4.5±4.9, and a mean MoCA score of 18.5±7.1. Caregivers were mostly spouses (65.2%), adult children (21.0%), or other individuals (13.8%), with 69.1% reporting living with the patient and having daily contact. The sample covered a range of healthy controls (CDR 0, n = 41), MCI or very mild dementia (CDR 0.5, n = 119), mild dementia (CDR 1, n = 59), moderate dementia (CDR 2, n = 35), and severe dementia (CDR 3, n = 17). Consensus clinical diagnoses included 41 Healthy Controls, 88 MCI, 42 AD, 71 DLB, 18 VaD, 9 FTD, and 1 undefined dementia. All CDR 0 patients were able to complete the patient QDRS, while 9 individuals with cognitive impairment (one CDR 0.5, two CDR 1, two CDR 2, and four CDR 3) could not, leaving a total of 261 patients who completed the patient QDRS. Diagnoses of the patients who could not complete the patient QDRS included 2 AD, 3 DLB, 1 VaD, and 3 FTD. Table 1 lists mean performances on all patient and caregiver rating scales used in this study by CDR staging. Both the informant and patient versions of the QDRS increased in score across CDR stages. Table 2 demonstrates the strength of association between the patient version of the QDRS and other indices of cognition, behavior, and function. The patient QDRS was strongly correlated with all rating scales and neuropsychological tests.

QDRS data quality

Table 3 demonstrates that all items of the QDRS exhibited the full range of possible responses across the five QDRS response options, with few missing items (range 0–0.7%), even in individuals with moderate to severe dementia. Item-level floor effects ranged from 40.2% (Memory and Recall) to 79.8% (Toileting and Personal Hygiene). Item-level ceiling effects ranged from 0.4% (Activities Outside the Home) to 5.7% (Function at Home and Hobby Activities). The standard deviation was similar for all items, ranging from 0.5 to 0.8. Thus, data quality for the QDRS was good to excellent.

Table 3. Item distributions, missing rates, factor loading, item-total, and inter-item correlations.

https://doi.org/10.1371/journal.pone.0240422.t003

Reliability and scale score feature of the patient QDRS

The internal consistency of the patient QDRS, a measure based on the correlation between the different QDRS questions, was assessed with the Cronbach alpha (Table 4). The internal consistency was excellent at 0.92, comparable to the informant QDRS (0.95) and CDR-SB (0.96). The patient QDRS covered the range of possible scores, and the mean, median, and standard deviation demonstrated a sufficient dispersion of scores for assessing the patient’s self-rating of their cognitive status, with a low percentage of missing data. There was a modest floor effect (18.4%) and a very low ceiling effect (0%); these ranges were similar to the informant QDRS and the CDR-SB. The patient QDRS was strongly correlated with both the informant QDRS and the CDR-SB.

Table 4. QDRS scale score features: Internal-consistency reliability, score distributions, and inter-scale correlations.

https://doi.org/10.1371/journal.pone.0240422.t004

Construct (inter-scale) validity of the patient QDRS

The informant and patient versions of the QDRS were compared to each other and to the CDR global score using Intraclass Correlation Coefficients (ICC) in Table 5. ICCs between patient and informant QDRS, patient QDRS and CDR global score, and informant QDRS and CDR are excellent for individual items, total QDRS scores, the QDRS-derived CDR global score and CDR-SB. The lowest ICC is for memory (ICC = 0.69) between the patient QDRS and the CDR global score. These analyses demonstrate the patient QDRS has high rates of agreement with both the informant QDRS and the Gold Standard CDR global score.

Table 5. Construct reliability (by ICC) between QDRS versions and CDR.

https://doi.org/10.1371/journal.pone.0240422.t005

The range of patient QDRS and CDR-SB scores by global CDR stage is shown in Table 6. Both the patient QDRS and CDR-SB demonstrate a range of scores within each global CDR stage, reflecting the range of symptoms self-reported by the patient (QDRS) or determined by the clinician (CDR-SB). To aid in interpreting QDRS scores, we performed ROC analyses to derive cut-off scores that can assist clinicians and researchers. For discriminating CDR 0 normal controls (with and without subjective complaints) from CDR 0.5 very mild impairment (which includes MCI and very mild dementia), a cut-off score of 1.5 provides the best sensitivity and specificity (AUC 0.823; 95% CI 0.74–0.90, p < .001) and is identical to the cut-off for the informant QDRS [14]. As the patient QDRS may be used in clinical practices and research projects to screen for cognitive impairment, we repeated the ROC analyses discriminating CDR 0 from any non-0 CDR stage. A cut-off of 1.5 again provides the best combination of sensitivity and specificity (AUC 0.888; 95% CI 0.84–0.94, p < .001), demonstrating excellent ability to discriminate normal controls from individuals with any form of cognitive impairment. We repeated these analyses using consensus diagnoses instead of CDR global scores. For discriminating healthy controls from MCI, a cut-off score of 1.5 provides the best sensitivity and specificity (AUC 0.821; 95% CI 0.73–0.89, p < .001). Discriminating healthy controls from individuals with any form of cognitive impairment had an AUC of 0.889 (95% CI 0.84–0.94, p < .001).
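Applied as a screen, the reported cut-off reduces to a one-line rule. A sketch (the convention that a total exactly at 1.5 counts as positive is our assumption; the paper reports the cut-off but not the tie-handling convention):

```python
# Illustrative screening rule using the cut-off reported above.
QDRS_CUTOFF = 1.5  # best sensitivity/specificity for CDR 0 vs CDR >0

def screens_positive(total_qdrs: float, cutoff: float = QDRS_CUTOFF) -> bool:
    """Flag a patient QDRS total for a follow-up cognitive evaluation."""
    return total_qdrs >= cutoff
```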

We then examined whether combining the patient QDRS with a brief performance test, the MoCA, could improve the detection of cognitive impairment more than either alone. For discriminating CDR 0 normal controls (with and without subjective complaints) from CDR 0.5 very mild impairment (which includes MCI and very mild dementia), the QDRS provided an AUC of 0.820 (0.74–0.90) and the MoCA provided an AUC of 0.888 (0.87–0.95). Combining the patient QDRS with the MoCA provided excellent discrimination, with an AUC of 0.928 (0.89–0.97). We repeated the ROC analyses discriminating CDR 0 from any non-0 CDR stage: the QDRS provided an AUC of 0.885 (0.83–0.94) and the MoCA provided an AUC of 0.932 (0.89–0.97). Combining the patient QDRS with the MoCA again provided excellent discrimination, with an AUC of 0.962 (0.94–0.98).
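AUC values like those above can be computed from raw scores with the Mann-Whitney formulation of the AUC. The paper does not state how the QDRS and MoCA were combined, so the combination sketched below (summing standardized scores, with the MoCA reversed because lower MoCA scores indicate worse performance) is only one common, illustrative choice, not the authors' method:

```python
import numpy as np

def roc_auc(scores: np.ndarray, labels: np.ndarray) -> float:
    """AUC via the Mann-Whitney formulation: the probability that a randomly
    chosen impaired case (label 1) scores higher than a control (label 0),
    counting ties as one half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def combined_score(qdrs: np.ndarray, moca: np.ndarray) -> np.ndarray:
    """Illustrative QDRS+MoCA combination: z-score each measure so higher
    values mean more impairment, then sum."""
    z = lambda x: (x - x.mean()) / x.std()
    return z(qdrs) + z(-moca)  # MoCA reversed: lower MoCA = more impaired
```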

Known-groups validity of the patient QDRS

The performance of the QDRS questions, total QDRS, and QDRS-derived CDR and CDR-SB scores by dementia etiology is shown in Table 7. In general, QDRS questions performed similarly across different dementia etiologies; however, several questions appear helpful for differential diagnosis following post-hoc analyses. QDRS question 3 (Decision Making) is more frequently endorsed by individuals with VaD. QDRS question 4 (Activities Outside the Home) is most frequently endorsed by DLB patients and least endorsed by FTD patients. QDRS question 5 (Function at Home and Hobbies) is least frequently endorsed by FTD patients. Questions 8 (Language and Communication) and 10 (Attention and Concentration) are more frequently endorsed by DLB and FTD patients. DLB patients are more likely to endorse problems with behavior (QDRS question 7) and mood (QDRS question 9) and have higher total QDRS scores. Interestingly, although the difference did not reach statistical significance, AD patients tended to report the lowest scores, suggesting that impaired insight may be a greater issue in AD than in the other dementias.

Table 7. Performance of patient QDRS across different dementia etiologies.

https://doi.org/10.1371/journal.pone.0240422.t007

Discussion

The patient version of the QDRS is a brief dementia detection tool that validly and reliably differentiates individuals with normal cognition from those with MCI and dementia. The patient version of the QDRS strongly correlated with Gold Standard assessments of cognition (e.g., CDR, neuropsychological testing), function (i.e., FAQ), and behavior (i.e., NPI) and showed strong psychometric properties and excellent data quality. The patient QDRS ratings had excellent agreement with independently obtained informant versions of the QDRS and with the CDR global score and its sum of boxes. Discriminability of the patient QDRS for healthy controls vs. CDR 0.5 and for CDR >0 yielded cut-off scores identical to those of the informant version. Finally, combining the patient QDRS with a brief performance measure such as the MoCA further increased the accuracy of dementia detection in a valid and reliable fashion.

Evaluation of dementia typically consists of objective testing of the patient and, when available, questioning of a reliable informant [2]. While informant interviews provide a more reliable way to determine cognitive and functional change in dementia patients, informants are not always available. Brief office visits such as annual check-ups, often without an informant present, may not uncover very mild symptoms of dementia. In a recent report, the Alzheimer Association surveyed 1000 primary health care providers and 1954 older adults regarding expectations, benefits, and practices around dementia screening [2, 3]. While 94% of patients saw their providers in the last year, only 47% discussed memory and only 28% received a memory assessment. This contrasts with 95% of older adults wanting to know about their memory and 51% reporting changes. Although 50% of providers reported they assess cognition as part of their evaluation, only 40% were familiar with the toolkits available to them. Additionally, many patients refuse cognitive testing for a variety of reasons, particularly if it is “sprung” upon them in the midst of a routine office visit. A PRO approach may provide a means of capturing cognitive impairment in an unaccompanied patient presenting to the office [43–45] and could provide an “opening” for providers to discuss the issue of memory loss. PROs can create efficient and cost-effective clinical encounters while also empowering patients and family caregivers to engage in early detection of ADRD [46–48].
Completion of the patient QDRS prior to the physician visit can offer several advantages above and beyond what is captured through medical records review and simple questions, including (a) capture of non-memory symptoms (e.g., orientation, problem-solving, daily functioning) that are disturbing to patients and families and are more likely to be accepted as a change requiring medical attention; (b) information about the patient’s perception of their real-world functioning; (c) information at baseline visits where prior testing may not be available; (d) capture of progression over time; and (e) staging of ADRD in a brief, valid, and time- and cost-effective manner [14, 49]. This is an important point, as in the era of COVID-19 nearly all evaluations are done remotely. We recently completed a study of 288 individuals with community-based assessments by non-physician clinicians and remote follow-up calls [3] and found that participants were willing to have their memory evaluated, complete the measures, and complete the phone follow-up, without evidence of harm.

To date, self-rating scales for dementia have not gained common use, perhaps due to the general perception that dementia patients lack insight and deny cognitive decline, even in mild forms of dementia [50, 51]. However, awareness of deficits varies greatly between individuals, and patients can offer reliable accounts of cognitive change, whether or not they perceive the change as a problem [2, 50]. The AD8 has demonstrated validity as a PRO [46], as has the Healthy Aging Brain Care Monitor [47]. Large multisite studies such as the Alzheimer’s Disease Cooperative Study and the Alzheimer Disease Neuroimaging Initiative ask participants to provide self-ratings of cognitive complaints using the Cognitive Function Instrument [52] or the Cognitive Change Index [53]. The Self-Administered Gerocognitive Examination [54] has been used to identify individuals with MCI and early-stage ADRD by testing orientation, language, cognition, visuospatial-construction, executive, and memory domains without any staff supervision. Additionally, patients with cognitive impairment are asked to self-rate a number of physical, psychological, and social symptoms including mood [55] and quality of life [56]. In this study, even patients with severe dementia (CDR 3) were able to complete the QDRS with little missing data.

There are several limitations in this study. The patient QDRS was validated in the context of an academic research setting where the prevalence of MCI and dementia is high and the patients tend to be highly educated and predominantly White. Validation of the patient QDRS in other settings where dementia prevalence is lower (i.e., community samples) and the sample is more diverse is needed. There is the potential for recall bias, as patients may tell the physician what they think the physician wants to hear or may not recall symptoms accurately. In this paper, we tested for this by comparing the patient QDRS with an independently collected caregiver QDRS and the physician-directed Gold Standard evaluation. As this is a cross-sectional study, the longitudinal properties of the patient QDRS still need to be elucidated. The majority of cases consisted of MCI, AD, and DLB; there were fewer VaD cases and only progressive aphasic forms of FTD, so other dementia types need to be studied. The patient QDRS was completed prior to the in-person evaluation. While instructions were provided to complete the QDRS independently, we cannot be sure that patients did not ask others for help answering the questions. Finally, AD patients endorsed the fewest self-reported symptoms. Although the QDRS scores for AD patients performed well compared with neuropsychological testing and the CDR global score and its sum of boxes, denial or anosognosia [50, 51] in AD patients may limit reliability in the more advanced stages of disease.

Strengths of this study include the use of a comprehensive evaluation that is part of standard of care, with measurement of multiple patient and caregiver constructs using Gold Standard instruments. Another advantage of the QDRS is its brevity: its 10 questions fit on one piece of paper or a single screen, maximizing its clinical and research utility, and can be answered by patients even in the severe stages of dementia. Although not designed as a differential diagnostic tool, the QDRS as a PRO may assist clinicians with diagnosis during the initial visit, as patients with different dementia etiologies self-reported symptoms differently. The patient QDRS may serve as an effective clinical tool for dementia screening and case ascertainment in epidemiological studies, in busy primary care settings, and in instances where an informant is not available. Combining the QDRS with a brief performance measure may provide excellent power to detect cognitive impairment. The patient QDRS performed reliably and validly in comparison to the standardized scales of a comprehensive cognitive neurology evaluation, but in a brief fashion that could facilitate its use in clinical care and research.

Acknowledgments

We thank the patients, caregivers, participants and study partners that contributed to this study.

References

  1. Alzheimer Association 2019 Facts and Figures. https://www.alz.org/alzheimers-dementia/facts-figures. Accessed July 13, 2020.
  2. Galvin JE. Using informant and performance screening methods to detect mild cognitive impairment and dementia. Curr Rep Gerontol 2018;7:19–25.
  3. Galvin JE, Tolea MI, Chrisphonte SC. What do older adults do with results from dementia screening. PLoS One 2020;15:e0235534. pmid:32609745
  4. Albert MS, DeKosky ST, Dickson D, Dubois B, Feldman HH, Fox NC, et al. The diagnosis of mild cognitive impairment due to Alzheimer’s disease: recommendations from the National Institute on Aging–Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimers Dement 2011;7:270–279. pmid:21514249
  5. McKhann GM, Knopman DS, Chertkow H, Hyman BT, Jack CR Jr, Kawas CH, et al. The diagnosis of dementia due to Alzheimer’s disease: recommendations from the National Institute on Aging–Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimers Dement 2011;7:263–269. pmid:21514250
  6. Galvin JE, Fagan AM, Holtzman DM, et al. Relationship of dementia screening tests with biomarkers of Alzheimer’s disease. Brain 2010;133:3290–3300. pmid:20823087
  7. Galvin JE. Dementia screening, biomarkers and protein misfolding: Implications for public health and diagnosis. Prion 2011;5:16–21. pmid:21164279
  8. Morris JC. The Clinical Dementia Rating (CDR): Current version and scoring rules. Neurology 1993;43:2412–2414.
  9. Folstein MF, Folstein SE, McHugh PR. "Mini-mental state". A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res 1975;12:189–198. pmid:1202204
  10. Rosenbloom M, Barclay TR, Borson S, Werner AM, Erickson LO, Crow JM, et al. Screening Positive for Cognitive Impairment: Impact on Healthcare Utilization and Provider Action in Primary and Specialty Care Practices. J Gen Intern Med 2018;33:1746–1751. pmid:30097978
  11. Galvin JE, Roe CM, Powlishta KK, et al. The AD8: a brief informant interview to detect dementia. Neurology 2005;65:559–564. pmid:16116116
  12. Galvin JE, Roe CM, Xiong C, Morris JC. The validity and reliability of the AD8 informant interview for dementia. Neurology 2006;67:1942–1948. pmid:17159098
  13. Jorm AF, Scott R, Cullen JS, MacKinnon AJ. Performance of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) as a screening test for dementia. Psychol Med 1991;21:785–790. pmid:1946866
  14. 14. Galvin JE. The Quick Dementia Rating System (QDRS): A rapid dementia staging tool. Alzheimer Dem (DADM) 2015; 1:249–259.
  15. 15. Fowler NR, Perkins AJ, Gao S, Sachs GA, and Boustani MA. Risks and Benefits of Screening for Dementia in Primary Care: The Indiana University Cognitive Health Outcomes Investigation of the Comparative Effectiveness of Dementia Screening (IU CHOICE) Trial. J Am Geriatr Soc. 2019; 00:1–9.
  16. 16. Harrawood A, Fowler NR, Perkins AJ, LaMantia MA, Boustani MA. Acceptability and Results of Dementia Screening Among Older Adults in the United States. Curr Alzheimer Res. 2018; 15(1): 51–55. pmid:28891444
  17. 17. Holsinger T, Deveau J, Boustani M, Williams JW. Does this patient have dementia? JAMA 2007; 297:2391–2404. pmid:17551132
  18. 18. Keller S, Dy S, Wilson R, Dukhanin V, Snyder C, Wu AJ. Selecting patient-reported outcome measures to contribute to primary care performance measurement: A mixed models approach. Gen Intern Med. 2020 Jun 3. Online ahead of print. pmid:32495096
  19. 19. Black N. Patient reported outcome measures could help transform healthcare. BMJ. 2013;346:f167. pmid:23358487
  20. 20. Reisberg B. Global measures: utility in defining and measuring treatment response in dementia. Int Psychogeriatr. 2007;19:421–456. pmid:17480241
  21. 21. Galvin JE, Valois L, Zweig Y. Collaborative transdisciplinary team approach for dementia care. Neurodegener Dis Manag. 2014;4:455–469. pmid:25531688
  22. 22. Beekly DL, Ramos EM, Lee WW, et. al; NIA Alzheimer's Disease Centers. The National Alzheimer's Coordinating Center (NACC) database: The Uniform Data Set. Alzheimer Dis Assoc Disord. 2007;21:249–258. pmid:17804958
  23. 23. Weintraub S, Besser L, Dodge HH, et. al. Version 3 of the Alzheimer Disease Centers' Neuropsychological Test Battery in the Uniform Data Set (UDS). Alzheimer Dis Assoc Disord. 2018;32:10–17. pmid:29240561
  24. 24. McKeith IG, Boeve BF, Dickson DW, et. al. Diagnosis and management of dementia with Lewy bodies: Fourth consensus report of the DLB Consortium. Neurology. 2017;89:88–100. pmid:28592453
  25. 25. Skrobot OA, O'Brien J, Black S, et al. The Vascular Impairment of Cognition Classification Consensus Study. Alzheimers Dement. 2017;13:624–633. pmid:27960092
  26. 26. Olney NT, Spina S, Miller BL. Frontotemporal Dementia. Neurol Clin. 2017;35:339–374. pmid:28410663
  27. 27. Goetz CG, Tilley BC, Shaftman SR, et al; Movement Disorder Society UPDRS Revision Task Force. Movement Disorder Society-sponsored revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS): scale presentation and clinimetric testing results. Mov Disord. 2008;23:2129–2170. pmid:19025984
  28. 28. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373–383. pmid:3558716
  29. 29. Rosen WG, Terry RD, Fuld PA, Katzman R, Peck A. Pathological verification of ischemic score in differentiation of dementias. Ann Neurol. 1980;7:486–488. pmid:7396427
  30. 30. Fried LP, Tangen CM, Walston J, et al, Cardiovascular Health Study Collaborative Research Group. Frailty in older adults: Evidence for a phenotype. J Gerontol A Biol Sci Med Sci 2001;56:M146–156. pmid:11253156
  31. 31. Nasreddine ZS, Phillips NA, Bédirian V, et al. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. 2005;53:695–699. pmid:15817019
  32. 32. Shapiro AM, Benedict RH, Schretlen D, Brandt J. Construct and concurrent validity of the Hopkins Verbal Learning Test-revised. Clin Neuropsychol. 1999;13:348–358. pmid:10726605
  33. 33. Reitan RM, Validity of the trail making test as an indication of organic brain damage, Perceptual and Motor Skills, 1958;8:271–276.
  34. 34. Snaith RP. The Hospital Anxiety and Depression Scale. Health Qual Life Outcomes. 2003;1;1:29.
  35. 35. Tappen RM, Rosselli M, Engstrom G. Evaluation of the Functional Activities Questionnaire (FAQ) in cognitive screening across four American ethnic groups. Clin Neuropsychol. 2010;24:646–661. pmid:20473827
  36. 36. Kaufer DI, Cummings JL, Ketchel P, et al. Validation of the NPI-Q, a brief clinical form of the Neuropsychiatric Inventory. J Neuropsychiatry Clin Neurosci. 2000;12:233–239. pmid:11001602
  37. 37. Johns MW. A new method for measuring daytime sleepiness: the Epworth sleepiness scale. Sleep. 1991;14:540–545. pmid:1798888
  38. 38. Boeve BF, Molano JR, Ferman TJ, et al. Validation of the Mayo Sleep Questionnaire to screen for REM sleep behavior disorder in a community-based sample. J Clin Sleep Med. 2013;9:475–480. pmid:23674939
  39. 39. Herbert R, Bravo G, Preville M. Reliability, validity, and reference values of the Zarit Burden Interview for assessing informal caregivers of community-dwelling older persons with dementia. Can J Aging 2000;19:494–507.
  40. 40. Monahan PO, Boustani M, Alder C, et al. A practical clinical tool to monitor dementia symptoms: The HABC-Monitor. Clin Interv Aging 2012;7:143–157. pmid:22791987
  41. 41. Streiner DL, Norman GR. Health measurement scale: a practical guide to their development and use. 4th ed. Oxford, England: Oxford University Press; 2008.
  42. 42. Fleiss JL, Chilton NW. The measurement of interexaminer agreement on periodontal disease. J Periodontal Res 1983;18:601–6. pmid:6230433
  43. 43. Pogatzki-Zahn E, Schnabel K, Kaiser U. Patient-reported outcome measures for acute and chronic pain: current knowledge and future directions. Curr Opin Anaesthesiol. 2019;32:616–622. pmid:31415046
  44. 44. Marrero DG, Hilliard ME, Maahs DM, McAuliffe-Fogarty AH, Hunter CM. Using patient reported outcomes in diabetes research and practice: Recommendations from a national workshop. Diabetes Res Clin Pract. 2019;153:23–29. pmid:31128133
  45. 45. Smith AW, Jensen RE. Beyond methods to applied research: Realizing the vision of PROMIS®. Health Psychol. 2019;38:347–350. pmid:31045416
  46. 46. Galvin JE, Roe CM, Coats MA, Morris JC. Patient’s rating of cognitive ability. Arch Neurol 2007;64:725–750. pmid:17502472
  47. 47. Monahan PO, Alder CA, Khan BA, Stump T, Boustani MA. The Healthy Aging Brain Care (HABC) monitor: validation of the patient self-report version of the clinical tool designed to measure and monitor cognitive, functional, and psychological health. Clin Interv Aging. 2014; 9:2123–32. pmid:25584024
  48. 48. Holmes MM, Stanescu S, Bishop FL. The use of measurement systems to support patient self-management of long-term conditions: an overview opportunities and challenges. Patient Relat Outcome Meas. 2019;10:385–394. pmid:31908555
  49. 49. Berman SE, Koscik RL, Clark LR, Mueller KD, Bluder L, Galvin JE, et al. Use of the Quick Dementia Rating System (QDRS) in the Wisconsin Registry for Alzheimer’s Prevention. J Alz Dis Report. 2017; 1:9–13.
  50. 50. Kelleher M, Tolea MI, Galvin JE. Anosognosia increases caregiver burden in mild cognitive impairment. Int J Geriatr Psych, 2016;31:799–808.
  51. 51. Ganguli M, Du Y, Rodriguez EG, et al. Discrepancies in information provided to primary care physicians by patients with and without dementia: The Steel Valley Seniors Survey. Am J Geriatr Psychiatry. 2006;14:446–455. pmid:16670249
  52. 52. Li C, Neugroschl J, Luo X, Zhu C, Aisen P, Ferris S, et al. The Utility of the Cognitive Function Instrument (CFI) to detect decline in non-demented older adults. J Alzheimers Dis 2017;60:427–437. pmid:28854503
  53. 53. Rattanabannakit C, Risacher SL, Gao S, Lane KA, Brown SA, McDonald BC, et al. The Cognitive Change Index as a measure of self and informant perception of cognitive decline: relation to neuropsychological tests. J Alzheimers Dis. 2016;51:1145–55. pmid:26923008
  54. 54. Scharre DW, Chang SI, Murden RA, Lamb J, Beversdorf DQ, Kataki M, et al. Self-administered gerocognitive examination (SAGE): a brief cognitive assessment instrument for mild cognitive impairment (MCI) and early dementia. Alzheimer Dis Assoc Disord. 2010;24:64–71. pmid:20220323
  55. 55. Yesavage JA. Geriatric Depression Scale. Psychopharmacol Bull. 1988;24:709–711. pmid:3249773
  56. 56. James BD, Xie SX, Karlawish JH. How do patients with Alzheimer disease rate their overall quality of life? Am J Geriatr Psychiatry. 2005;13:484–490. pmid:15956268