
Computerized decision support is an effective approach to select memory clinic patients for amyloid-PET

  • Hanneke F. M. Rhodius-Meester ,

    Roles Conceptualization, Formal analysis, Methodology, Project administration, Visualization, Writing – original draft

    h.rhodius@amsterdamumc.nl

    Affiliations Alzheimer Center Amsterdam, Neurology, Amsterdam UMC Location VUmc, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands, Amsterdam Neuroscience, Neurodegeneration, Amsterdam, The Netherlands, Department of Internal Medicine, Geriatric Medicine Section, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands, Department of Geriatric Medicine, The Memory Clinic, Oslo University Hospital, Oslo, Norway

  • Ingrid S. van Maurik,

    Roles Conceptualization, Formal analysis, Methodology, Visualization, Writing – review & editing

    Affiliations Alzheimer Center Amsterdam, Neurology, Amsterdam UMC Location VUmc, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands, Amsterdam Neuroscience, Neurodegeneration, Amsterdam, The Netherlands, Epidemiology and Data Science, Amsterdam UMC Location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands, Amsterdam Public Health, Methodology, Amsterdam, The Netherlands

  • Lyduine E. Collij,

    Roles Formal analysis, Methodology, Software, Writing – review & editing

    Affiliation Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands

  • Aniek M. van Gils,

    Roles Resources, Writing – review & editing

    Affiliations Alzheimer Center Amsterdam, Neurology, Amsterdam UMC Location VUmc, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands, Amsterdam Neuroscience, Neurodegeneration, Amsterdam, The Netherlands

  • Juha Koikkalainen,

    Roles Formal analysis, Methodology, Software, Validation, Writing – review & editing

    Affiliation Combinostics Ltd., Tampere, Finland

  • Antti Tolonen,

    Roles Formal analysis, Methodology, Writing – review & editing

    Affiliation Combinostics Ltd., Tampere, Finland

  • Yolande A. L. Pijnenburg,

    Roles Supervision, Writing – review & editing

    Affiliations Alzheimer Center Amsterdam, Neurology, Amsterdam UMC Location VUmc, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands, Amsterdam Neuroscience, Neurodegeneration, Amsterdam, The Netherlands

  • Johannes Berkhof,

    Roles Methodology, Writing – review & editing

    Affiliations Epidemiology and Data Science, Amsterdam UMC Location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands, Amsterdam Public Health, Methodology, Amsterdam, The Netherlands

  • Frederik Barkhof,

    Roles Data curation, Methodology, Writing – review & editing

    Affiliations Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands, Queen Square Institute of Neurology and Centre for Medical Image Computing, University College London, London, United Kingdom

  • Elsmarieke van de Giessen,

    Roles Data curation, Software, Writing – review & editing

    Affiliations Alzheimer Center Amsterdam, Neurology, Amsterdam UMC Location VUmc, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands, Amsterdam Neuroscience, Neurodegeneration, Amsterdam, The Netherlands, Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands

  • Jyrki Lötjönen,

    Roles Methodology, Software, Supervision, Writing – review & editing

    Affiliation Combinostics Ltd., Tampere, Finland

  • Wiesje M. van der Flier

    Roles Conceptualization, Supervision, Writing – review & editing

    Affiliations Alzheimer Center Amsterdam, Neurology, Amsterdam UMC Location VUmc, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands, Amsterdam Neuroscience, Neurodegeneration, Amsterdam, The Netherlands, Epidemiology and Data Science, Amsterdam UMC Location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands, Amsterdam Public Health, Methodology, Amsterdam, The Netherlands

Abstract

Background

The use of amyloid-PET in the dementia workup is increasing. At the same time, amyloid-PET is costly and of limited availability. While the appropriate use criteria (AUC) aim to optimize the use of amyloid-PET, their limited sensitivity hinders translation to clinical practice. There is therefore a need for tools that guide the selection of patients for whom amyloid-PET has the most clinical utility. We aimed to develop a computerized decision support approach to select patients for amyloid-PET.

Methods

We included 286 subjects (135 controls, 108 Alzheimer’s disease dementia, 33 frontotemporal dementia, and 10 vascular dementia) from the Amsterdam Dementia Cohort, with available neuropsychology, APOE, MRI, and [18F]florbetaben amyloid-PET. In our computerized decision support approach, using supervised machine learning based on the Disease State Index (DSI) classifier, we first classified the subjects using only neuropsychology, APOE, and quantified MRI. Then, for subjects with uncertain classification (probability of correct class (PCC) < 0.75), we enriched the classification by adding hypothetical amyloid-positive (AD-like) and amyloid-negative (normal) PET visual read results and assessed whether the diagnosis became more certain in at least one scenario (PCC ≥ 0.75). If this was the case, the actual visual read result was used in the final classification. We compared the proportion of PET scans and of patients diagnosed with sufficient certainty in the computerized approach with three scenarios: 1) without amyloid-PET, 2) amyloid-PET according to the AUC, and 3) amyloid-PET for all patients.

Results

The computerized approach advised PET in n = 60 (21%) patients, leading to a diagnosis with sufficient certainty in n = 188 (66%) patients. This approach was more efficient than the other three scenarios: 1) without amyloid-PET, diagnostic classification was obtained in n = 155 (54%), 2) applying the AUC resulted in amyloid-PET in n = 113 (40%) and diagnostic classification in n = 156 (55%), and 3) performing amyloid-PET in all resulted in diagnostic classification in n = 154 (54%).

Conclusion

Our computerized, data-driven approach selected 21% of memory clinic patients for amyloid-PET without compromising diagnostic performance. Our work contributes to cost-effective implementation of amyloid-PET and could support clinicians in making a balanced decision on ordering additional amyloid-PET during the dementia workup.

Introduction

The neuropathological hallmark of Alzheimer’s disease (AD), amyloid-beta, can be visualized with amyloid positron emission tomography (PET) [1–3]. After having shown clinical impact in memory clinic patients, the use of amyloid-PET in daily clinical practice is increasing, both for an accurate etiological diagnosis and to initiate disease-modifying treatment (DMT) [4–8]. At the same time, amyloid-PET is costly and of limited availability outside tertiary memory clinics. There is a need for tools that can aid clinicians in identifying which patients would benefit from amyloid-PET to ensure an accurate etiological diagnosis, whilst remaining efficient [9–12].

The Amyloid Imaging Task Force (AIT) has developed appropriate use criteria (AUC), based on expert opinion, to foster the optimal use of amyloid-PET [13]. Amyloid-PET is deemed appropriate in patients with possible AD ‘for whom substantial uncertainty exists and for whom greater confidence would result from determining whether amyloid pathology is present or not’. Also, amyloid-PET may be performed in young-onset dementia to increase diagnostic confidence [13]. Despite the efforts of the AIT, the AUC are not sufficiently able to discriminate between patients who would benefit from amyloid-PET and those who would not [14–16]. For example, we showed that in an unselected memory clinic cohort, patients not fulfilling the AUC also benefited from amyloid-PET [14]. Translation of the AUC to clinical practice is thus challenging, hampering successful implementation of this expensive test in memory clinics [17]. Studies have repeatedly shown that amyloid-PET increases diagnostic confidence. Nonetheless, it is likely that in some patients diagnostic confidence was already high enough before amyloid-PET, whilst in others confidence may remain low even after amyloid-PET. Knowledge of how amyloid status would impact the etiological diagnosis in individual patients would help clinicians decide which patients should undergo amyloid-PET and which should not.

We previously developed a computerized decision support approach to support clinicians in identifying patients most likely to benefit from cerebrospinal fluid (CSF) biomarkers [18]. This data-driven approach restricted CSF testing to 26% of cases without compromising diagnostic accuracy. In this work, we took a similar data-driven approach to predict which patients would benefit from amyloid-PET testing. More specifically, we tested whether this approach may help to answer the following question: if a clinician already has detailed information on neuropsychological tests, APOE and brain imaging, would additional amyloid-PET contribute to a more certain etiological diagnosis?

Material and methods

Subjects

We retrospectively included 286 subjects who visited our memory clinic seeking medical help between January 2015 and December 2016 (the Amsterdam Dementia Cohort) with a diagnosis of Alzheimer’s dementia (AD), frontotemporal dementia (FTD), vascular dementia (VaD), or subjective cognitive decline (SCD) [19, 20]. As part of the ABIDE (Alzheimer Biomarkers in Daily practice) project [5, 21], [18F]florbetaben PET was offered as part of clinical care to all consecutive memory clinic patients between January 2015 and December 2016. Subjects who had both amyloid-PET and brain MRI results available were included.

All subjects received a standardized work-up at baseline to come to a diagnosis, including medical history, physical, neurological, and neuropsychological assessment, MRI, laboratory tests, and amyloid-PET. A diagnosis of SCD was made when the cognitive complaints could not be confirmed by cognitive testing and criteria for mild cognitive impairment (MCI) or dementia were not met. Subjects with SCD served as controls. Probable AD was diagnosed using the core clinical criteria of the NIA-AA [22]. Probable FTD (including the behavioural variant of FTD, progressive non-fluent aphasia, and semantic dementia) was diagnosed using the criteria of Rascovsky et al. and Gorno-Tempini et al., respectively [23, 24]. VaD was diagnosed using the NINDS-AIREN criteria [25]. Since the classifier we used for this study (see below for a detailed description) is currently only able to classify controls and patients with AD, FTD, and VaD, subjects with other diagnoses, such as dementia with Lewy bodies (DLB), were not included.

The data in this study were collected during routine care and retrieved retrospectively. The Daily Board of the Medical Ethical Committee (METc) of VU University Medical Center (VUmc) provided an exemption from seeking formal approval. All patients provided written informed consent for their data to be used for research purposes. The authors had no access to information that could identify individual participants during or after data collection.

Neuropsychology testing

Cognitive functions were assessed with a brief standardized test battery, including widely used tests. We used the Mini-Mental State Examination (MMSE) for global cognitive functioning [26]. For memory, we applied the Rey auditory verbal learning task (RAVLT) [26]. To measure mental speed and executive functioning, we included Trail Making Tests A and B (TMT-A, TMT-B) [27]. Language and executive functioning were tested by category fluency (animals) [28]. Finally, for behavioral symptoms, we used the Neuropsychiatric Inventory (NPI) [29]. Missing data ranged from n = 3 (1%) (MMSE) to n = 75 (26%) (NPI).

APOE genotype

Apolipoprotein E (APOE) genotype was determined with the light cycler APOE mutation detection method (Roche diagnostics GmbH, Mannheim, Germany). Patients were dichotomized into APOE e4 carriers (hetero- and homozygous) and non-carriers. APOE data were available in 283 (99%) subjects.

Imaging markers

MR images were acquired on 1.5 T or 3 T scanners, including a 3D isotropic T1-weighted and a 2D or 3D FLAIR sequence. We extracted six imaging markers using the cNeuro® cMRI quantification tool, as described in [18]:

  • Computed medial temporal lobe atrophy (cMTA) was derived for the left and right hemispheres from the volumes of the hippocampus and inferior lateral ventricle, as described in [30, 31]. The volumes were obtained from a multi-atlas segmentation algorithm [32].
  • Computed global cortical atrophy (cGCA) measured the gray matter concentration based on the voxel-based morphometry (VBM) analysis [30, 31].
  • The AD similarity scale was computed by representing the patient image as a linear combination of regional volumes from a database of previously diagnosed patients [17, 38]. The AD similarity scale was defined as the share of the linear-model weights assigned to database cases with the diagnostic label AD (see the sketch after this list).
  • Anterior-posterior index was defined as a ratio of the cortical volumes at frontal and temporal lobe regions to those at parietal and occipital lobe regions [33].
  • The volume of white matter hyperintensities (WMH) was extracted from FLAIR images [31, 34].
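
The AD similarity scale and the anterior-posterior index can be illustrated with a short sketch. This is a hypothetical, simplified illustration and not the cNeuro® implementation: it assumes regional gray-matter volumes are already available as arrays and uses non-negative least squares as one plausible way to express a patient as a linear combination of previously diagnosed cases; the region choices, constraints, and weighting in the actual tool may differ.

```python
import numpy as np
from scipy.optimize import nnls  # non-negative least squares

def ad_similarity(patient_vols, db_vols, db_labels):
    """Share of the linear-combination weight carried by database cases labeled 'AD'.

    patient_vols : (n_regions,) regional volumes of the new patient
    db_vols      : (n_cases, n_regions) regional volumes of previously diagnosed cases
    db_labels    : (n_cases,) diagnostic labels, e.g. 'AD', 'FTD', 'VaD', 'CN'
    """
    weights, _ = nnls(np.asarray(db_vols, dtype=float).T, np.asarray(patient_vols, dtype=float))
    total = weights.sum()
    return 0.0 if total == 0 else weights[np.asarray(db_labels) == "AD"].sum() / total

def anterior_posterior_index(frontal, temporal, parietal, occipital):
    """Ratio of anterior (frontal + temporal) to posterior (parietal + occipital) cortical volume."""
    return (frontal + temporal) / (parietal + occipital)

if __name__ == "__main__":
    # Hypothetical example values in arbitrary units, for illustration only
    rng = np.random.default_rng(0)
    db_vols = rng.uniform(0.5, 1.5, size=(20, 10))
    db_labels = ["AD"] * 8 + ["FTD"] * 6 + ["CN"] * 6
    patient = rng.uniform(0.5, 1.5, size=10)
    print("AD similarity:", round(ad_similarity(patient, db_vols, db_labels), 2))
    print("Anterior-posterior index:", round(anterior_posterior_index(180.0, 120.0, 150.0, 90.0), 2))
```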

Amyloid-PET

Procedures for amyloid-PET using [18F]florbetaben have been described in detail elsewhere [5, 21]. Per standard protocol, 20-minute scans consisting of 4 × 5-minute frames were collected 90–110 minutes post-injection of approximately 300 MBq ± 20% [18F]florbetaben (Neuraceq, Life Molecular Imaging, Berlin, Germany). We used visual reads and repeated the analyses using Centiloids. Visual reads were available in all subjects, Centiloids in 248 (87%).

PET scans were visually assessed by a certified and experienced nuclear medicine physician blinded to the clinical diagnosis. Images were scaled based on the total white matter signal and displayed with a grey color scale. Transverse, sagittal, and coronal views were displayed using the software package Vinci 2.56. Images were rated as either positive (binding in one or more cortical brain regions unilaterally) or negative (predominantly white matter uptake), according to the criteria defined in the manufacturer’s label (Life Molecular Imaging).

For Centiloid quantification, all scans were pre-processed using a validated standard Centiloid pipeline and converted to the Centiloid scale [34]. Briefly, the four frames of the PET images were first averaged and co-registered to the corresponding T1-weighted scans. Then, the T1-weighted MRI scans were warped to standard space, and the same warp was applied to the co-registered PET image. These procedures were performed in SPM12. PET images were intensity-normalized with the whole cerebellum as the reference region, using the mask provided by the Centiloid method [34] (http://www.gaain.org/centiloid-project). Global cortical Centiloid values were calculated using the standard GAAIN cortical target region. The Centiloid calibration has been described previously [35].
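
As a rough illustration of the final quantification step, the sketch below computes a global SUVR with the whole cerebellum as reference region and applies a linear conversion to the Centiloid scale. The image arrays, masks, and the calibration slope and intercept are hypothetical placeholders; in the study itself, the validated SPM12-based Centiloid pipeline and the published [18F]florbetaben calibration were used [34, 35].

```python
import numpy as np

def global_centiloid(pet, target_mask, cerebellum_mask, slope, intercept):
    """Convert a spatially normalized PET image to a global Centiloid value.

    pet              : 3-D array of PET intensities in standard space
    target_mask      : boolean array for the standard GAAIN cortical target region
    cerebellum_mask  : boolean array for the whole-cerebellum reference region
    slope, intercept : tracer-specific linear calibration from SUVR to Centiloid
                       (placeholder values below; taken from the published calibration in practice)
    """
    suvr = pet[target_mask].mean() / pet[cerebellum_mask].mean()
    return slope * suvr + intercept

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pet = rng.uniform(0.5, 2.0, size=(8, 8, 8))
    target = np.zeros(pet.shape, dtype=bool); target[2:6, 2:6, 4:] = True
    cereb = np.zeros(pet.shape, dtype=bool); cereb[2:6, 2:6, :2] = True
    # slope/intercept are illustrative placeholders, not the published calibration
    print("Centiloid:", round(global_centiloid(pet, target, cereb, slope=150.0, intercept=-150.0), 1))
```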

Disease State Index classifier and probability of correct class

The Disease State Index (DSI) classifier was previously developed and validated in the European FP7 PredictND project [36, 37]. The DSI is a simple, supervised, data-driven machine learning method that compares different diagnostic groups with each other; in this work controls, AD, FTD, and VaD. There is no need to impute data or exclude cases with incomplete data, as the classifier can handle missing data. The classifier is based on a training set of diagnosed patients [36]. For each single test (e.g., neuropsychological test, APOE status, MRI), the similarity of each patient’s data to the distributions of the diagnostic groups in the training set is computed. When single tests are combined, tests with higher classification accuracy are weighted more heavily. First, a DSI value is calculated for each pairwise comparison (AD-controls, AD-FTD, AD-VaD, FTD-controls, FTD-VaD, VaD-controls). Then, the final DSI value for each diagnostic group (controls, AD, FTD, VaD) is calculated by averaging the corresponding pairwise DSI values (e.g., AD-controls, AD-FTD, and AD-VaD for AD), as described in [18]. As a result, a DSI value (a continuous value between zero and one) is obtained for each diagnostic group, estimating the likelihood of that specific diagnosis.
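
The pairwise-to-group structure described above can be sketched as follows. This is a simplified, hypothetical illustration and not the PredictND implementation: each feature's similarity ("state") is approximated here with Gaussian class-conditional densities fitted on the training set, the weight of each feature is its single-feature classification accuracy, missing values are simply skipped, and the group DSI is the average of the pairwise DSIs involving that group.

```python
import numpy as np
from itertools import combinations

GROUPS = ["CN", "AD", "FTD", "VaD"]

def fit_pairwise(train_X, train_y, a, b):
    """Per-feature Gaussian fits and single-feature accuracies for the comparison 'a vs b'."""
    Xa, Xb = train_X[train_y == a], train_X[train_y == b]
    params = []
    for j in range(train_X.shape[1]):
        xa, xb = Xa[:, j], Xb[:, j]
        xa, xb = xa[~np.isnan(xa)], xb[~np.isnan(xb)]
        mu_a, sd_a = xa.mean(), xa.std() + 1e-6
        mu_b, sd_b = xb.mean(), xb.std() + 1e-6
        thr = (mu_a + mu_b) / 2.0                      # midpoint threshold on this feature alone
        side = 1.0 if mu_a > mu_b else -1.0
        acc = (np.mean(side * (xa - thr) > 0) + np.mean(side * (xb - thr) < 0)) / 2.0
        params.append((mu_a, sd_a, mu_b, sd_b, acc))   # acc plays the role of the feature weight
    return params

def _gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / sd

def pairwise_dsi(x, params):
    """Accuracy-weighted average of per-feature states for one pairwise comparison; NaNs are skipped."""
    num = den = 0.0
    for xj, (mu_a, sd_a, mu_b, sd_b, acc) in zip(x, params):
        if np.isnan(xj):
            continue
        pa, pb = _gauss(xj, mu_a, sd_a), _gauss(xj, mu_b, sd_b)
        num += acc * pa / (pa + pb + 1e-12)            # state: closeness to the first group
        den += acc
    return num / den if den else 0.5

def group_dsi(x, train_X, train_y):
    """Final DSI per diagnostic group = mean of the pairwise DSIs involving that group."""
    models = {(a, b): fit_pairwise(train_X, train_y, a, b) for a, b in combinations(GROUPS, 2)}
    dsi = {}
    for g in GROUPS:
        vals = [pairwise_dsi(x, m) if g == a else 1.0 - pairwise_dsi(x, m)
                for (a, b), m in models.items() if g in (a, b)]
        dsi[g] = float(np.mean(vals))
    return dsi

if __name__ == "__main__":
    # Hypothetical training data: 4 groups, 5 features, shifted means per group
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5)) + np.repeat(np.arange(4), 50)[:, None] * 0.8
    y = np.repeat(GROUPS, 50)
    print(group_dsi(X[0], X, y))
```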

This study was performed using five-fold cross-validation, i.e., 80% of the dataset was used as the training set in the DSI classifier when classifying the remaining 20% of the patients. This was repeated five times so that each patient was classified once.

A high DSI value, or a large difference in DSI values between the two most likely diagnostic groups, provides more certainty in making a diagnosis than a low value or a small difference [18]. The probability that the diagnostic group with the highest DSI value is correct is defined as the probability of correct class (PCC). To estimate the PCC, cases in a reference database with a comparable highest DSI value and a comparable difference between the two highest DSI values are identified, and the share of these cases in which the suggested diagnosis matches the ground-truth diagnosis is calculated. The reference database consisted of 770 memory clinic patients (Amsterdam Dementia Cohort and PredictND) diagnosed according to the same guidelines as used in the current study [20, 37]. This dataset consisted of 308 controls and 338 AD, 89 FTD, and 35 VaD patients. The mean age was 65.8 ± 8.7 years, and 54% were female.
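
One way to operationalize this PCC estimate is sketched below, under assumptions not spelled out in the text: "comparable" reference cases are taken as the k nearest neighbours in the two-dimensional space of (highest DSI, difference between the two highest DSIs), and the PCC is the share of those neighbours whose suggested diagnosis matched the ground truth. The neighbourhood size and distance metric are illustrative choices.

```python
import numpy as np

def pcc_estimate(top_dsi, margin, ref_top_dsi, ref_margin, ref_correct, k=50):
    """Estimate the probability of correct class (PCC) from a reference database.

    top_dsi, margin         : highest DSI and difference between the two highest DSIs for the new case
    ref_top_dsi, ref_margin : the same quantities for all reference cases (1-D arrays)
    ref_correct             : boolean array, True where the suggested diagnosis matched ground truth
    k                       : neighbourhood size (an illustrative choice, not specified in the text)
    """
    d = np.hypot(np.asarray(ref_top_dsi) - top_dsi, np.asarray(ref_margin) - margin)
    nearest = np.argsort(d)[:k]
    return np.asarray(ref_correct, dtype=float)[nearest].mean()

if __name__ == "__main__":
    # Hypothetical reference database of 770 cases: higher DSI and larger margin -> more often correct
    rng = np.random.default_rng(2)
    ref_dsi = rng.uniform(0.4, 1.0, 770)
    ref_margin = rng.uniform(0.0, 0.5, 770)
    ref_correct = rng.random(770) < 0.3 + 0.4 * ref_dsi + 0.4 * ref_margin
    print("Estimated PCC:", round(pcc_estimate(0.85, 0.20, ref_dsi, ref_margin, ref_correct), 2))
```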

In this study, patients were considered to have a diagnosis with sufficient certainty if the PCC was ≥0.75. This cutoff was a compromise between the number of diagnosed patients and accuracy. In clinical practice, the clinician can adjust the applied PCC cutoff depending on the pre-test probability.

Diagnostic scenarios to select patients for amyloid-PET

We applied four diagnostic scenarios (Fig 1) in which patients were considered as having a diagnosis with sufficient certainty if the PCC was ≥0.75. As amyloid-PET measures, we used visual reads and repeated our analyses using Centiloid quantification.

  • Scenario A: In this Computer-supported decision approach, we performed amyloid-PET only when it was predicted to change the certainty of the diagnosis based on our data-driven method (Fig 1A). Patients were regarded as sufficiently certain cases if the PCC was ≥0.75 based on APOE, neuropsychology, and MRI (step one). When the PCC was <0.75, the computer tool added both a hypothetical positive and a hypothetical negative amyloid-PET result, and hypothetical PCCs were computed for both (step two). If either of these hypothetical PCC values reached ≥0.75, the actually observed amyloid-PET value was added, after which the DSI and PCC were computed (step three); a code sketch of this decision logic is given below, after Fig 1. On repeating this scenario using Centiloid values, we took the mean Centiloid value for AD patients (69.40 ± 39.3) and the mean Centiloid value for controls (11.95 ± 24.31) as hypothetical values in step two.

We compared this approach with the following three control scenarios:

  • Scenario B: In the No amyloid-PET approach, we calculated the DSI and PCC for each patient using only APOE, neuropsychology, and MRI, excluding amyloid-PET (Fig 1B).
  • Scenario C: In the AUC scenario, we performed amyloid-PET based on the AUC criteria (Fig 1C). We classified patients as AUC-positive (AUC+) or AUC-negative (AUC-) according to [14]. In that study, patients were classified during pre-PET multidisciplinary meetings as AUC+ when they either i) had AD as a diagnostic possibility (≥15%) but with a confidence <85% in AD as the diagnosis, or ii) had a young-onset dementia (<65 years old). All other patients were classified as AUC- [38]. In patients classified as AUC+, we calculated the PCC after adding amyloid-PET (step two). For AUC- patients, no amyloid-PET was added.
  • Scenario D: In the All amyloid-PET approach (Fig 1D), we calculated the DSI and PCC for each patient using APOE, neuropsychology, MRI, and amyloid-PET.
Fig 1. Flow chart for the four diagnostic approaches, using amyloid-PET visual read, summarizing the results in the last column.

AUC: appropriate use criteria, AUC+: patients fulfilling appropriate use criteria according to [13], operationalized as described in [14], PCC: probability of correct class, NP: neuropsychology, MRI: magnetic resonance imaging, Sim: simulate, FU: follow-up. Numbers in circles denote groups described in Table 2.

https://doi.org/10.1371/journal.pone.0303111.g001

For all four approaches, we reported the number of patients diagnosed with sufficient certainty and the number of patients in whom PET was performed.
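
As referred to above, a minimal sketch of the scenario A decision logic is given below. It assumes a hypothetical classify function (for instance, wrapping the DSI classifier and PCC estimate sketched earlier) that accepts a feature dictionary, optionally including an "amyloid_pet" entry, and returns a PCC; the function and its argument names are illustrative, not the implemented tool.

```python
def scenario_a(features, actual_pet, classify, threshold=0.75):
    """Computer-supported decision approach (scenario A) for a single patient.

    features   : dict with APOE, neuropsychology and MRI measures (no amyloid-PET)
    actual_pet : observed amyloid-PET result ('positive' or 'negative'), used only if PET is advised
    classify   : callable returning the probability of correct class (PCC) for a feature dict
    Returns (pet_advised, final_pcc, diagnosed_with_sufficient_certainty).
    """
    # Step one: classify on APOE, neuropsychology and MRI only
    pcc = classify(features)
    if pcc >= threshold:
        return False, pcc, True

    # Step two: add hypothetical positive and negative amyloid-PET reads
    pcc_pos = classify({**features, "amyloid_pet": "positive"})
    pcc_neg = classify({**features, "amyloid_pet": "negative"})
    if max(pcc_pos, pcc_neg) < threshold:
        # Neither hypothetical result would make the diagnosis sufficiently certain:
        # no PET is advised; follow-up or other testing instead (group 2 in Fig 1A)
        return False, pcc, False

    # Step three: PET advised; add the actually observed result and re-classify
    final_pcc = classify({**features, "amyloid_pet": actual_pet})
    return True, final_pcc, final_pcc >= threshold
```

When Centiloid quantification is used instead of visual reads, the hypothetical entries would be the mean Centiloid value of AD patients and of controls, as described for step two above.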

Statistical analyses

Further scrutinizing the data of scenario A, we tested differences in baseline characteristics, diagnosis, and DSI between patients with sufficiently certain diagnosis based directly on neuropsychology and MRI (step one in computerized decision support approach, group 1 in Fig 1A), patients not eligible for amyloid-PET testing (step two, group 2) and patients with actual amyloid-PET testing (step three, groups 3 and 4).

Lastly, we visualized the impact of different PCC cutoffs on the proportion of patients diagnosed (percentage of patients above the PCC cutoff) and the proportion of patients with an amyloid-PET measurement, for all four diagnostic approaches described above.
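
This cutoff analysis amounts to a threshold sweep. A minimal sketch follows, reusing the hypothetical scenario_a function from the sketch above; it is re-run for every cutoff because, in scenario A, both the share of patients diagnosed and the share receiving PET depend on the chosen PCC cutoff.

```python
import numpy as np

def sweep_cutoffs(patients, classify, cutoffs=np.arange(0.50, 1.001, 0.05)):
    """Share of patients diagnosed and share with PET performed, per PCC cutoff (scenario A).

    patients : list of (features, actual_pet) tuples, one per patient
    classify : PCC-returning callable, as in the scenario A sketch above
    """
    diagnosed, with_pet = [], []
    for c in cutoffs:
        results = [scenario_a(f, pet, classify, threshold=c) for f, pet in patients]
        with_pet.append(np.mean([pet_advised for pet_advised, _, _ in results]))
        diagnosed.append(np.mean([certain for _, _, certain in results]))
    return np.asarray(cutoffs), np.asarray(diagnosed), np.asarray(with_pet)
```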

MRI markers were normalized for age, sex, and head size [39]. Statistical analyses were performed using SPSS version 22 (IBM, Armonk, NY, USA), STATA version 14.1, and R version 3.5.3. The MATLAB toolbox described in [40] was used for the DSI analyses; these analyses were performed in MATLAB version R2018b (MathWorks, Natick, MA, USA).

Results

Baseline characteristics

In the study sample, the mean age was 64 ± 8 years and 129 (45%) subjects were female. Table 1 shows details of the baseline characteristics of this sample, stratified per diagnostic group.

Table 1. Baseline characteristics according to baseline diagnosis.

https://doi.org/10.1371/journal.pone.0303111.t001

Diagnostic approaches to select patients for amyloid-PET

In our search for the optimal approach to select patients for amyloid-PET, we applied four diagnostic approaches. Fig 1 shows the flowchart of these four approaches and summarizes the number of patients with a sufficiently certain diagnosis (PCC ≥0.75) and the number of patients selected for amyloid-PET. In these results, the amyloid-PET biomarker was the visual read. First, we applied the computerized decision support approach (scenario A). Using demographics, APOE, neuropsychology, and MRI only, the diagnostic prediction was sufficiently certain (PCC ≥0.75) in 155 (54%) cases (step one). In the 131 (46%) remaining cases, hypothetical positive and negative amyloid-PET values were added (step two); this increased the PCC to ≥0.75 in at least one scenario in 60 (46%) cases, for whom an amyloid-PET scan was thus advised. When the actually observed amyloid-PET values were added to the model (step three), we observed a PCC ≥0.75 in 33 (55%) of these patients. Overall, the computerized approach led to a diagnosis with sufficient confidence in 188 (66%) patients while performing PET in 60 (21%) patients, with correct classification of 152 patients.

We compared our data-driven approach with three control scenarios. In scenario B, the scenario without amyloid-PET, we used demographics, APOE, neuropsychology, and MRI only, and found a diagnosis with sufficient confidence in 155 patients (54%), of whom 125 were correctly classified. Scenario C, applying amyloid-PET based on the AUC, led to amyloid-PET in a larger group of 113 (40%) patients, yet not to a higher proportion of patients with a certain diagnosis (156, 55%), and correctly classified 138 patients. Scenario D, performing amyloid-PET in all patients, again did not lead to more diagnoses with sufficient confidence (154, 54%), with correct classification in 142 patients.

Using Centiloid values instead of visual reads yielded similar results (see S1 Fig).

Differences in patient groups using the computerized decision support approach

Following the flowchart of the computerized decision support approach in Fig 1A, four distinct groups can be distinguished across the three steps, marked 1-2-3-4 in the figure and summarized in Table 2. The first group comprises patients with a sufficiently certain diagnosis using only demographics, APOE, neuropsychology, and MRI. This group was the largest (n = 155) and contained patients with all types of diagnoses. These patients had the largest difference in DSI value between the first and second suggested diagnoses. Presumably, this group had a clear, distinct profile, both clinically and on imaging, and little co-morbidity. This group contrasts with the second group, containing the patients in whom adding hypothetical amyloid-PET values did not increase diagnostic certainty (n = 71). Here, the difference between the first and second DSI values was smallest, indicating co-morbid neuropathology or neuropsychological profiles that are hard to distinguish from each other. This group could not be diagnosed with certainty either with or without amyloid-PET, and follow-up or other testing is advised. The third group included the patients in whom the computerized approach suggested amyloid-PET based on the hypothetical positive/negative amyloid-PET results and in whom, after adding the actual PET values, the PCC increased to ≥0.75 (n = 33). In this group, the amyloid-PET scan was often positive (64%), and the group consisted mainly of patients with AD (23/33). The final group consisted of the patients for whom, despite the amyloid-PET scan, the diagnosis remained unclear (n = 27).

Table 2. Comparison of the different patient groups derived from the computerized decision support approach using visual reads, matching Fig 1A.

https://doi.org/10.1371/journal.pone.0303111.t002

Effect of different PCC cutoffs on diagnosis and amyloid-PET

For the results described above, we used PCC ≥0.75 to define which diagnostic predictions were accurate and sufficiently certain. Of note, this is an arbitrary choice. To study the effect of the PCC cutoff, we repeated all our analyses for PCC cutoffs ranging from 0.5 to 1.0. In Fig 2, we compare all four approaches in terms of the proportion of patients diagnosed with sufficient certainty (Fig 2A) and the number of amyloid-PET scans performed (Fig 2B) for different PCC cutoffs. As expected, the share of patients with a certain diagnosis declined with increasing PCC cutoff, independent of the scenario used. Overall, the proportion of diagnosed patients was largest when using the computerized decision support approach and lowest when performing no PET, independent of the PCC cutoff.

Fig 2. Visualization of the share of patients diagnosed (blue, 2A) and the share of patients with amyloid-PET performed (red, 2B) for different probability of correct class cutoffs, comparing computerized decision support, no amyloid-PET, AUC, and amyloid-PET for all patients.

Blue: proportion of patients diagnosed, Red: proportion of patients with amyloid-PET taken, PCC: probability of correct class. Solid lines show results for the computerized decision support (Fig 1A), dotted lines show results for using no amyloid-PET, but only demographics, APOE, neuropsychology and MRI (Fig 1B), dashed dotted lines show results for AUC (Fig 1C) and dashed lines using all data (Fig 1D).

https://doi.org/10.1371/journal.pone.0303111.g002

Example of visualization of computerized decision support in clinical practice

How the computerized decision support approach could be used in clinical practice is visualized in Fig 3. Case A, for example, is a 65-year-old female who experiences memory problems, but also scores low on fluency and high on the NPI, while MRI showed hardly any atrophy. Based on demographics, neuropsychology, and MRI, the classifier suggested an FTD diagnosis (DSI 0.72), yet with a minimal difference to the next most probable diagnosis, AD (DSI 0.71). Therefore, the probability of correct class (PCC) is low (0.51). When the tool adds hypothetical positive and negative amyloid-PET scans, the clinician can see that both a positive and a negative amyloid-PET result would influence the diagnostic certainty (PCC > 0.75 in both situations). The lower panel shows the results after addition of the actual amyloid-PET scan, which was positive in this case, leading to a high PCC (0.78) for an AD diagnosis (DSI 0.81). Case B is a 71-year-old female who had trouble performing the cognitive tests due to impaired understanding, yet surprisingly performed the TMT-A relatively fast, whereas MRI showed mild bitemporal atrophy. The classifier showed a low PCC (0.50) using only demographics, neuropsychology, and MRI, with equal DSI values for the AD and FTD diagnoses (DSI 0.63). In this case, adding a hypothetical negative amyloid-PET changed the PCC to >0.75 (albeit not for a positive PET scan). Based on this increase to >0.75 in one of the scenarios, the clinician is advised to order an amyloid-PET scan, which in this case was negative. A clinical diagnosis of probable FTD was confirmed.

Fig 3. Examples of visualization of the computerized decision approach for clinical use, applying hypothetical positive and negative amyloid-PET scan, based on visual reads.

NP: neuropsychology, MRI: magnetic resonance imaging, PCC: probability of correct class, AD: Alzheimer’s disease, FTD: Frontotemporal dementia, VAD: Vascular dementia, CN: control.

https://doi.org/10.1371/journal.pone.0303111.g003

Discussion

In this study, we presented a data-driven approach in which the diagnostic classification is enriched by adding hypothetical amyloid-positive (AD-like) and amyloid-negative (normal) PET results, to aid the clinician in deciding whether performing an actual amyloid-PET scan would contribute to a more certain diagnosis. Our computerized decision support approach advised performing an amyloid-PET scan in 21% of the patients without compromising the proportion of correctly classified cases. Our approach was thus more efficient than the other scenarios, in which we would have performed PET in all patients, in none, or according to the appropriate use criteria (AUC). When implemented in a computer tool, this approach can support clinicians in making a balanced decision on ordering additional (expensive) amyloid-PET testing using personalized patient data.

Approaches such as the data-driven approach we demonstrated in this study can aid in translating the appropriate use criteria (AUC) to clinical practice. The AUC state that amyloid-PET is deemed appropriate in patients with possible AD ‘for whom substantial uncertainty exists and for whom greater confidence would result from determining whether amyloid pathology is present or not’, and in young-onset dementia to increase diagnostic confidence [13]. How to operationalize these criteria is not clear. Several studies have shown that the current AUC select both too few and too many patients for amyloid-PET [4, 14, 16, 41]. In our study as well, 40% of the patients would require amyloid-PET according to the AUC, without this leading to a higher proportion of patients with a certain diagnosis.

Several prediction models for positive or negative amyloid-PET scans have been developed [42–44]. We add to this literature by developing a data-driven method with a different starting point, namely: what happens to the diagnosis if an amyloid-PET is normal or abnormal? This approach more naturally follows the way clinicians think: ‘would ordering an amyloid-PET scan help me in gaining a clearer and more certain diagnosis?’. We simulated a positive (AD-like) and a negative (normal) amyloid-PET to estimate whether knowledge of amyloid status might impact (confidence in) the diagnosis in an unselected memory clinic population, including controls and AD, FTD, and VaD patients. Our computerized approach (scenario A) led to 152 (53%) correctly classified subjects while performing amyloid-PET in only 60 (21%) subjects. Performing PET in all (scenario D) led to 142 (49%) correctly classified subjects, yet by performing amyloid-PET in 286 (100%) subjects. As can be seen in Fig 1, accuracy is slightly higher in scenario D, since overall fewer patients received a diagnosis in this approach. One can imagine that in case of multiple pathologies or borderline amyloid-PET results, adding amyloid-PET only adds confusion and thus leads to a lower number of certainly diagnosed patients. These findings show that it is possible to design scenarios in which expensive diagnostic tests are used only when they are likely to increase diagnostic certainty, in line with the appropriate use criteria stating that an additional test should only be performed when it will increase the confidence of the clinician in a certain diagnosis.

As the prevalence of dementia increases and new disease-modifying therapies (DMTs) are entering the clinic, there is an increasing need for a precise etiological diagnosis, while the diagnostic work-up needs to remain efficient [45]. To initiate treatment with DMTs, an accurate etiological diagnosis is crucial [46]. In a future clinical practice where DMTs are widely available [47], a data-driven approach will serve as a valuable tool for narrowing the target population for treatment. Our study presents a data-driven approach aimed at achieving a diagnosis in the most efficient manner without compromising diagnostic performance. In addition, such an approach may in the future streamline clinical decision-making pipelines with blood-based biomarkers by limiting the number of patients who require confirmatory testing [48]. However, detecting underlying (AD) pathology marks only the beginning. Once the diagnosis is made, the subsequent step will be to define eligibility. In a new EU project, we will further develop this stepwise approach to identify potentially eligible patients [49]. This approach will encompass other patients as well, such as those with mild cognitive impairment (MCI) and DLB, to address the relevant question of whether they have underlying AD or not. Novel decision models have to be developed to aid in this classification, and this work is ongoing.

The classifier used in this study is based on simple supervised machine learning and is able to deal with missing data. Providing a visualization of the approach, with a PCC cutoff that clinicians can alter, as in Fig 3, helps clinicians understand what the tool ‘thinks’, as opposed to a black box [9]. Visualization is also helpful in shared decision-making, guiding clinician and patient in discussing whether to perform amyloid-PET [50]. The visualization we have shown here can be further optimized in co-creation with end-users and through usability testing in clinical practice. The cutoff of 0.75 for the PCC was selected for this manuscript to demonstrate how the proposed computerized decision support algorithm typically performs. If the clinician prefers higher classification accuracy with fewer patients diagnosed, a higher cutoff should be used, and vice versa. To date, this method is not yet available for clinical use, but the previously developed data-driven approach to select patients for CSF testing [18] is available via the cNeuro® tool.

A strength of our study is the use of an unselected memory clinic cohort, consisting of controls and patients with AD, FTD, and VaD, reflecting clinical practice [51].

There are also limitations to discuss. First, we were not able to include other neurodegenerative diseases, such as DLB, or patients with MCI, since the classifier used does not yet include these groups. Tauopathies mimicking AD, such as argyrophilic grain disease (AGD), are not in our database and could therefore not be included. To better reflect ordinary clinical practice, development of the classifier to include more diagnostic groups is ongoing [52]. However, this study was set up to address the use of amyloid-PET in a number of common differential diagnostic dilemmas, in particular AD versus FTD versus VaD. Second, we included only patients from a tertiary memory clinic, which may hamper generalizability, yet this also reflects daily practice, since amyloid-PET is mainly ordered in tertiary memory clinics. Third, we classified patients as having one diagnosis, while patients seldom have a single type of neurodegeneration and often have comorbid pathology. Yet the DSI classifier provides room for comorbid pathology by providing a DSI for each diagnosis: co-existing pathologies would lead to multiple diagnoses with comparable DSI values, while pure disease would result in one DSI standing out compared with the others. Indeed, comorbid pathology was likely present in this cohort, given the often small differences in DSI between the first and second suggested diagnoses. In addition, amyloid-PET is not as specific as tau-PET, leading to more frequent discordance with the clinical diagnosis, which was also the case in our cohort. Fourth, while all patients received a standardized workup and were scanned with the same PET scanner, the MRI scanners differed. Yet we know from previous studies that our MRI quantification tool can deal with different scanners and field strengths [32]. Finally, we performed our analyses with visual reads, which are a dichotomous measure; this could be a disadvantage. However, visual reads are most often used in clinical practice and are therefore easy for clinicians to understand. In addition, repeating our analyses with continuous values, namely Centiloids, a linear transformation of SUVr, showed comparable results, supporting the validity of our findings.

Conclusion

With the current difficulties in selecting those who might benefit from amyloid-PET, and the future challenges of an increasing need for biomarker confirmation, for example in the context of initiating disease-modifying treatment, smart tools are needed to use resources efficiently and keep healthcare affordable. We developed a data-driven approach using patients’ own data and showed that ordering of amyloid-PET can be restricted to 21% of patients without compromising diagnostic performance. Future studies focusing on implementing such tools in clinical practice, to efficiently guide stepwise diagnostic testing, are the next step [49].

Supporting information

S1 Fig. Flow chart for the four diagnostic approaches, using Centiloid values, summarizing the results in the last column.

AUC: appropriate use criteria, AUC+: patients fulfilling appropriate use criteria according to [13], operationalized as described in [14], PCC: probability of correct class, NP: neuropsychology, MRI: magnetic resonance imaging, Sim: simulate, FU: follow-up.

https://doi.org/10.1371/journal.pone.0303111.s001

(PPTX)

Acknowledgments

Research of the Alzheimer’s Center Amsterdam is part of the neurodegeneration research program of Amsterdam Neuroscience. We thank Mahnaz Shekari for her significant contribution to the processing of the amyloid-PET images.

References

  1. Chiotis K, Saint-Aubert L, Boccardi M, Gietl A, Picco A, Varrone A, et al. Clinical validity of increased cortical uptake of amyloid ligands on PET as a biomarker for Alzheimer’s disease in the context of a structured 5-phase development framework. Neurobiol Aging. 2017;52:214–27. Epub 2017/03/21. pmid:28317650.
  2. Frisoni GB, Boccardi M, Barkhof F, Blennow K, Cappa S, Chiotis K, et al. Strategic roadmap for an early diagnosis of Alzheimer’s disease based on biomarkers. The Lancet Neurology. 2017;16(8):661–76. Epub 2017/07/20. pmid:28721928.
  3. Klunk WE. Amyloid imaging as a biomarker for cerebral beta-amyloidosis and risk prediction for Alzheimer dementia. Neurobiol Aging. 2011;32 Suppl 1:S20–36. Epub 2011/12/07. pmid:22078170.
  4. Rabinovici GD, Gatsonis C, Apgar C, Chaudhary K, Gareen I, Hanna L, et al. Association of Amyloid Positron Emission Tomography With Subsequent Change in Clinical Management Among Medicare Beneficiaries With Mild Cognitive Impairment or Dementia. JAMA. 2019;321(13):1286–94. Epub 2019/04/03. pmid:30938796.
  5. de Wilde A, van der Flier WM, Pelkmans W, Bouwman F, Verwer J, Groot C, et al. Association of Amyloid Positron Emission Tomography With Changes in Diagnosis and Patient Treatment in an Unselected Memory Clinic Cohort: The ABIDE Project. JAMA Neurol. 2018;75(9):1062–70. Epub 2018/06/12. pmid:29889941.
  6. Zwan MD, Bouwman FH, Konijnenberg E, van der Flier WM, Lammertsma AA, Verhey FR, et al. Diagnostic impact of [(18)F]flutemetamol PET in early-onset dementia. Alzheimer’s research & therapy. 2017;9(1):2. Epub 2017/01/18. pmid:28093088.
  7. Frisoni GB, Barkhof F, Altomare D, Berkhof J, Boccardi M, Canzoneri E, et al. AMYPAD Diagnostic and Patient Management Study: Rationale and design. Alzheimers Dement. 2019;15(3):388–99. Epub 2018/10/20. pmid:30339801.
  8. Altomare D, Barkhof F, Caprioglio C, Collij LE, Scheltens P, Lopes Alves I, et al. Clinical Effect of Early vs Late Amyloid Positron Emission Tomography in Memory Clinic Patients: The AMYPAD-DPMS Randomized Clinical Trial. JAMA Neurol. 2023;80(6):548–57. pmid:37155177.
  9. Shortliffe EH, Sepulveda MJ. Clinical Decision Support in the Era of Artificial Intelligence. JAMA. 2018;320(21):2199–200. Epub 2018/11/07. pmid:30398550.
  10. Chételat G, Arbizu J, Barthel H, Garibotto V, Law I, Morbelli S, et al. Amyloid-PET and (18)F-FDG-PET in the diagnostic investigation of Alzheimer’s disease and other dementias. Lancet Neurol. 2020;19(11):951–62. Epub 2020/10/26. pmid:33098804.
  11. Silva-Spínola A, Baldeiras I, Arrais JP, Santana I. The Road to Personalized Medicine in Alzheimer’s Disease: The Use of Artificial Intelligence. Biomedicines. 2022;10(2). Epub 2022/02/26. pmid:35203524.
  12. Hampel H, Au R, Mattke S, van der Flier WM, Aisen P, Apostolova L, et al. Designing the next-generation clinical care pathway for Alzheimer’s disease. Nature Aging. 2022;2(8):692–703. pmid:37118137
  13. Johnson KA, Minoshima S, Bohnen NI, Donohoe KJ, Foster NL, Herscovitch P, et al. Appropriate use criteria for amyloid PET: a report of the Amyloid Imaging Task Force, the Society of Nuclear Medicine and Molecular Imaging, and the Alzheimer’s Association. Alzheimers Dement. 2013;9(1):e-1–16. Epub 2013/01/31. pmid:23360977.
  14. de Wilde A, Ossenkoppele R, Pelkmans W, Bouwman F, Groot C, van Maurik I, et al. Assessment of the appropriate use criteria for amyloid PET in an unselected memory clinic cohort: The ABIDE project. Alzheimers Dement. 2019;15(11):1458–67. Epub 2019/10/09. pmid:31594684.
  15. Apostolova LG, Haider JM, Goukasian N, Rabinovici GD, Chetelat G, Ringman JM, et al. Critical review of the Appropriate Use Criteria for amyloid imaging: Effect on diagnosis and patient care. Alzheimers Dement (Amst). 2016;5:15–22. Epub 2017/01/06. pmid:28054024.
  16. Turk KW, Vives-Rodriguez A, Schiloski KA, Marin A, Wang R, Singh P, et al. Amyloid PET ordering practices in a memory disorders clinic. Alzheimer’s & dementia (New York, N Y). 2022;8(1):e12333. Epub 2022/08/23. pmid:35992217.
  17. Pemberton HG, Collij LE, Heeman F, Bollack A, Shekari M, Salvadó G, et al. Quantification of amyloid PET for future clinical use: a state-of-the-art review. European journal of nuclear medicine and molecular imaging. 2022;49(10):3508–28. Epub 20220407. pmid:35389071.
  18. Rhodius-Meester HFM, van Maurik IS, Koikkalainen J, Tolonen A, Frederiksen KS, Hasselbalch SG, et al. Selection of memory clinic patients for CSF biomarker assessment can be restricted to a quarter of cases by using computerized decision support, without compromising diagnostic accuracy. PloS one. 2020;15(1):e0226784. Epub 2020/01/16. pmid:31940390.
  19. van der Flier WM, Pijnenburg YA, Prins N, Lemstra AW, Bouwman FH, Teunissen CE, et al. Optimizing patient care and research: the Amsterdam Dementia Cohort. J Alzheimers Dis. 2014;41(1):313–27. pmid:24614907
  20. van der Flier WM, Scheltens P. Amsterdam Dementia Cohort: Performing Research to Optimize Care. J Alzheimers Dis. 2018;62(3):1091–111. Epub 2018/03/23. pmid:29562540.
  21. de Wilde A, van Maurik IS, Kunneman M, Bouwman F, Zwan M, Willemse EA, et al. Alzheimer’s biomarkers in daily practice (ABIDE) project: Rationale and design. Alzheimers Dement (Amst). 2017;6:143–51. Epub 2017/02/28. pmid:28239639.
  22. McKhann GM, Knopman DS, Chertkow H, Hyman BT, Jack CR, Kawas CH, et al. The diagnosis of dementia due to Alzheimer’s disease: recommendations from the National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimers Dement. 2011;7(3):263–9. pmid:21514250
  23. Rascovsky K, Hodges JR, Knopman D, Mendez MF, Kramer JH, Neuhaus J, et al. Sensitivity of revised diagnostic criteria for the behavioural variant of frontotemporal dementia. Brain. 2011;134(Pt 9):2456–77. pmid:21810890
  24. Gorno-Tempini ML, Hillis AE, Weintraub S, Kertesz A, Mendez M, Cappa SF, et al. Classification of primary progressive aphasia and its variants. Neurology. 2011;76(11):1006–14. pmid:21325651
  25. Roman GC, Tatemichi TK, Erkinjuntti T, Cummings JL, Masdeu JC, Garcia JH, et al. Vascular dementia: diagnostic criteria for research studies. Report of the NINDS-AIREN International Workshop. Neurology. 1993;43(2):250–60. pmid:8094895
  26. Folstein MF, Folstein SE, McHugh PR. "Mini-mental state". A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12(3):189–98. pmid:1202204
  27. Reitan R. Validity of the Trail Making Test as an indicator of organic brain damage. Percept Mot Skills. 1958;8:271–6.
  28. Van der Elst W, Van Boxtel MP, Van Breukelen GJ, Jolles J. Normative data for the Animal, Profession and Letter M Naming verbal fluency tests for Dutch speaking participants and the effects of age, education, and sex. J Int Neuropsychol Soc. 2006;12(1):80–9. pmid:16433947
  29. Cummings JL, Mega M, Gray K, Rosenberg-Thompson S, Carusi DA, Gornbein J. The Neuropsychiatric Inventory: comprehensive assessment of psychopathology in dementia. Neurology. 1994;44(12):2308–14. pmid:7991117
  30. Scheltens P, Leys D, Barkhof F, Huglo D, Weinstein HC, Vermersch P, et al. Atrophy of medial temporal lobes on MRI in "probable" Alzheimer’s disease and normal ageing: diagnostic value and neuropsychological correlates. J Neurol Neurosurg Psychiatry. 1992;55(10):967–72. pmid:1431963
  31. Koikkalainen JR, Rhodius-Meester HFM, Frederiksen KS, Bruun M, Hasselbalch SG, Baroni M, et al. Automatically computed rating scales from MRI for patients with cognitive disorders. European radiology. 2019. Epub 2019/02/24. pmid:30796570.
  32. Koikkalainen J, Rhodius-Meester H, Tolonen A, Barkhof F, Tijms B, Lemstra AW, et al. Differential diagnosis of neurodegenerative diseases using structural MRI data. Neuroimage Clin. 2016;11:435–49. pmid:27104138
  33. Bruun M, Koikkalainen J, Rhodius-Meester HFM, Baroni M, Gjerum L, van Gils M, et al. Detecting frontotemporal dementia syndromes using MRI biomarkers. NeuroImage Clinical. 2019;22:101711. Epub 2019/02/12. pmid:30743135.
  34. Klunk WE, Koeppe RA, Price JC, Benzinger TL, Devous MD, Jagust WJ, et al. The Centiloid Project: standardizing quantitative amyloid plaque estimation by PET. Alzheimer’s & dementia: the journal of the Alzheimer’s Association. 2015;11(1):1–15.e1–4. pmid:25443857.
  35. Collij LE, Salvadó G, de Wilde A, Altomare D, Shekari M, Gispert JD, et al. Quantification of [(18) F]florbetaben amyloid-PET imaging in a mixed memory clinic population: The ABIDE project. Alzheimers Dement. 2022. Epub 20221207. pmid:36478646.
  36. Mattila J, Koikkalainen J, Virkki A, Simonsen A, van Gils M, Waldemar G, et al. A disease state fingerprint for evaluation of Alzheimer’s disease. J Alzheimers Dis. 2011;27(1):163–76. pmid:21799247
  37. Bruun M, Frederiksen KS, Rhodius-Meester HFM, Baroni M, Gjerum L, Koikkalainen J, et al. Impact of a Clinical Decision Support Tool on Dementia Diagnostics in Memory Clinics: The PredictND Validation Study. Current Alzheimer research. 2019;16(2):91–101. Epub 2019/01/04. pmid:30605060.
  38. Altomare D, Ferrari C, Festari C, Guerra UP, Muscio C, Padovani A, et al. Quantitative appraisal of the Amyloid Imaging Taskforce appropriate use criteria for amyloid-PET. Alzheimers Dement. 2018;14(8):1088–98. Epub 20180419. pmid:29679576.
  39. Buckner RL, Head D, Parker J, Fotenos AF, Marcus D, Morris JC, Snyder AZ. A unified approach for morphometric and functional data analysis in young, old, and demented adults using automated atlas-based head size normalization: reliability and validation against manual measurement of total intracranial volume. Neuroimage. 2004;23(2):724–38. Epub 2004/10/19. pmid:15488422.
  40. Cluitmans L, Mattila J, Runtti H, van Gils M, Lotjonen J. A MATLAB toolbox for classification and visualization of heterogenous multi-scale human data using the Disease State Fingerprint method. Stud Health Technol Inform. 2013;189:77–82. Epub 2013/06/07. pmid:23739361.
  41. Altomare D, Collij L, Caprioglio C, Scheltens P, van Berckel BNM, Alves IL, et al. Description of a European memory clinic cohort undergoing amyloid-PET: The AMYPAD Diagnostic and Patient Management Study. Alzheimers Dement. 2022. Epub 2022/06/19. pmid:35715930.
  42. Albright J, Ashford MT, Jin C, Neuhaus J, Rabinovici GD, Truran D, et al. Machine learning approaches to predicting amyloid status using data from an online research and recruitment registry: The Brain Health Registry. Alzheimers Dement (Amst). 2021;13(1):e12207. Epub 2021/06/18. pmid:34136635.
  43. Palmqvist S, Insel PS, Zetterberg H, Blennow K, Brix B, Stomrud E, et al. Accurate risk estimation of beta-amyloid positivity to identify prodromal Alzheimer’s disease: Cross-validation study of practical algorithms. Alzheimers Dement. 2019;15(2):194–204. Epub 2018/10/27. pmid:30365928.
  44. Pekkala T, Hall A, Ngandu T, van Gils M, Helisalmi S, Hänninen T, et al. Detecting Amyloid Positivity in Elderly With Increased Risk of Cognitive Decline. Front Aging Neurosci. 2020;12:228. Epub 2020/08/28. pmid:32848707.
  45. 2023 Alzheimer’s disease facts and figures. Alzheimers Dement. 2023;19(4):1598–695. Epub 20230314. pmid:36918389.
  46. Cummings J, Aisen P, Apostolova LG, Atri A, Salloway S, Weiner M. Aducanumab: Appropriate Use Recommendations. J Prev Alzheimers Dis. 2021;8(4):398–410. pmid:34585212.
  47. Lam J, Hlávka J, Mattke S. The Potential Emergence of Disease-Modifying Treatments for Alzheimer Disease: The Role of Primary Care in Managing the Patient Journey. Journal of the American Board of Family Medicine: JABFM. 2019;32(6):931–40. Epub 2019/11/11. pmid:31704763.
  48. Blennow K, Galasko D, Perneczky R, Quevenco F-C, van der Flier WM, Akinwonmi A, et al. The potential clinical value of plasma biomarkers in Alzheimer’s disease. Alzheimer’s & Dementia. 2023;19(12):5805–16. pmid:37694991
  49. Tate A, Suárez-Calvet M, Ekelund M, Eriksson S, Eriksdotter M, Van Der Flier WM, et al. Precision medicine in neurodegeneration: the IHI-PROMINENT project. Frontiers in neurology. 2023;14:1175922. Epub 20230802. pmid:37602259.
  50. van Gils AM, Visser LNC, Hendriksen HMA, Georges J, van der Flier WM, Rhodius-Meester HFM. Development and design of a diagnostic report to support communication in dementia: Co-creation with patients and care partners. Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring. 2022;14(1):e12333. pmid:36092691
  51. Tolonen A, Rhodius-Meester HFM, Bruun M, Koikkalainen J, Barkhof F, Lemstra A, et al. Data-Driven Differential Diagnosis of Dementia Using Multiclass Disease State Index Classifier. Front Aging Neurosci. 2018;10:111. pmid:29922145
  52. van Gils AM, van de Beek M, van Unnik A, Tolonen A, Handgraaf D, van Leeuwenstijn M, et al. Optimizing cCOG, a Web-based tool, to detect dementia with Lewy Bodies. Alzheimers Dement (Amst). 2022;14(1):e12379. Epub 20221222. pmid:36569383.
  52. 52. van Gils AM, van de Beek M, van Unnik A, Tolonen A, Handgraaf D, van Leeuwenstijn M, et al. Optimizing cCOG, a Web-based tool, to detect dementia with Lewy Bodies. Alzheimers Dement (Amst). 2022;14(1):e12379. Epub 20221222. pmid:36569383.