
Validity of screening instruments for the detection of dementia and mild cognitive impairment in hospital inpatients: A systematic review of diagnostic accuracy studies

  • Aljoscha Benjamin Hwang ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Validation, Visualization, Writing – original draft

    Affiliations Clinic for Neurology and Neurorehabilitation, Cantonal Hospital Lucerne, Lucerne, Switzerland, Department of Health Sciences and Health Policy, University of Lucerne, Lucerne, Switzerland

  • Stefan Boes ,

    Contributed equally to this work with: Stefan Boes, Thomas Nyffeler, Guido Schuepfer

    Roles Conceptualization, Methodology, Project administration, Resources, Supervision, Validation, Writing – review & editing

    Affiliation Department of Health Sciences and Health Policy, University of Lucerne, Lucerne, Switzerland

  • Thomas Nyffeler ,

    Contributed equally to this work with: Stefan Boes, Thomas Nyffeler, Guido Schuepfer

    Roles Conceptualization, Methodology, Supervision, Validation, Writing – review & editing

    Affiliation Clinic for Neurology and Neurorehabilitation, Cantonal Hospital Lucerne, Lucerne, Switzerland

  • Guido Schuepfer

    Contributed equally to this work with: Stefan Boes, Thomas Nyffeler, Guido Schuepfer

    Roles Conceptualization, Methodology, Resources, Supervision, Writing – review & editing

    Affiliation Staff Medicine, Cantonal Hospital Lucerne, Lucerne, Switzerland




As the population ages, Alzheimer's disease and other subtypes of dementia are becoming increasingly prevalent. However, in recent years, diagnosis has often been delayed or not made at all. Thus, improving the rate of diagnosis has become an integral part of national dementia strategies. Although screening for dementia remains controversial, the case is strong for screening for dementia and other forms of cognitive impairment in hospital inpatients. For this reason, the objective of this systematic review was to provide clinicians who wish to implement screening with an up-to-date choice of cognitive tests with the most extensive evidence base for use in elective hospital inpatients.


For this systematic review, PubMed, PsycINFO, and the Cochrane Library were searched using a multi-concept search strategy. The databases were accessed on April 10, 2019. All cross-sectional studies that utilized brief, multi-domain cognitive tests as the index test and a reference standard diagnosis of dementia or mild cognitive impairment as the comparator were included. Only studies conducted in the hospital setting, sampling from unselected, elective inpatients older than 64, were considered.


Six studies met the inclusion criteria, with a total of 2112 participants. Diagnostic accuracy data for the Six-Item Cognitive Impairment Test, Cognitive Performance Scale, Clock-Drawing Test, Mini-Mental Status Examination, and Time & Change test were extracted and descriptively analyzed. Clinical and methodological heterogeneity between the studies precluded performing a meta-analysis.


This review found only a small number of instruments and was not able to recommend a single best instrument for use in a hospital setting. Although it was not possible to estimate pooled operating characteristics, the included description of instrument characteristics, the descriptive analysis of performance measures, and the critical evaluation of the reporting studies may contribute to clinicians' choice of the screening instrument that best fits their purpose.



Dementia is a progressive syndrome of global cognitive impairment. It encompasses a group of neurodegenerative disorders that are characterized by a progressive and irreversible decline of brain functions, with symptoms such as memory loss, disorientation, and the inability to perform daily activities of living independently [1]. Possible epiphenomena include neuropsychiatric symptoms and challenging behaviors of varying type and severity [2]. The most common dementia types include vascular dementia (VD), dementia with Lewy bodies (LBD), and Alzheimer's disease (AD), which is the neuropathological diagnosis in around 40% of patients with a clinically diagnosed dementia disorder [3–6]. The process of AD pathology can be described as a continuum with a long asymptomatic preclinical stage; an early symptomatic clinical stage, which encompasses mild cognitive impairment (MCI), or prodromal AD; and a dementia stage, with dementia further divided into mild, moderate, and severe [7–9].

In 2015, more than 45 million people worldwide were estimated to be living with dementia. This number will almost double every 20 years, reaching 75 million in 2030 [10]. In Switzerland, around 145,000 people were affected as of 2017, with a prevalence of 9% in individuals aged over 65 years, increasing to approximately 30% in adults aged 85 years and older [11]. Given that the prevalence of dementia rises steeply after the age of 65, the number of people living with dementia is expected to increase significantly due to the growing elderly population in Switzerland [12]. For the time being, Alzheimer's disease and related forms of dementia are incurable [13], and the burden on Swiss society is expected to grow substantially in the future. A national study, commissioned by the Swiss Alzheimer's Society, reported estimated annual costs of dementia at 6.3 billion CHF for 2007 and 6.96 billion CHF for 2009 [14, 15]. These findings are consistent with contemporary international studies, which predict rising global costs at similar rates until 2030 and beyond [10, 16, 17]. In response to this, Switzerland and many other countries have recognized dementia as a public health priority [18–20] and developed national dementia strategies [21].

Despite the absence of a cure for dementia, numerous strategies emphasize earlier diagnosis and intervention [22–25] in accordance with the Alzheimer Cooperative Valuation in Europe (ALCOVE), which recommends that diagnosis should generally occur earlier than is currently common practice [26]. When speaking about earlier diagnosis, the conventional understanding usually distinguishes between early diagnosis and timely diagnosis. Whereas the term "early diagnosis" reflects the identification of people in the asymptomatic phase as a result of population or targeted screening, the term "timely diagnosis" is used to reflect diagnosis occurring at a time when patients and their family first notice changes in cognitive performance and seek medical examination [27]. In recent years, diagnosis has often been delayed or not made at all [22, 27, 28]. Multiyear delays between first symptom occurrence and presentation, as well as inactivity by health professionals in offering help, have been attributed mostly to patients or families lacking the knowledge to recognize the symptoms as part of a medical condition, the mutual false belief that nothing can be done, and the stigma of dementia preventing open discussion [27–30]. According to the World Alzheimer Report, only between a third and a half of people with dementia ever receive a formal diagnosis, which is usually necessary for insurers to pay for medical services [22, 27].

In view of this well-documented and widely recognized problem of inadequate recognition of dementia, national and international advisory and policy-making groups have evaluated the possibility of earlier diagnosis facilitated by screening for dementia or mild cognitive impairment [31–37]. However, the U.S. Preventive Services Task Force concluded in its 2014 statement that evidence was insufficient to recommend routine screening for cognitive impairment in community-dwelling adults in the general primary care population who are older than 65 years [38]. Consistently, none of the remaining organizations recommended routine screening of patients without symptoms of cognitive impairment, recommending instead a diagnostic workup when memory problems or dementia are suspected [36, 39].

This drive toward earlier diagnosis and intervention has been accompanied by a debate about the value of arriving at a diagnosis of dementia earlier in the disease process [27, 40–42]. Several studies reported evidence that supports a possible beneficial effect of early and accurate diagnosis [27, 28, 40]. Early diagnosis potentially offers the opportunity for early interventions that slow down or lessen the disease process [43–48], implementation of coordinated care plans while the patient is still competent to do so [49], better management of symptoms [50, 51], and postponement of institutionalization [47]. On the other hand, it should be acknowledged that diagnostic processes are costly and can be accompanied by major psychological and psychosocial effects [27, 52–54]. Another concern is misdiagnosis, which can result in unnecessary or incorrect treatment [55].

Since then, new research findings regarding benefits and harms, the approval of new pharmaceutical agents for treatment, and growing media attention have converged to challenge this previous thinking about screening for cognitive impairment [56, 57]. As a result, changes in health care policies and priorities, such as the introduction of an opportunistic "dementia case-finding scheme" in the United Kingdom [58, 59], the Alzheimer's Foundation of America's National Memory Screening Program [60], and the implementation of cognitive assessments in the Medicare Annual Wellness Visit in the United States [61] have occurred.


Although most policy statements acknowledge that physicians should be sensitive to evidence of cognitive impairment and should act on their suspicion, recommendations for operationalizing the detection of possible dementia are scarce [35, 62]. Usually, frontline recognition and assessment of people with possible dementia, regardless of the setting, requires a test of cognitive function, third-party anamnesis, or both [38, 62]. At the moment, neuropsychological tests, usually developed and validated in primary care and memory clinics, are regarded as the most implementable instruments for screening [35, 36, 63, 64]. They are usually paper-and-pencil based, easy to administer, and take between 10 and 45 minutes to complete. Country-specific guidelines and/or systematic literature reviews on which instruments to favor have already been published for the primary care setting [65–77]. With respect to the hospital setting, however, where dementia and MCI are much more prevalent [78–80], comparable guidelines concentrate on selected patient groups, such as geriatric, stroke, or emergency patients [81–84]. The variations in demographic features, health condition, disease prevalence, and severity, but also differences in test conditions (e.g., timing, interventions between index test and reference standard), entail separate external validation prior to general application. In response to this gap in information, two systematic reviews have recently been conducted to establish adequate tools for dementia screening, considering the particularities of secondary care. In 2010, Appels and colleagues [85] reported validation studies sampling from selected hospital outpatients, with a focus on mild dementia and rather extensive screening instruments (10 to 45 minutes administration time). In comparison, in 2013, Jackson and colleagues [86] performed a review and meta-analysis of validated dementia screening instruments in unselected general hospital inpatients.
Unselected, elective inpatients, who account for some 40% of all hospitalizations, have not been evaluated so far [87–89].

Clinical role of index test

In the hospital setting, the knowledge that a patient has or might have dementia or MCI is essential because of the multiple immediate implications for care. Hospital medical staff may administer brief cognitive screening tests before or on the day of admission and, depending on the test results, trigger additional investigations to confirm whether a diagnosis is present, provide appropriate care during the hospital stay (e.g., choice of anesthesia, involvement of the primary caregiver, medication management), and realize adequate discharge management [90–93]. This may then help avoid new medical events known to be more likely among patients with cognitive impairment and promote earlier diagnosis [94, 95].


Many screening instruments are recommended for use in the primary care setting, but far fewer for screening older hospital inpatients. The aim of this review is to provide clinicians who wish to implement screening for dementia or MCI with an up-to-date choice of practical and accurate instruments that have been well validated for use in unselected, elective hospital inpatients.


Eligibility criteria

Articles were limited to the English and German languages. Abstracts fulfilling the following criteria were included:

Clinical setting: Only studies conducted in a hospital setting (general or university hospital) involving elective inpatients over 64 years of age as the main study group, or as a clearly defined subgroup, were included. The aim of the review was to identify screening instruments and to establish their diagnostic accuracy in unselected samples within the hospital setting. For this reason, studies including participants who were selected on the basis of a specific disease or medical field (e.g., Parkinson's disease or orthopedic patients) were excluded. In addition, wards providing services exclusively for patients with diseases related to dementia (psychiatric and neurology) were excluded. In the case of mixed settings, studies were excluded if no separate data were presented for elective inpatients when they were in the minority.

Target condition: Mild cognitive impairment (MCI), dementia, and any common dementia subtype, including Alzheimer's disease (AD), vascular dementia (VD), Lewy body dementia (LBD), and frontotemporal dementia (FTD).

Index tests: Screening during the pre-operative examination or hospitalization of the more stable, elective inpatients might be less affected by time as a limiting factor. In comparison, especially in emergency departments or the primary care setting, where time is scarce, administration time is a key determinant of whether screening instruments are used in clinical practice [96, 97]. For this reason, screening instruments with a short, but also medium, administration time (up to 15 minutes in non-impaired patients) were considered. Furthermore, instruments had to cover more than one cognitive domain to be eligible for inclusion, because the coverage of multiple cognitive domains increases the instrument's sensitivity to different types of dementia [96, 98]. Optimally, the instrument had to cover at least the domains of "learning and memory" and "executive function", which are considered central to a diagnosis of dementia, most particularly to its most prevalent forms, Alzheimer's disease (AD) and vascular dementia (VD) [3, 99, 100].

Although the incorporation of informant reports into assessments for dementia is known to increase the overall accuracy of detecting cases and non-cases [101–103], tests that are wholly informant rated were not considered, solely because, in the clinical setting, the presence of an informant is not the norm and proxy rating raises confidentiality concerns. Self-administered tests, measures that assessed daily living activities and functional status, and telephonic or computerized self-tests were also excluded.

The full-texts were reviewed against the following additional inclusion criteria:

Types of studies: Cross-sectional studies, in which inpatients received the index test and reference standard diagnostic assessment during a hospital stay, preferably on the day of admission and before the commencement of treatment, were included. Studies were excluded for inadequate reporting (e.g., studies that did not report sensitivity or specificity), non-availability of the full-text article, or if subjects with prevalent target disease at baseline were included. Case-control studies and longitudinal studies (or related, nested case-control studies) were excluded due to the high risk of spectrum bias [96]. Also, studies sampling fewer than 100 participants were excluded due to the potential for bias in selection and lack of representativeness.

Reference standards: Studies were included that used a reference standard for MCI, all-cause dementia, or any standardized definition of subtypes. For MCI, the reference standard diagnosis had to be made according to published criteria, that is, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) [104], National Institute on Aging-Alzheimer's Association [105], Petersen [106], Gauthier [107], or Winblad [108] criteria. For all-cause dementia, any version of the DSM [104, 109] and the International Classification of Diseases (ICD) [110] criteria were included. For dementia subtypes (e.g., AD or probable AD, vascular dementia, or Lewy body dementia), common diagnostic criteria were included [111–114]. In order not to further restrict the number of eligible studies, diagnostic accuracy studies that compared the index test with a diagnosis based on expert consensus, or with results of the Mini-Mental State Examination (MMSE) test, were also included. Studies that applied a neuropathological diagnosis, which can only be verified post-mortem, were excluded [115].

Information sources

An electronic literature search was conducted in the following databases: PubMed from 1972, Cochrane Library from 1992, and PsycINFO from 1967. All databases were accessed on September 6, 2018. To ensure saturation of the published literature, relevant systematic reviews, the reference sections of selected articles, and the 'similar articles' feature in PubMed were assessed for further relevant studies. An update search was performed on April 10, 2019.


To search the databases, a multi-concept search strategy was applied. The primary strategy used the following concepts: (a) Disease: Dementia and cognition disorders (general terms, both free text and MeSH, exploded); (b) Outcome: Validation and sensitivity and specificity values (both free text and MeSH, exploded); (c) Intervention: Diagnostic tests, mass screening, etc. (both free text and MeSH, exploded), and (d) Setting: Aged in-patients (both free text and MeSH, exploded).

The secondary strategy involved a review of dementia practice guidelines [35] to identify recommended screening instruments. Irrespective of the setting targeted by those practice guidelines, recommended instruments were used as key search terms to run an additional electronic search if they met the aforementioned criteria for a screening instrument. The original search strategies were developed for the PubMed database and slightly adapted to run on the Cochrane Library and PsycINFO. The search strategy was peer-reviewed (by MA) using the PRESS 2015 Guideline Evidence-Based Checklist [116]. Disagreements were discussed and decided by consensus. The search strategy for PubMed is documented in the Supporting Information (S1 Appendix). Search strategies for PsycINFO and the Cochrane Library are available from the corresponding author upon request.

Study selection

The titles and abstracts (where needed) were independently screened by the author (ABH) and one trained assessor (MG). Full-texts were independently reviewed by two assessors (ABH and MG). Any disagreements were discussed and decided by consensus. For all articles whose full-text was screened, additional information from authors was sought to resolve questions about eligibility, and reasons for exclusion were recorded (maximum three email contact attempts; if data was not available, the article was excluded). All articles selected were included only after reaching a consensus among all the authors. The study selection process was detailed in a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram.

Data extraction and management

From the selected articles, the following data was extracted using an extraction sheet, which was pilot-tested on three randomly selected articles: Country; type of hospital; patient group; target condition; sample size; age, and mean age; gender ratio; index test and applied cut-off; reference standard; point in time of screening; other assessments; assessment for delirium; prevalence, and sensitivity and specificity. The complete data extraction form is available on reasonable request from the corresponding author. Data extraction was done by one author (ABH) and one reviewer (MG). Any disagreements were decided by consensus. In case of uncertainties, study authors were contacted by e-mail (maximum of three email attempts; if data was not available, the article was tagged with a no-data-badge).

Risk of bias and applicability

One author (ABH) and one reviewer (MG) independently assessed, discussed, and reached a consensus on the methodological quality of each included study, using the recommended quality assessment tool for Diagnostic Accuracy Studies (QUADAS-2) [117, 118] and the Standards for the Reporting of Diagnostic Accuracy studies checklist (STARD 2015) [119]. Brief definitions describing the operational application of both instruments are detailed in the Supporting Information (S2 and S3 Appendices).

Statistical analysis and synthesis of results

Statistical analysis was performed according to the Cochrane guidelines for diagnostic test accuracy reviews [118]. For all included studies, diagnostic accuracy data was presented in two-by-two tables and used to calculate sensitivity and specificity values as well as measures of statistical uncertainty (95% confidence intervals). Data from each study was presented graphically by plotting estimates of sensitivities and specificities on a coupled forest plot. For studies that reported more than one threshold, only sensitivity and specificity data at the most common threshold were included in the two-by-two table. Investigation of heterogeneity was done through visual examination of the forest plot.
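As a concrete illustration of the calculations described above, the following sketch derives sensitivity, specificity, and 95% confidence intervals from a two-by-two table. The counts are hypothetical, not data from any included study, and the Wilson score interval is used here as one common choice of interval method.

```python
# Sensitivity, specificity, and Wilson 95% CIs from a 2x2 diagnostic
# accuracy table (tp/fp/fn/tn counts against the reference standard).
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a proportion (z=1.96 -> 95%)."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

def accuracy_measures(tp, fp, fn, tn):
    """Sensitivity and specificity, each with its 95% CI."""
    sens = tp / (tp + fn)  # proportion of diseased correctly detected
    spec = tn / (tn + fp)  # proportion of non-diseased correctly cleared
    return {
        "sensitivity": (sens, wilson_ci(tp, tp + fn)),
        "specificity": (spec, wilson_ci(tn, tn + fp)),
    }

# Hypothetical counts: 30 true positives, 10 false negatives,
# 15 false positives, 145 true negatives.
result = accuracy_measures(tp=30, fp=15, fn=10, tn=145)
print(result["sensitivity"][0])  # 0.75
print(result["specificity"][0])  # 0.90625
```

Plotting such paired estimates with their intervals per study is what produces the coupled forest plot used for the visual heterogeneity check.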

Registration and protocol

The pre-defined review protocol was registered at the PROSPERO international prospective register of systematic reviews (registration number CRD42019133093). The protocol for this review and its primary search strategy are accessible online and as supporting information; see S1 and S4 Appendices.


Study selection and characteristics

The initial literature search revealed 1524 citations, 33 of which originated from the reference sections of selected systematic reviews. In the end, 1518 articles were excluded, and six studies investigating the validity of five different instruments were included in the reviewing process. The results of the search are summarized in the flow diagram (Fig 1. Study flow diagram). The inter-rater agreement between the author (ABH) and the peer-reviewer (MG) was moderate, with a Cohen's kappa of 0.48.
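For illustration, inter-rater agreement of the kind reported above can be computed as follows. This is a minimal sketch; the two rating lists are hypothetical include/exclude decisions, not the actual screening data of this review.

```python
# Cohen's kappa: chance-corrected agreement between two raters.
# kappa = (observed agreement - expected agreement) / (1 - expected agreement)
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions by two reviewers:
a = ["include", "include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "exclude", "exclude", "include", "exclude"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```

Note that kappa discounts the agreement expected by chance alone, which is why a raw agreement of 5/6 here yields a kappa of only about 0.67.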

Of the six studies, two were conducted in Australia and one each took place in the Netherlands, Switzerland, the UK, and the US. All studies sampled from consecutive inpatients admitted to university hospitals [120–122], general hospitals [123], or a mix of both [124, 125]. While five studies included inpatients from medical and surgical wards, one study included inpatients from the general internal medicine ward only [122]. Merely two studies reported admission or discharge diagnoses [120, 122]. Also, reporting of patient characteristics (ethnicity, marital status, living situation, and educational attainment) was inconsistent. Diagnoses included all-cause dementia (four studies) and cognitive impairment (two studies), as defined by the third and fourth DSM versions [121, 123, 125], an expert diagnosis following interview and cognitive assessment [126], and MMSE test results as the criterion standard [122, 124]. Delirium as a cause of cognitive impairment was ruled out in four studies [121, 123, 125, 126]. Studies ranged in size from 103 to 776 participants (total n = 2112), including patients older than 69 years, with a mean age between 78 and 80 years. On average, studies included slightly more women (58%) than men (42%). The mean point prevalence of the target condition ranged between 14% and 33%. Further study details, that is, index test, cut-off value, point in time of screening, and other performed assessments, can be found in the Characteristics of Included Studies (Table 1).

Risk of bias and applicability

To assess study quality and risk of bias, all included studies were reviewed using the QUADAS-2 methodology (Fig 2. Risk of bias and applicability concerns graph). Of all included studies, only the study by Travers and colleagues [125] of the Cognitive Performance Scale (CPS) was rated at low risk in all categories. In the patient selection domain, two studies were rated at high risk of bias due to inappropriate exclusion of privately insured and comatose patients [122] and of patients who were not able to sustain their attention sufficiently [124]. Two studies were considered to have an unclear risk because it was not stated whether a consecutive or random sample was enrolled [121]. In the index test domain, one study was rated at high risk of bias because the index test results were interpreted with knowledge of the results of the reference standard [121]. In the reference standard domain, four studies were rated at high risk, and two studies were considered to be at unclear risk of bias. The high risk-rated studies interpreted the reference standard results with knowledge of the results of the index test [121, 125] or used a reference standard that is not likely to correctly classify the target condition [122, 124]. The studies rated as unclear provided only vague information about whether the reference standard rater was blinded to the index test results [123, 126]. In the flow and timing domain, three studies were rated at high risk of bias because not all patients were included in the analysis [121] or not all patients received the same reference standard [123]. Two studies were rated with unclear risk of bias; they did not provide enough information about whether any interventions were performed between the administration of the index test and the reference standard [124, 126].

Fig 2. Risk of bias and applicability concerns graph.

(A) Abbreviations: CDT: Clock Drawing Test; T&C: Time & Change Test; MMSE: Mini-Mental Status Examination; CPS: Cognitive Performance Scale; 6CIT: Six-Item Cognitive Impairment test.

Finally, regarding the assessment of applicability concerns, for the majority of studies there was no concern that the included patients, the conduct and interpretation of the index test, or the reference standard did not match the review question. However, for four studies there were high applicability concerns, because index testing took place more than 48 hours after hospital admission and there was insufficient information on whether, prior to index testing, any measures with potentially negative effects on patients' cognitive performance had been performed [121, 124, 126].

Reporting quality was assessed using the STARD guideline. Limitations in reporting were found in all included articles. Reporting items of particular concern were: description of the sample size calculation (Item 18: no paper reported its pre-specified sample size), flow of participants (Item 19: no paper visualized the patient flow using a flow diagram), distribution of alternative diagnoses in those without the target condition (Item 21b: no paper established and documented diagnoses of subjects without the target condition), and reporting of the registration number and name of registry (Item 28: only one study reported on this item). Items 15 and 16, describing how indeterminate index test or reference standard results and missing data were handled, were also affected by irregular reporting (only two studies reported on these items). Further details are illustrated in Table 2 (STARD 2015 Checklist).


For the final review, six studies on five unique screening instruments were selected [121–126]. The instruments studied were the Clock-Drawing Test (CDT), the Cognitive Performance Scale (CPS), the Mini-Mental Status Examination (MMSE), the Time & Change (T&C) test, and the Six-Item Cognitive Impairment Test (6-CIT). The MMSE and T&C were each administered in two studies [121, 122, 125]. In the following, all five instruments are briefly described, followed by a portrayal of the diagnostic accuracy data in the setting to be evaluated.


Clock Drawing Test (CDT). The CDT is a commonly used, brief neuropsychological test, sensitive to cognitive changes and functional skills [127]. Originally, the CDT was developed as an instrument for attentional and visual disorders [128, 129]. Due to its valuable characteristics (i.e., free of charge, quick, easy to administer, relatively high robustness), the CDT has gained popularity among practitioners and researchers as a screening instrument for Alzheimer's dementia, either by itself or as part of a test battery [130–137]. Because of its simplicity and brevity, the CDT is well accepted by older and very old adults [138]. Although the CDT covers several cognitive domains and can thus provide some more information on the actual nature of cognitive impairment, it does not differentiate among Alzheimer's disease (AD), dementia with Lewy bodies (DLB), and cognitively impaired Parkinson's disease [139, 140]. In clinical practice, there are basically three approaches to administering the CDT. The most common administration instructions ask the patient to draw a clock face with all its numbers and set the time to 10 past 11 [137]. Variations can include a pre-drawn clock face, a different time setting, or a toy clock from which the patient needs to read the time [132, 141, 142]. In addition to the differences in administration, there are also various scoring methods [137, 143]. Commonly put into practice is the classification of drawn clocks into distinct classes. Death and colleagues distinguish four classes: normal clocks (4), clocks with minor spacing abnormalities (3), clocks with major spacing abnormalities (2), and bizarre clocks (1). Clocks in classes 1 and 2 indicate cognitive impairment; classes 3 and 4 indicate no cognitive impairment [123].
In the literature, there is no consensus about which scoring method is the most adequate, mainly because comparative studies have been criticized for including methodologically diverse sets of studies [144].

Cognitive Performance Scale (CPS). The Cognitive Performance Scale was developed in 1994 as a standardized, comprehensive assessment instrument for cognitive function in nursing-home residents [145]. It is based on a subset of five items of the Minimum Data Set (MDS), which were combined to create a single, functionally meaningful, seven-category ranked scale [145]. The CPS is free of charge, takes less than 3 minutes to administer, and covers several cognitive subdomains (i.e., short- and long-term memory, orientation, and executive function) [145]. For scoring, all items are combined by a branching logic with five decision nodes: daily decision-making ability, short-term memory, procedural memory, ability to make oneself understood, and ability to feed oneself. Using this branching logic, patients can be classified into seven ranked categories, ranging from Intact (0) to Very Severe Impairment (6) [145]. According to Morris and colleagues, a rank of 2 or higher indicates the presence of cognitive impairment.
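To illustrate the general style of such a branching-logic scale, the sketch below maps a few item responses to a ranked category. The nodes, thresholds, and branch order are simplified assumptions for illustration only and do not reproduce the published CPS algorithm; see Morris and colleagues [145] for the actual decision rules.

```python
# Hypothetical, simplified branching-logic scale in the *style* of the
# CPS: ordered decision nodes map item responses (0 = intact ... 3 =
# severely impaired) to a ranked category from 0 (intact) to 6 (very
# severe impairment). Illustrative only, NOT the published CPS logic.
def toy_cps_like_score(decision_making, short_term_memory,
                       makes_self_understood, eats_independently):
    # First node: count how many cognitive items show any impairment.
    impairments = sum(x > 0 for x in (decision_making, short_term_memory,
                                      makes_self_understood))
    if impairments == 0:
        return 0  # intact
    # Severe decision-making impairment routes to the severe branch,
    # where the eating item splits "severe" from "very severe".
    if decision_making >= 3:
        return 6 if eats_independently == 3 else 5
    # Otherwise, rank rises with the number of impaired items,
    # capped below the severe range.
    return min(1 + impairments, 4)

print(toy_cps_like_score(0, 0, 0, 0))  # 0 (intact)
```

The point of such a design is that a single ranked score emerges from a handful of routinely collected items, without administering a separate cognitive test.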

Because of several limitations, namely low sensitivity to early impairment, overestimation of impairment in dependent patients with comorbidities and depressive symptoms, and underestimation in older patients, Morris and colleagues revised the CPS in 2016 [146]. The revised Cognitive Performance Scale 2 includes a new, most-independent category and a series of dichotomous severity options, providing a stepped hierarchical report of cognitive performance decline. The number of levels of cognitive impairment was expanded from seven to nine, enabling repeated assessments to detect changes, in particular at early levels of cognitive decline.

At present, both CPS versions can only be scored following administration of the complete interRAI AC, an instrument to obtain detailed information about a patient's physical and cognitive status and psychosocial functioning (including the Minimum Data Set).
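As an illustration of this kind of hierarchical scoring, the sketch below mimics the structure of a CPS-style branching classification. The item codings, thresholds, and node order are hypothetical placeholders, not the published Morris and colleagues algorithm, which must be consulted for actual scoring.

```python
def cps_rank_sketch(decision_making, short_term_memory, procedural_memory,
                    makes_self_understood, eats_independently):
    """Illustrative hierarchical classification in the spirit of the CPS.

    The first four inputs are hypothetical 0 (intact) to 3 (severely
    impaired) codes; eats_independently is a boolean. The real CPS item
    codings, thresholds, and node order differ (Morris et al., 1994).
    """
    # Severely impaired decision making dominates the classification.
    if decision_making >= 3:
        # Eating dependency separates the two most severe ranks.
        return 6 if not eats_independently else 5
    # Count impairments across the remaining hypothetical decision nodes.
    impairments = sum([
        decision_making > 0,
        short_term_memory > 0,
        procedural_memory > 0,
        makes_self_understood > 0,
    ])
    # Map the impairment count onto the milder ranks (0 = Intact).
    return min(impairments, 4)


def screens_positive(rank):
    # Per Morris and colleagues, a rank of 2 or higher indicates impairment.
    return rank >= 2
```

At the cut-off used in the included studies, a returned rank of 2 or higher would count as a positive screen.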

Mini-Mental Status Examination (MMSE). The Mini-Mental Status Examination is a brief measure of cognitive functioning and its change and was developed more than 30 years ago [136]. Although originally distributed free of charge, the MMSE has recently been subject to copyright restrictions [147]. The MMSE takes around five to 10 minutes to administer and is available in multiple languages. Its use as a cognitive test is widespread among researchers and specialists [148–150], though it is not very popular in primary care, because its administration time is considered too long [151]. The MMSE was developed from items selected from different neuropsychological batteries. Although it covers five cognitive subdomains, Orientation, Registration, Attention and Calculation, Recall, and Language [136], the MMSE is not an adequate instrument to identify early stages of dementia or distinguish different subtypes of dementia [152]. It does not assess executive functions, and there are only a few episodic and semantic memory or visuospatial tasks [153]. There are 11 items, with a maximum score of 30. For persons with at least eight years of education, the presence of suspected cognitive impairment or dementia is determined by a score below the cut-off value of 23/24 [136], with lower scores indicating increasing cognitive impairment [154]. Since 1975, numerous other cut-offs have been calculated from receiver operating characteristic (ROC) curve analyses of specific populations, together with adjustments for sociocultural variables (such as age, ethnicity, and education), which have been found to affect performance on the MMSE [155–157].

Six-Item Cognitive Impairment Test (6-CIT). The 6-CIT, originally referred to as the Six-Item Orientation-Memory-Concentration Test, was developed in 1983 by Katzman and colleagues [158] by shortening Blessed and colleagues' Mental Status Test [159]. It was designed as a screening test for dementia and is freely available. Because of its practicality, high acceptability, and decent psychometric properties [160], the 6-CIT has been used in research and in a broad range of clinical settings [161–165]. For use in primary care and hospital settings, the 6-CIT has been recommended as a cognitive screening tool by the Alzheimer's Society and the National Collaborating Centre for Mental Health (UK) [166, 167]. The 6-CIT takes less than 10 minutes to administer and involves three tests of temporal orientation, a short-term memory test, and two tests of attention [160]. It is scored out of 28; scores greater than 10 indicate cognitive impairment [168]. Because of its verbal method of administration, the 6-CIT can also be used with visually impaired patients [76]. The performance of the 6-CIT is influenced by age, education, and ethnicity [76, 160, 168] and thus needs adjustment when administered in diverse settings.

Time and Change Test (T&C). The T&C test, originally developed in 1998 by Inouye and colleagues, is a simple, standardized, performance-based test for the detection of dementia [126]. It takes less than five minutes to administer; it is highly acceptable to patients and may offer particular advantages in clinical and research settings where frequent examination of cognitive status is required [126, 169]. The T&C test incorporates supplemental cognitive domains such as calculation, conceptualization, and visuospatial ability [126]. It consists of a telling-time task and a making-change task. In the telling-time task, patients must tell the time from a clock face set at 11:10. Patients are allowed two tries within a 60-second period. If the patient fails to respond correctly, the task is terminated and recorded as an error. In the making-change task, patients are asked to give one dollar in change from a group of coins with smaller denominations. Patients are allowed two tries in 120 seconds. An incorrect response on either or both tasks indicates dementia. According to Inouye and colleagues, the T&C test is only minimally affected by education.

Diagnostic accuracy.

Diagnostic accuracy data for the inpatient setting were extracted from each study and are summarized, together with sensitivity, specificity, and statistical uncertainty intervals, in the forest plot presented in Fig 3 (Summary of diagnostic accuracy data). Positive predictive values (PPV) and negative predictive values (NPV) are also summarized in the same figure.
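For reference, the four measures reported throughout this section follow directly from the standard 2×2 table of index test results against the reference standard; a minimal sketch (with hypothetical example counts, not taken from any included study):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Compute the accuracy measures summarized in Fig 3 from 2x2 counts.

    tp/fp/fn/tn: true/false positives and negatives of the index test
    against the reference standard.
    """
    return {
        "sensitivity": tp / (tp + fn),  # proportion of true cases detected
        "specificity": tn / (tn + fp),  # proportion of non-cases ruled out
        "ppv": tp / (tp + fp),          # probability a positive screen is a true case
        "npv": tn / (tn + fn),          # probability a negative screen is truly negative
    }

# Hypothetical example counts, not taken from any included study:
m = diagnostic_metrics(tp=34, fp=9, fn=14, tn=73)
```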

Fig 3. Summary of diagnostic accuracy data.

(A) Abbreviations: CPS: Cognitive Performance Scale; MMSE: Mini-Mental Status Examination; 6-CIT: Six-Item Cognitive Impairment Test; T&C: Time & Change test; CDT: Clock Drawing Test. (B) All 95% CIs were calculated using the Wilson formula with continuity correction.
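The continuity-corrected Wilson interval mentioned in the figure note can be reproduced as follows; this is the standard Newcombe (1998) formulation, not code taken from the reviewed studies:

```python
from math import sqrt

def wilson_ci_cc(x, n, z=1.96):
    """95% Wilson score interval with continuity correction (Newcombe, 1998)
    for a proportion x/n, e.g. sensitivity = TP / (TP + FN)."""
    if n <= 0:
        raise ValueError("n must be positive")
    p = x / n
    denom = 2 * (n + z ** 2)
    lower = (2 * n * p + z ** 2 - 1 -
             z * sqrt(z ** 2 - 2 - 1 / n + 4 * p * (n * (1 - p) + 1))) / denom
    upper = (2 * n * p + z ** 2 + 1 +
             z * sqrt(z ** 2 + 2 - 1 / n + 4 * p * (n * (1 - p) - 1))) / denom
    # The correction can push the bounds past [0, 1]; clip, and handle the
    # degenerate endpoints exactly.
    lower = 0.0 if p == 0 else max(0.0, lower)
    upper = 1.0 if p == 1 else min(1.0, upper)
    return lower, upper
```

For example, a reported sensitivity of 34/48 would carry an interval of roughly 0.56 to 0.83 under this formula, which is why the small samples in the included studies yield such wide bands in the forest plot.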

The study by Travers and colleagues applied the CPS and MMSE. For the CPS, at a threshold of ≥2, a sensitivity of 0.68 and a specificity of 0.89 were reported [125]. Based on their findings, the authors concluded that the CPS could substitute for other widely used instruments in screening for dementia in older hospital inpatients. However, at present, the CPS can only be scored following administration of the complete interRAI Acute Care assessment. For the accuracy of the MMSE, which was evaluated as a comparator to the CPS, Travers and colleagues reported a sensitivity of 0.75 and a specificity of 0.73 at the common cut-off value of <24 [125]. The diagnosis according to DSM-IV was established with access to the MMSE results; therefore, overestimation is a legitimate concern.

For the accuracy of the 6-CIT, Tuijl and colleagues reported the largest number of correct positive predictions and the lowest number of false-positive predictions at a cut-off value of ≥11. Sensitivity and specificity were 0.71 and 0.98, respectively [124]. Based on its performance, equal to that of the MMSE, but greater practicality, the 6-CIT appears preferable to the MMSE for screening for cognitive impairment in older patients in a general hospital setting. However, the QUADAS-2 assessment revealed significant methodological limitations that may have led to overestimation of the accuracy of the 6-CIT.

The second study evaluating the diagnostic accuracy of the CPS, using the same common cut-off value of ≥2, reported a sensitivity of 0.56 and a specificity of 0.93 [122]. Compared with the rather positive conclusion of Travers and colleagues, Büla and colleagues' concluding statement remained more critical toward implementing the CPS in the hospital setting, mainly because their analyses showed that the CPS tends to underestimate cognitive impairment in older patients and overestimate it in dependent patients with depressive symptoms and comorbidities, both of whom are likely to be found in the hospital setting. Methodological limitations arose in the domains of patient selection, index test, and reference standard. Due to inappropriate exclusions (privately insured patients and surgical patients), spectrum bias may have led to overestimation; thus, the accuracy data should be interpreted with caution. Last, visual assessment of the forest plot showed high levels of heterogeneity, with little overlap between the sensitivity confidence intervals of the two CPS studies [122, 125].

For the T&C, which was evaluated in two studies, Inouye and colleagues reported a sensitivity of 0.86 and a specificity of 0.71 [126]. A positive result on either of the two components was classified as a positive screen for dementia. According to the authors, the T&C was well tolerated and acceptable to nearly all participants across educational levels and diverse cultural backgrounds. Considering its relatively high sensitivity and negative predictive value (0.97), the T&C is advocated as a brief screener in high-risk settings for sequential use in combination with further clinical assessments to evaluate possible positive results. The methodological assessment showed no risk of bias or applicability concerns except in the index test domain, as the test was not administered early during hospitalization.

The second study aiming to validate the T&C as a screening tool reported unusually divergent diagnostic accuracy values. For the same cut-off, Nair and colleagues reported a sensitivity of 0.44 and a specificity of 0.91 in their Australian study [121]. As evidenced by the high participation rate, the T&C test once again appeared readily acceptable but failed as a sensitive screening tool. As a possible explanation, the authors argued that their country-specific adaptation of the making-change task, namely, the use of Australian coins, may have altered the task's complexity and affected its sensitivity. The relatively small sample size (N = 103) accounted for the imprecise estimates and large confidence intervals, especially with regard to sensitivity. Methodological limitations emerged in three of four domains. Only the patient selection domain was not rated as high risk of bias, but it was considered unclear because it was not mentioned whether a consecutive or random sample was enrolled. Not surprisingly, the forest plot presented diverging confidence intervals without any overlap. For the second instrument evaluated, the MMSE, Nair and colleagues reported a sensitivity of 0.88 and a specificity of 0.94 [121]. Based on its superior performance and usefulness, the MMSE appears preferable to the T&C test for screening for dementia in older general-hospital inpatients. However, in addition to the above-mentioned methodological limitations, another potential problem with this study was that the DSM-IV diagnoses and the MMSE scores were obtained by the same interviewer. This possible source of confounding may have led to an overestimation of the performance of the MMSE relative to the T&C test. Compared with the MMSE accuracy data presented by Travers and colleagues, visual assessment of the forest plot showed, at least for the sensitivity confidence intervals, a good degree of overlap at a cut-off value of <24.

Last, for the CDT, Death and colleagues reported a sensitivity of 0.77 and a specificity of 0.87 [123]. Because of its characteristics, the test is proposed as a simple and rapid tool for use by admitting junior staff to highlight possible dementia and to alert them to the need for further testing. The QUADAS-2 assessment revealed methodological limitations in the flow and timing domain that may have led to over- or underestimation of the accuracy of the CDT. In particular, because patients were reviewed against DSM-III diagnostic criteria only if a discrepancy occurred between the CDT and MMSE results, the accuracy data should be interpreted with caution. Furthermore, the reference standard domain was considered to be at unclear risk because it was not stated whether the reference standard results were interpreted with or without knowledge of the index test results. Finally, the comparative visual assessment revealed rather large confidence intervals, which may originate from the relatively small study sample (N = 117).

Because of the insufficient number of included studies, no meta-analysis of diagnostic test accuracy, investigation of heterogeneity, or sensitivity analysis was conducted.


Summary of main results

The aim of this review was to identify adequate instruments for screening for dementia and MCI in unselected, elective hospital inpatients, with a restriction to high-quality validation studies to minimize possible biases due to methodological shortcomings and differences in reporting. Accordingly, this review applied a well-constructed search strategy and included quality assessments. The overall number of studies included in this review is small: only six studies evaluating five unique screening instruments were found. Four instruments, the Cognitive Performance Scale (CPS), the Mini-Mental Status Examination (MMSE), the Time & Change test (T&C), and the Clock Drawing Test (CDT), were investigated as screening tools for detecting dementia; the CPS and the 6-Item Cognitive Impairment Test were investigated as screening tools for unspecified cognitive impairment. Overall, despite a restrictive combination of inclusion criteria, a considerable number of included studies were rated as having a high risk of bias or applicability concerns, particularly in the flow and timing and index test domains. In addition, in six cases, the risk of bias was rated as unclear. This stemmed directly from limitations in reporting, as older articles seemingly adhered less to recent guidelines. Consequently, the scarcity of information, methodological limitations, and heterogeneity of study characteristics did not allow formal meta-analyses of study results or further analysis.

Strengths and weaknesses of the review

The strengths of this review include the use of a multi-concept search strategy to identify a wide spectrum of potential articles, which reduces the risk of publication bias. The primary search concept used terms from four domains and was complemented by a more rigorous second concept, which used instrument names from the Dementia Practice Guidelines as key search terms. While the latter concept ensured coverage of widely used instruments, primarily in the primary care setting, the former, less targeted but more sensitive, search approach may have identified studies that would otherwise have been overlooked. To ensure quality and comprehensiveness, the search strategy was peer-reviewed using the PRESS 2015 Guideline Evidence-Based Checklist.

Importantly, this review also included a detailed quality assessment that provided crucial information for the interpretation of the reported studies. For quality assessment, the recommended Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) and the Standards for the Reporting of Diagnostic Accuracy studies (STARD 2015) checklist were applied. This review itself reports according to the PRISMA Statement for Preferred Reporting Items for a Systematic Review and Meta-Analysis of Diagnostic Test Accuracy Studies (PRISMA-DTA statement 2018).

Finally, in contrast to recent reviews, this systematic review excluded case-control studies, which are prone to overestimating diagnostic accuracy by including phenotypic extremes; that is, two extreme populations are compared rather than typical healthy and diseased populations. In addition, the focus on rather naturalistic, cross-sectional validation studies provided an applicable choice of instruments for systematic screening during routine hospital care.

This review has several limitations. Formal meta-analyses and additional analyses were precluded by the small number of reported studies. Initially, for the meta-analysis of sensitivity and specificity, it was planned to use the bivariate random-effects model approach (if studies used the same index test at a common threshold) or the hierarchical summary ROC (HSROC) method (if multiple thresholds were reported) [170, 171]. For the investigation of heterogeneity, in addition to the visual examination of the forest plot, meta-regression was planned by fitting HSROC models with pre-specified covariates. Consequently, drawing conclusions from the reported studies regarding the diagnostic accuracy of the included screening instruments was limited.
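For illustration only, a much simpler precursor to the planned bivariate/HSROC models is univariate DerSimonian-Laird random-effects pooling of logit-transformed proportions (e.g., per-study sensitivities). The sketch below shows that simplified approach; it ignores the correlation between sensitivity and specificity that the bivariate model would capture, and the function name and 0.5 continuity correction are illustrative choices:

```python
from math import exp, log

def pool_logit_proportions(events, totals):
    """Univariate DerSimonian-Laird random-effects pooling of proportions
    (e.g. per-study sensitivities) on the logit scale. A simplified
    stand-in for the planned bivariate model."""
    # Per-study logits and approximate variances, with a 0.5 continuity
    # correction to avoid division by zero at 0% or 100%.
    y = [log((e + 0.5) / (t - e + 0.5)) for e, t in zip(events, totals)]
    v = [1 / (e + 0.5) + 1 / (t - e + 0.5) for e, t in zip(events, totals)]
    w = [1 / vi for vi in v]
    # Fixed-effect estimate and Cochran's Q statistic.
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    # DerSimonian-Laird between-study variance tau^2 (truncated at zero).
    k = len(y)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights, pooled logit, back-transformed proportion.
    w_re = [1 / (vi + tau2) for vi in v]
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return exp(pooled) / (1 + exp(pooled))
```

With only one or two studies per instrument and threshold, as in this review, such a pooled estimate would be meaningless, which is why no meta-analysis was attempted.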

Furthermore, the inclusion of the MMSE as a criterion standard could be criticized. The choice to accept any screening instrument as a criterion standard can be justified for two reasons: (1) In cross-sectional studies embedded in daily clinical routine in particular, confirmation of the index test with a more adequate reference standard (i.e., clinical assessment or neuropsychological testing with explicit diagnostic criteria, with or without expert consensus) is usually logistically constrained and thus often deliberately avoided. (2) The diagnostic criteria for MCI used in this review are relatively recent. Therefore, with the aim of increasing the number of potentially eligible studies, this review also included diagnostic accuracy studies comparing their index test with the MMSE as a criterion standard. The choice of the MMSE is justified by its widespread use in research and popularity in the clinical setting. However, even though the MMSE is widely used, it has imperfect specificity and sensitivity and a very limited ability to differentiate between MCI patients and healthy controls [154].

An additional limitation originated from the low methodological quality of some included studies. Instrument accuracy was potentially overestimated (due to selection bias, time lag bias, information bias, and study result elimination bias); thus, results should be interpreted with caution. The exclusion of informant-rated questionnaires and web-based and telephonic screening tools can be seen as another shortcoming: due to this restriction, some promising screening instruments were not evaluated in this review. Finally, the exclusion of non-English-language, gray literature, and unpublished studies also introduced potential for bias.

Applicability of findings to the review question

In 2013, Jackson and colleagues conducted a very similar review and meta-analysis [86]. Their intent was to determine which of the instruments advocated for dementia screening had been validated in older hospital inpatients. In the end, in most of their included studies, the sample population was either mixed with outpatients [172, 173], geriatric [174–176], or admitted through the emergency department [177, 178]. Jackson and colleagues reported the largest evidence base (with more than one report) for the use of the Abbreviated Mental Test Score (AMTS) and stated a clear need for more validation studies to best inform screening for dementia in hospital inpatients. In 2018, Carpenter and colleagues performed a systematic review and meta-analysis of the diagnostic accuracy of brief screening instruments for dementia in geriatric ED patients [82]. The AMT-4, a shorter, four-item version of the AMTS, was found to be the most accurate ED screening instrument for ruling in dementia.

The present review found only a small number of validated instruments and was not able to recommend a single best instrument. Concerning the AMTS, this review found no evidence for screening for MCI or dementia in unselected, elective hospital inpatients. Although the findings of this review do not advocate a specific instrument in terms of best diagnostic accuracy, the results do suggest that valuable instruments for dementia screening exist, as the majority of the included studies report satisfactory sensitivity and negative predictive values, both of which need to be maximized so that relatively few true cases are missed.

The lack of evidence is surprising because, despite the wider public interest in dementia and the recent debates about targeted screening initiatives, this review did not find a single eligible study published after 2013. Although it was not possible to estimate pooled operating characteristics, the included description of instrument characteristics, the descriptive analysis of performance measures, and the critical evaluation of the reporting studies may help clinicians choose the best screening instrument for their purpose.

Implications for clinical practice

At present, there is insufficient evidence to recommend for or against the use of a specific instrument for screening for dementia or MCI in older hospital inpatients. Although some instruments performed comparatively well and were advocated by the individual study authors, based on the limited information currently available, a universal recommendation for routine use would be of questionable quality and little clinical utility. In the end, whatever test is used, evidence of cognitive impairment on a single test must be interpreted in light of contextual and other information. The main caveat is that simple cognitive tests used in isolation are not sufficiently reliable. In addition, even in the absence of dementia, inpatients may perform poorly for other reasons (e.g., medication, pain, language barriers, and cultural issues) and/or because of competing disorders (e.g., delirium, depression, diabetes) [179]. Delirium is the most common such cause; it affects at least one in 10 hospital inpatients [79, 180, 181]. For this reason, sequential use in combination with detailed expert assessments is highly recommended before establishing a diagnosis and following care pathways.

If screening is chosen, timing matters. In general, the sooner MCI or dementia is identified during a hospital stay, the sooner appropriate interventions can be tailored to the individual's needs (e.g., choice of anesthesia, involvement of the primary caregiver, medication management) [94]. Under the assumption that elective inpatients are generally more stable and not in need of immediate care, clinicians should consider incorporating screening into the overall hospital admission assessment and following up with further evaluations both during and after hospitalization. In many cases, subsequent detailed examinations may only be realistic after discharge. As long as clinicians are accustomed to managing possible confounding factors and are trained in the use of cognitive tests, such tests do have a role in screening for dementia or MCI in hospital inpatients [90]. However, the time needed to perform assessments of cognitive function means an increase in workload. Although the cost of initial screening can be kept quite low, the cost of a subsequent diagnostic workup will vary, depending on the specific diagnostic pathway. Finally, it must be said that, at present, evidence that screening for dementia is effective is lacking [41, 182].

Implications for research

At present, there is a clear need for further validation studies of dementia or MCI screening instruments for older hospital inpatients, rather than for the development of new instruments. Future studies should incorporate strong methodological study designs to minimize the risks of bias, but they also need to report in sufficient detail so that the trustworthiness and applicability of the study findings can be judged. Conducting a meta-analysis might be a valuable objective for future research, provided that a sufficient number of validation studies is available. In addition, on the basis of well-validated cognitive tests, distinct recommendations for clinicians on how to systematically identify patients with possible dementia should be established. Ultimately, this will also require additional evidence regarding the cost-effectiveness of screening for dementia or MCI. A corresponding analysis should measure the benefits and costs of screening in terms of the value of timely and correct diagnosis and the application of adequate medical treatments and care management programs.

Supporting information

S2 Appendix. Assessment of methodological quality using QUADAS-2.


S3 Appendix. Standards for the Reporting of Diagnostic Accuracy studies checklist.



The authors would like to thank Mattia Gianinazzi (MA), analyst at Biogen, for peer-reviewing the search strategy (PRESS 2015 Guideline), independently screening the search results (title, abstract, and full text), and assessing the methodological quality of each included study (QUADAS-2, STARD 2015). The authors would also like to thank HP Switzerland for the provision of Windows devices to support efficient literature search and screening.


  1. 1. Masuhr K, Neumann M. Neurologie. 7th. ed: Thieme; 2013.
  2. 2. Hywel T. Understanding Behaviour in Dementia that Challenges. Nursing Older People (through 2013). 2011;23(9):8.
  3. 3. Rizzi L, Rosset I, Roriz-Cruz M. Global Epidemiology of Dementia: Alzheimer's and Vascular Types. BioMed Research International. 2014;2014:1–8.
  4. 4. Grinberg LT, Nitrini R, Suemoto CK, Lucena Ferretti-Rebustini RE, Leite RE, Farfel JM, et al. Prevalence of dementia subtypes in a developing country: a clinicopathological study. Clinics (Sao Paulo, Brazil). 2013;68(8):1140–5.
  5. 5. Brunnstrom H, Gustafson L, Passant U, Englund E. Prevalence of dementia subtypes: a 30-year retrospective survey of neuropathological reports. Archives of gerontology and geriatrics. 2009;49(1):146–9. pmid:18692255
  6. 6. Goodman R, Lochner K, Thambisetty M, Wingo T, Posner S, Ling S. Prevalence of dementia subtypes in United States Medicare fee-for-service beneficiaries, 2011–2013. Alzheimer's & Dementia: The Journal of the Alzheimer's Association. 2017;13(1):28–37.
  7. 7. Dubois B. The Emergence of a New Conceptual Framework for Alzheimer's Disease. Journal of Alzheimer's disease. 2018;62(3):1059–66. pmid:29036825
  8. 8. Sperling R, Aisen P, Beckett L, Bennett D, Craft S, Fagan A, et al. Toward defining the preclinical stages of Alzheimer's disease: Recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimer's & Dementia. 2011;7(3):280–92.
  9. 9. Aisen P, Cummings J, Jack C, Morris J, Sperling R, Frölich L, et al. On the path to 2025: understanding the Alzheimer’s disease continuum. Alzheimer's research & therapy. 2017;9(1):60.
  10. 10. Prince M, Wimo A, Guerchet M, Ali G, Wu Y, Prina M. World Alzheimer Report 2015: The Global Impact of Dementia—An Analysis of Prevalence, Incidence, Cost and Trends. Alzheimer's Disease International (ADI); 2015 [cited 2018 November 10]. Available from:
  11. 11. Swiss Alzheimer's Society. Menschen mit Demenz in der Schweiz: Zahlen und Prognosen. Swiss Alzheimer's Society; 2018 [cited 2018 August 10]. Available from:
  12. 12. Swiss Federal Statistical Office. Population projections for Switzerland 2015–2045 [Internet]. Swiss Federal Statistical Office; 2016 [cited 2018 September 5]. Available from:
  13. 13. Winblad B, Amouyel P, Andrieu S, Ballard C, Brayne C, Brodaty H, et al. Defeating Alzheimer's disease and other dementias: a priority for European science and society. The Lancet Neurology. 2016;15(5):455–532. pmid:26987701
  14. 14. Ecoplan. Kosten der Demenz in der Schweiz. Swiss Alzheimer's Society; 2010 [cited 2018 July 20]. Available from:
  15. 15. Ecoplan. Kosten der Demenz in der Schweiz: Update. Swiss Alzheimer's Society; 2009 [cited 2018 July 20]. Available from:
  16. 16. Brown L, Hansnata E, La H. Economic Cost of Dementia in Australia 2016–2056. Alzheimer’s Australia, University of Canberra; 2017 [cited 2018 September 11]. Available from:
  17. 17. Wimo A, Jönsson L, Bond J, Prince M, Winblad B. The worldwide economic impact of dementia 2010. Alzheimer's & Dementia. 2013;9(1):1–11.
  18. 18. Department of Health and Social Care and Prime Minister's Office UK. G8 dementia summit declaration [Internet]. Department of Health and Social Care and Prime Minister's Office UK; 2010 [cited 2018 November 9]. Available from:
  19. 19. Alzheimer Cooperative Valuation in Europe. The European Joint Action on Dementia: Synthesis Report [Internet]. Alzheimer Cooperative Valuation in Europe; 2013 [cited 2018 August 27]. Available from:
  20. 20. International Longevity Centre—UK. The European Dementia Research Agenda. International Longevity Centre—UK; 2011 [cited 2018 May 19]. Available from:
  21. 21. Europe Alzheimer. National Dementia Strategies [Internet]. Alzheimer Europe; 2017 [cited 2018 November 22]. Available from:
  22. 22. Federal Department of Home Affairs. National Dementia Strategy 2014–2019. Federal Office of Public Health and Swiss Conference of the Cantonal Ministers of Public Health; 2018 [cited 2018 October 25]. Available from:
  23. 23. Department of Health and Social Care UK. Living well with dementia: A national dementia strategy [Internet]. Department of Health; 2009 [cited 2018 December 2]. Available from:
  24. 24. Scottish Government. Scotland's national dementia strategy 2017–2020. The Scottish Government Edinburgh; 2010 [cited 2018 May 2]. Available from:
  25. 25. Lee S. Dementia Strategy Korea. International journal of geriatric psychiatry. 2010;25(9):931–2. pmid:20803725
  26. 26. Brooker D, Fontaine JL, Evans S, Bray J, Saad K. Public health guidance to facilitate timely diagnosis of dementia: Alzheimer's Cooperative Valuation in Europe recommendations. International journal of geriatric psychiatry. 2014;29(7):682–93. pmid:24458456
  27. 27. Prince M, Bryce R, Ferri C. World Alzheimer Report 2011: The benefits of early diagnosis and intervention. Alzheimer's Disease International; 2011 [cited 2018 August 29].
  28. 28. Phillips J, Pond D, Goode S. Timely Diagnosis of Dementia: Can we do better. Alzheimer's Australia; 2011 [cited 2018 October 11]. Available from:
  29. 29. Koch T, Iliffe S. Rapid appraisal of barriers to the diagnosis and management of patients with dementia in primary care: a systematic review. BMC family practice. 2010;11(1):52.
30. Knopman D, Donohue JA, Gutterman EM. Patterns of Care in the Early Stages of Alzheimer's Disease: Impediments to Timely Diagnosis. Journal of the American Geriatrics Society. 2000;48(3):300–4. pmid:10733057
31. Boustani M, Peterson B, Hanson L, Harris R, Lohr KN. Screening for dementia in primary care: a summary of the evidence for the US Preventive Services Task Force. Annals of internal medicine. 2003;138(11):927–37. pmid:12779304
32. Petersen RC, Stevens JC, Ganguli M, Tangalos EG, Cummings J, DeKosky S. Practice parameter: early detection of dementia: mild cognitive impairment (an evidence-based review): report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology. 2001;56(9):1133–42. pmid:11342677
33. Eccles M, Clarke J, Livingston M, Freemantle N, Mason J. North of England evidence based guidelines development project: guideline for the primary care management of dementia. BMJ: British Medical Journal. 1998;317(7161):802. pmid:9740574
34. Patterson C, Grek A, Gauthier S, Bergman H, Cohen C, Feightner J, et al. The recognition, assessment and management of dementing disorders: conclusions from the Canadian Consensus Conference on Dementia. Canadian Journal of Neurological Sciences. 2001;28(S1):S3–S16.
35. Ngo J, Holroyd-Leduc JM. Systematic review of recent dementia practice guidelines. Age and ageing. 2014;44(1):25–33. pmid:25341676
36. Ashford JW, Borson S, O'Hara R, Dash P, Frank L, Robert P, et al. Should older adults be screened for dementia? Alzheimer's & Dementia. 2006;2(2):76–85.
37. Dementia Study Group of the Italian Neurological Society. Guidelines for the diagnosis of dementia and Alzheimer's disease. Neurological Sciences. 2000;21(4):187–94. pmid:11214656
38. Moyer VA. Screening for cognitive impairment in older adults: US Preventive Services Task Force recommendation statement. Annals of internal medicine. 2014;160(11):791–7. pmid:24663815
39. Ashford JW, Borson S, O'Hara R, Dash P, Frank L, Robert P, et al. Should older adults be screened for dementia? It is important to screen for evidence of dementia! Elsevier; 2007.
40. Dubois B, Padovani A, Scheltens P, Rossi A, Dell'Agnello G. Timely diagnosis for Alzheimer's disease: a literature review on benefits and challenges. Journal of Alzheimer's disease. 2016;49(3):617–31. pmid:26484931
41. Robinson L, Tang E, Taylor J-P. Dementia: timely diagnosis and early intervention. BMJ. 2015;350:h3029.
42. Arai Y, Arai A, Mizuno Y. The national dementia strategy in Japan. International journal of geriatric psychiatry. 2010;25(9):896–9. pmid:20803715
43. Raina P, Santaguida P, Ismaila A, Patterson C, Cowan D, Levine M, et al. Effectiveness of cholinesterase inhibitors and memantine for treating dementia: evidence review for a clinical practice guideline. Annals of internal medicine. 2008;148(5):379–97. pmid:18316756
44. Birks J. Cholinesterase inhibitors for Alzheimer's disease. The Cochrane database of systematic reviews; 2006 [cited 2018 June 22]. Available from:
45. Spijker A, Vernooij‐Dassen M, Vasse E, Adang E, Wollersheim H, Grol R, et al. Effectiveness of nonpharmacological interventions in delaying the institutionalization of patients with dementia: a meta‐analysis. Journal of the American Geriatrics Society. 2008;56(6):1116–28. pmid:18410323
46. Rolinski M, Fox C, Maidment I, McShane R. Cholinesterase inhibitors for dementia with Lewy bodies, Parkinson's disease dementia and cognitive impairment in Parkinson's disease. 2012 [cited 2018 July 2]. Available from: pmid:22419314
47. Wilkinson D. A review of the effects of memantine on clinical progression in Alzheimer's disease. International journal of geriatric psychiatry. 2012;27(8):769–76. pmid:21964871
48. McShane R, Areosa A, Minakaran N. Memantine for dementia. The Cochrane database of systematic reviews; 2006 [cited 2018 September 19];(2). Available from:
49. Pimouguet C, Lavaud T, Dartigues J, Helmer C. Dementia case management effectiveness on health care costs and resource utilization: a systematic review of randomized controlled trials. The journal of nutrition, health & aging. 2010;14(8):669–76.
50. Livingston G, Barber J, Rapaport P, Knapp M, Griffin M, King D, et al. Long-term clinical and cost-effectiveness of psychological intervention for family carers of people with dementia: a single-blind, randomised, controlled trial. The Lancet Psychiatry. 2014;1(7):539–48. pmid:26361313
51. Clare L, Linden DEJ, Woods RT, Whitaker R, Evans SJ, Parkinson CH, et al. Goal-Oriented Cognitive Rehabilitation for People With Early-Stage Alzheimer Disease: A Single-Blind Randomized Controlled Trial of Clinical Efficacy. The American Journal of Geriatric Psychiatry. 2010;18(10):928–39. pmid:20808145
52. Manthorpe J, Samsi K, Campbell S, Abley C, Keady J, Bond J, et al. From forgetfulness to dementia: clinical and commissioning implications of diagnostic experiences. British Journal of General Practice. 2013;63(606):e69–e75. pmid:23336476
53. Bunn F, Goodman C, Sworn K, Rait G, Brayne C, Robinson L, et al. Psychosocial factors that shape patient and carer experiences of dementia diagnosis and treatment: a systematic review of qualitative studies. PLoS medicine. 2012;9(10):e1001331. pmid:23118618
54. Wimo A, Religa D, Spångberg K, Edlund AK, Winblad B, Eriksdotter M. Costs of diagnosing dementia: results from SveDem, the Swedish Dementia Registry. International journal of geriatric psychiatry. 2013;28(10):1039–44. pmid:23440702
55. Gaugler JE, Ascher-Svanum H, Roth DL, Fafowora T, Siderowf A, Beach TG. Characteristics of patients misdiagnosed with Alzheimer's disease and their medication use: an analysis of the NACC-UDS database. BMC geriatrics. 2013;13(1):137.
56. Borson S, Frank L, Bayley PJ, Boustani M, Dean M, Lin P-J, et al. Improving dementia care: the role of screening and detection of cognitive impairment. Alzheimer's & Dementia. 2013;9(2):151–9.
57. Calzà L, Beltrami D, Gagliardi G, Ghidoni E, Marcello N, Rossini-Favretti R, et al. Should we screen for cognitive decline and dementia? Maturitas. 2015;82(1):28–35. pmid:26152814
58. Kmietowicz Z. Cameron launches challenge to end "national crisis" of poor dementia care. BMJ. 2012;344.
59. Department of Health (UK). Using the Commissioning for Quality and Innovation (CQUIN) payment framework. Department of Health; 2012 [cited 2018 September 18]. Available from:
60. Bayley PJ, Kong JY, Mendiondo M, Lazzeroni LC, Borson S, Buschke H, et al. Findings from the national memory screening day program. Journal of the American Geriatrics Society. 2015;63(2):309–14. pmid:25643739
61. Cordell CB, Borson S, Boustani M, Chodosh J, Reuben D, Verghese J, et al. Alzheimer's Association recommendations for operationalizing the detection of cognitive impairment during the Medicare Annual Wellness Visit in a primary care setting. Alzheimer's & dementia: the journal of the Alzheimer's Association. 2013;9(2):141–50.
62. Arevalo-Rodriguez I, Pedraza OL, Rodríguez A, Sánchez E, Gich I, Solà I, et al. Alzheimer's disease dementia guidelines for diagnostic testing: a systematic review. American Journal of Alzheimer's Disease & Other Dementias. 2013;28(2):111–9.
63. Laske C, Sohrabi HR, Frost SM, López-de-Ipiña K, Garrard P, Buscema M, et al. Innovative diagnostic tools for early detection of Alzheimer's disease. Alzheimer's & Dementia. 2015;11(5):561–78.
64. Lin J, O'Connor E, Rossom R, Perdue L, Burda B, Thompson M, et al. Screening for Cognitive Impairment in Older Adults: An Evidence Update for the U.S. Preventive Services Task Force. Agency for Healthcare Research and Quality (US); 2013 [cited 2018 November 22]. Available from:
65. Brouwers M, Kho M, Browman G, Burgers J, Cluzeau F, Feder G, et al., for the AGREE Next Steps Consortium. AGREE II: Advancing guideline development, reporting and evaluation in healthcare. Canadian Medical Association Journal. 2010;182(18):E839–E42.
66. Alzheimer-Demenz. Konsensus 2012 zur Diagnostik und Therapie von Demenzkranken in der Schweiz [2012 consensus on the diagnosis and therapy of dementia patients in Switzerland]. Praxis; 2012 [cited 2018 August 24]. Available from:
67. Chertkow H. Introduction: the third Canadian consensus conference on the diagnosis and treatment of dementia, 2006. Alzheimer's & dementia: the journal of the Alzheimer's Association. 2007;3(4):262–5.
68. National Collaborating Centre for Mental Health. Dementia: A NICE-SCIE Guideline on Supporting People With Dementia and Their Carers in Health and Social Care. The British Psychological Society & The Royal College of Psychiatrists; 2007 [cited 2018 October 26]. Available from:
69. Feldman HH, Jacova C, Robillard A, Garcia A, Chow T, Borrie M, et al. Diagnosis and treatment of dementia: 2. Diagnosis. Canadian Medical Association Journal. 2008;178(7):825–36. pmid:18362376
70. Gauthier S, Patterson C, Chertkow H, Gordon M, Herrmann N, Rockwood K, et al. 4th Canadian consensus conference on the diagnosis and treatment of dementia. Canadian Journal of Neurological Sciences. 2012;39(S5):S1–S8.
71. Hogan DB, Bailey P, Black S, Carswell A, Chertkow H, Clarke B, et al. Diagnosis and treatment of dementia: 4. Approach to management of mild to moderate dementia. Canadian Medical Association Journal. 2008;179(8):787–93. pmid:18838454
72. Hort J, O'Brien J, Gainotti G, Pirttila T, Popescu B, Rektorova I, et al. EFNS guidelines for the diagnosis and management of Alzheimer's disease. European Journal of Neurology. 2010;17(10):1236–48. pmid:20831773
73. Steinberg E, Greenfield S, Wolman D, Mancher M, Graham R. Clinical practice guidelines we can trust. National Academies Press; 2011 [cited 2018 July 7]. Available from:
74. Sorbi S, Hort J, Erkinjuntti T, Fladby T, Gainotti G, Gurvit H, et al. EFNS‐ENS Guidelines on the diagnosis and management of disorders associated with dementia. European Journal of Neurology. 2012;19(9):1159–79. pmid:22891773
75. American Geriatrics Society. A Guide to Dementia Diagnosis and Treatment. American Geriatrics Society; 2011 [cited 2018 November 12]. Available from:
76. Yokomizo JE, Simon SS, Bottino CM. Cognitive screening for dementia in primary care: a systematic review. International psychogeriatrics. 2014;26(11):1783–804. pmid:25023857
77. Razak MA, Ahmad N, Chan Y, Kasim NM, Yusof M, Ghani MA, et al. Validity of screening tools for dementia and mild cognitive impairment among the elderly in primary health care: a systematic review. Public health. 2019;169:84–92. pmid:30826688
78. Boustani M, Baker MS, Campbell N, Munger S, Hui SL, Castelluccio P, et al. Impact and recognition of cognitive impairment among hospitalized elders. Journal of hospital medicine: an official publication of the Society of Hospital Medicine. 2010;5(2):69–75.
79. Mukadam N, Sampson EL. A systematic review of the prevalence, associations and outcomes of dementia in older general hospital inpatients. International psychogeriatrics. 2011;23(3):344–55. pmid:20716393
80. Sampson EL, Blanchard MR, Jones L, Tookman A, King M. Dementia in the acute hospital: prospective cohort study of prevalence and mortality. The British Journal of Psychiatry. 2009;195(1):61–6. pmid:19567898
81. Carpenter CR, Bassett ER, Fischer GM, Shirshekan J, Galvin JE, Morris JC. Four sensitive screening tools to detect cognitive dysfunction in geriatric emergency department patients: brief Alzheimer's Screen, Short Blessed Test, Ottawa 3DY, and the caregiver-completed AD8. Academic Emergency Medicine. 2011;18(4):374–84. pmid:21496140
82. Carpenter CR, Banerjee J, Keyes D, Eagles D, Schnitker L, Barbic D, et al. Accuracy of Dementia Screening Instruments in Emergency Medicine: A Diagnostic Meta‐analysis. Academic Emergency Medicine. 2019;26(2):226–45. pmid:30222232
83. de Koning I, van Kooten F, Dippel DW, van Harskamp F, Grobbee DE, Kluft C, et al. The CAMCOG: a useful screening instrument for dementia in stroke patients. Stroke. 1998;29(10):2080–6. pmid:9756586
84. Swain DG, O'Brien AG, Nightingale PG. Cognitive assessment in elderly patients admitted to hospital: the relationship between the shortened version of the Abbreviated Mental Test and the Abbreviated Mental Test and Mini-Mental State Examination. Clinical rehabilitation. 2000;14(6):608–10. pmid:11128735
85. Appels BA, Scherder E. The diagnostic accuracy of dementia-screening instruments with an administration time of 10 to 45 minutes for use in secondary care: a systematic review. American journal of Alzheimer's disease and other dementias. 2010;25(4):301–16. pmid:20539025
86. Jackson TA, Naqvi SH, Sheehan B. Screening for dementia in general hospital inpatients: a systematic review and meta-analysis of available instruments. Age and ageing. 2013;42(6):689–95. pmid:24100618
87. Swiss Federal Statistical Office. Swiss Hospital Medical Statistics Tables 2017 [Internet]. Swiss Federal Statistical Office; 2017 [cited 2018 December 27]. Available from:
88. Schuur JD, Venkatesh AK. The growing role of emergency departments in hospital admissions. New England Journal of Medicine. 2012;367(5):391–3. pmid:22784039
89. Cowling TE, Soljak MA, Bell D, Majeed A. Emergency hospital admissions via accident and emergency departments in England: time trend, conceptual framework and policy implications. Journal of the Royal Society of Medicine. 2014;107(11):432–8. pmid:25377736
90. Shenkin SD, Russ TC, Ryan TM, MacLullich AM. Screening for dementia and other causes of cognitive impairment in general hospital in-patients. Age and ageing. 2014;43:166–68. pmid:24280009
91. Gray SL, Anderson ML, Dublin S, Hanlon JT, Hubbard R, Walker RL, et al. Cumulative Use of Strong Anticholinergic Medications and Incident Dementia: 746. Pharmacoepidemiology and Drug Safety. 2015;24:426.
92. Campbell N, Boustani M, Limbil T, Ott C, Fox C, Maidment I, et al. The cognitive impact of anticholinergics: a clinical review. Clinical interventions in aging. 2009;4:225. pmid:19554093
93. Gieche H. Mit der Nebendiagnose Demenz im Akutspital: Den Spitalaufenthalt optimal vorbereiten [With the secondary diagnosis of dementia in the acute hospital: preparing the hospital stay optimally]. Swiss Alzheimer's Association; 2015 [cited 2018 October 2]. Available from:
94. Travers C, Gray L, Martin-Khan M, Hubbard R. Evidence for the safety and quality issues associated with the care of patients with cognitive impairment in acute care settings: a rapid review. Australian Commission on Safety and Quality in Health Care (ACSQHC); 2013 [cited 2018 August 16]. Available from:
95. Maslow M, Mezey M. Adverse health events in hospitalized patients with dementia. The journals of gerontology Series A, Biological sciences and medical sciences. 2003;58(1):76–81.
96. Lorentz WJ, Scanlan JM, Borson S. Brief Screening Tests for Dementia. The Canadian Journal of Psychiatry. 2002;47(8):723–33. pmid:12420650
97. Yang L, Yan J, Jin X, Jin Y, Yu W, Xu S, et al. Screening for Dementia in Older Adults: Comparison of Mini-Mental State Examination, Mini-Cog, Clock Drawing Test and AD8. PLoS One. 2016;11(12).
98. Cullen B, O'Neill B, Evans JJ, Coen RF, Lawlor BA. A review of screening tests for cognitive impairment. Journal of Neurology, Neurosurgery and Psychiatry. 2007;78(8):790–9.
99. O'Brien JT, Thomas A. Vascular dementia. The Lancet. 2015;386(10004):1698–706.
100. Ott A, Breteler MMB, van Harskamp F, Claus JJ, van der Cammen TJM, Grobbee DE, et al. Prevalence of Alzheimer's disease and vascular dementia: association with education. The Rotterdam study. BMJ. 1995;310(6985):970–3. pmid:7728032
101. Galvin JE. Using Informant and Performance Screening Methods to Detect Mild Cognitive Impairment and Dementia. Current Geriatrics Reports. 2018;7(1):19–25. pmid:29963365
102. Mackinnon A, Khalilian A, Jorm AF, Korten AE, Christensen H, Mulligan R. Improving screening accuracy for dementia in a community sample by augmenting cognitive testing with informant report. Journal of Clinical Epidemiology. 2003;56(4):358–66. pmid:12767413
103. Mackinnon A, Mulligan R. Combining cognitive testing and informant report to increase accuracy in screening for dementia. The American journal of psychiatry. 1998;155(11):1529–35. pmid:9812113
104. Samuel BG. Diagnostic and Statistical Manual of Mental Disorders, 4th ed. (DSM-IV). American Journal of Psychiatry. 1995;152(8):1228–848.
105. Albert MS, DeKosky ST, Dickson D, Dubois B, Feldman HH, Fox NC, et al. The diagnosis of mild cognitive impairment due to Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimer's & dementia: the journal of the Alzheimer's Association. 2011;7(3):270–9.
106. Petersen RC, Negash S. Mild cognitive impairment: an overview. CNS spectrums. 2008;13(1):45–53. pmid:18204414
107. Gauthier S, Reisberg B, Zaudig M, Petersen RC, Ritchie K, Broich K, et al. Mild cognitive impairment. The Lancet. 2006;367(9518):1262–70.
108. Winblad B, Palmer K, Kivipelto M, Jelic V, Fratiglioni L, Wahlund LO, et al. Mild cognitive impairment–beyond controversies, towards a consensus: report of the International Working Group on Mild Cognitive Impairment. Journal of internal medicine. 2004;256(3):240–6. pmid:15324367
109. Samuel BG. Diagnostic and statistical manual of mental disorders (revised 4th ed.). American Journal of Psychiatry; 2000 [cited 2018 August 27]. Available from:
110. World Health Organization. International statistical classification of diseases and related health problems. World Health Organization; 2018 [cited 2018 December 9]. Available from:
111. Englund B, Brun A, Gustafson L, Passant U, Mann D, Neary D, et al. Clinical and neuropathological criteria for frontotemporal dementia. The Lund and Manchester Groups. Journal of neurology, neurosurgery, and psychiatry. 1994;57(4):416–8. pmid:8163988
112. Roman GC, Tatemichi TK, Erkinjuntti T, Cummings JL, Masdeu JC, Garcia JH, et al. Vascular dementia: diagnostic criteria for research studies. Report of the NINDS-AIREN International Workshop. Neurology. 1993;43(2):250–60. pmid:8094895
113. McKeith IG, Galasko D, Kosaka K, Perry EK, Dickson DW, Hansen LA, et al. Consensus guidelines for the clinical and pathologic diagnosis of dementia with Lewy bodies (DLB): report of the consortium on DLB international workshop. Neurology. 1996;47(5):1113–24. pmid:8909416
114. McKhann G, Drachman D, Folstein M, Katzman R, Price D, Stadlan EM. Clinical diagnosis of Alzheimer's disease: report of the NINCDS-ADRDA Work Group under the auspices of Department of Health and Human Services Task Force on Alzheimer's Disease. Neurology. 1984;34(7):939–44. pmid:6610841
115. Davis DH, Creavin ST, Noel-Storr A, Quinn TJ, Smailagic N, Hyde C, et al. Neuropsychological tests for the diagnosis of Alzheimer's disease dementia and other dementias: a generic protocol for cross-sectional and delayed-verification studies. Cochrane Database of Systematic Reviews; 2013 [cited 2018 August 14]. Available from:
116. McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement. Journal of clinical epidemiology. 2016;75:40–6. pmid:27005575
117. Whiting PF, Rutjes AS, Westwood ME, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Annals of Internal Medicine. 2011;155(8):529–36. pmid:22007046
118. Macaskill P, Gatsonis C, Deeks JJ, Harbord RM, Takwoingi Y. Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy: Chapter 10. Analysing and Presenting Results. The Cochrane Collaboration; 2010; 1.0. Available from:
119. Cohen JF, Korevaar DA, Altman DG, Bruns DE, Gatsonis CA, Hooft L, et al. STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration. BMJ Open. 2016;6(11):1–17.
120. Inouye SK, Robison JT, Froehlich TE, Richardson ED. The time and change test: a simple screening test for dementia. The journals of gerontology Series A, Biological sciences and medical sciences. 1998;53(4):M281–6. pmid:18314567
121. Nair BR, Browne WL, Chua LE, D'Este C, O'Dea I, Agho K. Validating an Australian version of the Time and Change Test: A screening test for cognitive impairment. Australasian Journal on Ageing. 2007;26(2):87–90.
122. Bula CJ, Wietlisbach V. Use of the Cognitive Performance Scale (CPS) to detect cognitive impairment in the acute care setting: concurrent and predictive validity. Brain research bulletin. 2009;80(4–5):173–8. pmid:19559765
123. Death J, Douglas A, Kenny RA. Comparison of clock drawing with Mini Mental State Examination as a screening test in elderly acute hospital admissions. Postgraduate medical journal. 1993;69(815):696–700. pmid:8255833
124. Tuijl JP, Scholte EM, de Craen AJ, van der Mast RC. Screening for cognitive impairment in older general hospital patients: comparison of the Six-Item Cognitive Impairment Test with the Mini-Mental State Examination. International journal of geriatric psychiatry. 2012;27(7):755–62. pmid:21919059
125. Travers C, Byrne GJ, Pachana NA, Klein K, Gray L. Validation of the interRAI Cognitive Performance Scale against independent clinical diagnosis and the Mini-Mental State Examination in older hospitalized patients. The journal of nutrition, health & aging. 2013;17(5):435–9.
126. Inouye SK, Robison JT, Froehlich TE, Richardson ED. The Time and Change Test: A Simple Screening Test for Dementia. The Journals of Gerontology: Series A. 1998;53A(4):M281–M6.
127. Hubbard EJ, Santini V, Blankevoort CG, Volkers KM, Barrup MS, Byerly L, et al. Clock drawing performance in cognitively normal elderly. Archives of Clinical Neuropsychology. 2008;23(3):295–327. pmid:18243644
128. Battersby WS, Bender MB, Pollack M, Kahn RL. Unilateral spatial agnosia (inattention) in patients with cerebral lesions. Brain: a journal of neurology. 1956;79(1):68–93.
129. Critchley M. The parietal lobes. 1st ed. Hafner Pub; 1953.
130. Agrell B, Dehlin O. The clock-drawing test. Age and ageing. 2012;41 Suppl 3:iii41–5.
131. Schramm U, Berger G, Muller R, Kratzsch T, Peters J, Frolich L. Psychometric properties of Clock Drawing Test and MMSE or Short Performance Test (SKT) in dementia screening in a memory clinic population. International journal of geriatric psychiatry. 2002;17(3):254–60. pmid:11921154
132. Shulman KI. Clock-drawing: is it the ideal cognitive screening test? International journal of geriatric psychiatry. 2000;15(6):548–61. pmid:10861923
133. Solomon PR, Hirschoff A, Kelly B, Relin M, Brush M, DeVeaux RD, et al. A 7 minute neurocognitive screening battery highly sensitive to Alzheimer's disease. Archives of neurology. 1998;55(3):349–55. pmid:9520009
134. Cacho J, Benito-Leon J, Garcia-Garcia R, Fernandez-Calvo B, Vicente-Villardon JL, Mitchell AJ. Does the combination of the MMSE and clock drawing test (mini-clock) improve the detection of mild Alzheimer's disease and mild cognitive impairment? Journal of Alzheimer's disease. 2010;22(3):889–96. pmid:20858951
135. Shulman KI, Shedletsky R, Silver IL. The challenge of time: Clock‐drawing and cognitive function in the elderly. International journal of geriatric psychiatry. 1986;1(2):135–40.
136. Folstein MF, Folstein SE, McHugh PR. "Mini-mental state". A practical method for grading the cognitive state of patients for the clinician. Journal of psychiatric research. 1975;12(3):189–98. pmid:1202204
137. Palsetia D, Rao GP, Tiwari SC, Lodha P, De Sousa A. The Clock Drawing Test versus Mini-mental Status Examination as a Screening Tool for Dementia: A Clinical Comparison. Indian journal of psychological medicine. 2018;40(1):1–10. pmid:29403122
138. Mazancova AF, Nikolai T, Stepankova H, Kopecek M, Bezdicek O. The Reliability of Clock Drawing Test Scoring Systems Modeled on the Normative Data in Healthy Aging and Nonamnestic Mild Cognitive Impairment. Assessment. 2017;24(7):945–57. pmid:26933141
139. Storey JE, Rowland JT, Basic D, Conforti DA. Accuracy of the clock drawing test for detecting dementia in a multicultural sample of elderly Australian patients. International psychogeriatrics. 2002;14(3):259–71. pmid:12475087
140. Cahn-Weiner DA, Williams K, Grace J, Tremont G, Westervelt H, Stern RA. Discrimination of dementia with lewy bodies from Alzheimer disease and Parkinson disease using the clock drawing test. Cognitive and behavioral neurology. 2003;16(2):85–92. pmid:12799594
141. Nyborn JA, Himali JJ, Beiser AS, Devine SA, Du Y, Kaplan E, et al. The Framingham Heart Study clock drawing performance: normative data from the offspring cohort. Experimental aging research. 2013;39(1):80–108. pmid:23316738
142. Bodner T, Delazer M, Kemmler G, Gurka P, Marksteiner J, Fleischhacker WW. Clock drawing, clock reading, clock setting, and judgment of clock faces in elderly people with dementia and depression. Journal of the American Geriatrics Society. 2004;52(7):1146–50. pmid:15209653
143. Hazan E, Frankenburg F, Brenkel M, Shulman K. The test of time: a history of clock drawing. International journal of geriatric psychiatry. 2018;33(1):e22–e30. pmid:28556262
144. Aprahamian I, Martinelli J, Neri A. The Clock Drawing Test: A review of its accuracy in screening for dementia. Dementia & Neuropsychologia. 2009;3(2):74–81. pmid:29213615
145. Morris JN, Fries BE, Mehr DR, Hawes C, Phillips C, Mor V, et al. MDS Cognitive Performance Scale. Journal of Gerontology. 1994;49(4):M174–M82. pmid:8014392
146. Morris JN, Howard EP, Steel K, Perlman C, Fries BE, Garms-Homolová V, et al. Updating the Cognitive Performance Scale. Journal of geriatric psychiatry and neurology. 2016;29(1):47–55. pmid:26251111
147. de Silva V, Hanwella R. Why are we copyrighting science? BMJ. 2010;341:c4738. pmid:20847026
148. Davey RJ, Jamieson S. The validity of using the mini mental state examination in NICE dementia guidelines. Journal of neurology, neurosurgery, and psychiatry. 2004;75(2):343–4.
149. Shulman KI, Herrmann N, Brodaty H, Chiu H, Lawlor B, Ritchie K, et al. IPA survey of brief cognitive screening instruments. International psychogeriatrics. 2006;18(2):281–94. pmid:16466586
150. Nieuwenhuis-Mark RE. The death knoll for the MMSE: has it outlived its purpose? Journal of geriatric psychiatry and neurology. 2010;23(3):151–7. pmid:20231732
151. Brodaty H, Howarth GC, Mant A, Kurrle SE. General practice and dementia. A national survey of Australian GPs. The Medical journal of Australia. 1994;160(1):10–4. pmid:8271977
152. Arevalo-Rodriguez I, Smailagic N, Roque IFM, Ciapponi A, Sanchez-Perez E, Giannakou A, et al. Mini-Mental State Examination (MMSE) for the detection of Alzheimer's disease and other dementias in people with mild cognitive impairment (MCI). The Cochrane database of systematic reviews; 2015 [cited 2018 July 14]. Available from:
153. Velayudhan L, Ryu SH, Raczek M, Philpot M, Lindesay J, Critchfield M, et al. Review of brief cognitive tests for patients with suspected dementia. International psychogeriatrics. 2014;26(8):1247–62. pmid:24685119
154. Mitchell AJ. A meta-analysis of the accuracy of the mini-mental state examination in the detection of dementia and mild cognitive impairment. Journal of psychiatric research. 2009;43(4):411–31. pmid:18579155
155. Bravo G, Hebert R. Age- and education-specific reference values for the Mini-Mental and modified Mini-Mental State Examinations derived from a non-demented elderly population. International journal of geriatric psychiatry. 1997;12(10):1008–18. pmid:9395933
156. Crum RM, Anthony JC, Bassett SS, Folstein MF. Population-based norms for the Mini-Mental State Examination by age and educational level. JAMA. 1993;269(18):2386–91. pmid:8479064
157. Grigoletto F, Zappala G, Anderson DW, Lebowitz BD. Norms for the Mini-Mental State Examination in a healthy population. Neurology. 1999;53(2):315–20. pmid:10430420
158. Katzman R, Brown T, Fuld P, Peck A, Schechter R, Schimmel H. Validation of a short Orientation-Memory-Concentration Test of cognitive impairment. The American journal of psychiatry. 1983;140(6):734–9. pmid:6846631
159. Blessed G, Tomlinson BE, Roth M. The association between quantitative measures of dementia and of senile change in the cerebral grey matter of elderly subjects. The British journal of psychiatry. 1968;114(512):797–811. pmid:5662937
160. Milne A, Culverwell A, Guss R, Tuppen J, Whelton R. Screening for dementia in primary care: a review of the use, efficacy and quality of measures. International psychogeriatrics. 2008;20(5):911–26. pmid:18533066
161. Etgen T, Sander D, Huntgeburth U, Poppert H, Forstl H, Bickel H. Physical activity and incident cognitive impairment in elderly persons: the INVADE study. Archives of internal medicine. 2010;170(2):186–93. pmid:20101014
162. Goring H, Baldwin R, Marriott A, Pratt H, Roberts C. Validation of short screening tests for depression and cognitive impairment in older medically ill inpatients. International journal of geriatric psychiatry. 2004;19(5):465–71. pmid:15156548
163. Brooke P, Bullock R. Validation of a 6 item cognitive impairment test with a view to primary care usage. International journal of geriatric psychiatry. 1999;14(11):936–40. pmid:10556864
164. Williams MM, Roe CM, Morris JC. Stability of the Clinical Dementia Rating, 1979–2007. Archives of neurology. 2009;66(6):773–7. pmid:19506139
165. Villareal DT, Grant E, Miller JP, Storandt M, McKeel DW, Morris JC. Clinical outcomes of possible versus probable Alzheimer's disease. Neurology. 2003;61(5):661–7. pmid:12963758
166. Ballard C, Burns A, Corbett A, Livingston G. Helping You to Assess Cognition. A practical toolkit for clinicians. Alzheimer's Society; 2013 [cited 2019 January 12]. Available from:
167. National Collaborating Centre for Mental Health. A NICE-SCIE Guideline on Supporting People With Dementia and Their Carers in Health and Social Care. The British Psychological Society & The Royal College of Psychiatrists; 2007 [cited 2019 January 15]. Available from:
168. O'Sullivan D, O'Regan NA, Timmons S. Validity and Reliability of the 6-Item Cognitive Impairment Test for Screening Cognitive Impairment: A Review. Dementia and geriatric cognitive disorders. 2016;42(1–2):42–9. pmid:27537241
169. Froehlich TE, Robison JT, Inouye SK. Screening for dementia in the outpatient setting: the time and change test. Journal of the American Geriatrics Society. 1998;46(12):1506–11. pmid:9848810
170. Reitsma JB, Glas AS, Rutjes AW, Scholten RJ, Bossuyt PM, Zwinderman AH. Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews. Journal of clinical epidemiology. 2005;58(10):982–90. pmid:16168343
171. Rutter CM, Gatsonis CA. A hierarchical regression approach to meta-analysis of diagnostic test accuracy evaluations. Statistics in medicine. 2001;20(19):2865–84. pmid:11568945
172. Anthony JC, LeResche L, Niaz U, von Korff MR, Folstein MF. Limits of the 'Mini-Mental State' as a screening test for dementia and delirium among hospital patients. Psychological medicine. 1982;12(2):397–408. pmid:7100362
173. Erkinjuntti T, Sulkava R, Wikstrom J, Autio L. Short Portable Mental Status Questionnaire as a screening test for dementia and delirium among the elderly. Journal of the American Geriatrics Society. 1987;35(5):412–6. pmid:3571790
174. O'Keeffe E, Mukhtar O, O'Keeffe ST. Orientation to time as a guide to the presence and severity of cognitive impairment in older hospital patients. Journal of Neurology, Neurosurgery & Psychiatry. 2011;82(5):500–4.
175. Incalzi RA, Cesari M, Pedone C, Carosella L, Carbonin P. Construct validity of the abbreviated mental test in older medical inpatients. Dementia and geriatric cognitive disorders. 2003;15(4):199–206. pmid:12626852
176. Jitapunkul S, Pillay I, Ebrahim S. The abbreviated mental test: its use and validity. Age and ageing. 1991;20(5):332–6. pmid:1755388
177. Leung JL, Lee GT, Lam Y, Chan RC, Wu JY. The use of the Digit Span Test in screening for cognitive impairment in acute medical inpatients. International psychogeriatrics. 2011;23(10):1569–74. pmid:21729426
178. Harwood DM, Hope T, Jacoby R. Cognitive impairment in medical inpatients. I: Screening for dementia—is history better than mental state? Age and ageing. 1997;26(1):31–5. pmid:9143435
179. Mathews SB, Arnold SE, Epperson CN. Hospitalization and cognitive decline: Can the nature of the relationship be deciphered? The American journal of geriatric psychiatry. 2014;22(5):465–80. pmid:23567430
180. Davis DH, Muniz Terrera G, Keage H, Rahkonen T, Oinas M, Matthews FE, et al. Delirium is a strong risk factor for dementia in the oldest-old: a population-based cohort study. Brain: a journal of neurology. 2012;135(Pt 9):2809–16.
181. Ryan DJ, O'Regan NA, Caoimh RO, Clare J, O'Connor M, Leonard M, et al. Delirium in an adult acute hospital population: predictors, prevalence and detection. BMJ Open. 2013;3(1).
182. Brunet MD, McCartney M, Heath I, Tomlinson J, Gordon P, Cosgrove J, et al. There is no evidence base for proposed dementia screening. BMJ. 2012;345:e8588. pmid:23271709