
Feasibility and reliability of online vs in-person cognitive testing in healthy older people

Abstract

Background

Early evidence suggests that online cognitive assessments could offer a feasible and resource-efficient alternative to in-person clinical assessments for evaluating cognitive performance, yet little is currently understood about how these assessments relate to traditional, in-person cognitive tests.

Objectives

In this preliminary study, we assess the feasibility and reliability of NeurOn, a novel online cognitive assessment tool. NeurOn measures various cognitive domains including processing speed, executive functioning, spatial working memory, episodic memory, attentional control, visuospatial functioning, and spatial orientation.

Design

Thirty-two participants (mean age: 70.19) completed two testing sessions, unsupervised online and in-person, one-week apart. Participants were randomised in the order of testing appointments. For both sessions, participants completed questionnaires prior to a cognitive assessment. Test-retest reliability and concurrent validity of the online cognitive battery was assessed using intraclass correlation coefficients (ICCs) and correlational analysis, respectively. This was conducted by comparing performance in repeated tasks across testing sessions as well as with traditional, in-person cognitive tests.

Results

Global cognition in the NeurOn battery showed moderate validity against MoCA performance, and the battery demonstrated moderate test-retest reliability. Among individual tasks, concurrent validity was found only between the online and paper versions of the Trail Making Test -B; global cognitive performance also correlated between the online and in-person testing sessions.

Conclusions

The NeurOn cognitive battery provides a promising tool for measuring cognitive performance online both longitudinally and across short retesting intervals within healthy older adults. When considering cost-effectiveness, flexible administration, and improved accessibility for wider populations, online cognitive assessments show promise for future screening of neurodegenerative diseases.

Introduction

Cognitive functioning is vital for everyday behaviour. It is well established that many aspects of cognition typically decline with ageing [1,2]. It is also increasingly evident that subtle cognitive changes appear long before clinical manifestations of neurodegenerative diseases such as dementia become apparent [3]. Therefore, it is important to understand whether these cognitive changes are indicative of neurodegenerative disease symptomatology or are typical of healthy ageing. Earlier diagnosis of cognitive impairment is invaluable for patients and caregivers, as it can inform management of patient wellbeing and provide targeted lifestyle modifications to potentially reverse cognitive decline [4,5].

Neuropsychological testing is therefore required to measure changes in cognitive functioning [6,7]. Nonetheless, routine cognitive assessments in healthy ageing are rarely conducted and typically rely upon quick-to-administer paper-based tests [8]. The current gold-standard tests for assessing cognitive impairment, such as the Mini-Mental State Examination (MMSE), were developed to screen for dementia but are less sensitive in identifying milder cognitive impairment [9,10]. Furthermore, clinic assessments involving paper-based tests are limited, as they are prone to practice effects [11], and cognitive changes may be masked by fluctuations in cognitive performance or differences in cognitive reserve [12].

In recent years, significant developments in online cognitive testing have increased its usage in both research and clinical environments [13]. Notably, online assessments can be performed remotely, improving accessibility and the frequency of cognitive testing and enabling the identification of more subtle changes in cognitive decline [14,15]. Digital assessments also provide enhanced precision in data measurement, standardised presentation, pseudorandomisation to reduce practice effects, and greater cost-efficiency [13,16]. Most computerised testing to date has focussed upon processing speed and attention tasks, with many demonstrating promising results [17–19]. A recent systematic review found early evidence suggesting that computerised cognitive testing shows potential clinical utility in diagnosing neurocognitive disorders; however, there has been limited validation work on cognitive batteries, which is necessary to establish whether they are feasible for clinical applications [20]. To date, evidence on the agreement between digitalised and traditional, paper-based neuropsychological tests has been mixed, with some studies showing considerable agreement [21,22] and others showing little agreement [17,23]. Additionally, the test-retest reliability of performance on these digital tests is not yet well-established.

Comprehensive neuropsychological test batteries that assess a variety of cognitive domains are required to detect early cognitive changes that may manifest in older age. Normative data for healthy older adults are necessary to enable interpretation of cognitive performance in the context of sociodemographic factors, so that at-risk populations can be accurately identified. Our group recently developed NeurOn, a novel cognitive battery, as part of the DECISION study [24]. NeurOn is a comprehensive battery testing a variety of cognitive domains and is novel in that it also assesses spatial orientation ability, which previous work from our group has shown to be a key signature of preclinical dementia [25].

In the present study, we aim to evaluate the psychometric properties of the NeurOn battery by measuring the reliability and validity in both supervised in-person and unsupervised online settings against established traditional neuropsychological assessments. It is hypothesised that online cognitive tasks will demonstrate test-retest reliability over a one-week period; online/remote cognitive tasks will demonstrate concurrent validity with in-person/traditional cognitive task equivalents; and that cognitive performance in the neuropsychological battery will validate against established clinical tests in measuring cognitive performance.

Methods

Recruitment

Thirty-three older adults (65+) were recruited from the community via online and offline advertisements to take part in the study. All participants were pre-screened on the following criteria: cognitive and physical health; any history of psychiatric or neurological disease; any history of substance abuse disorder; driving frequency (once per week or more); and previous participation in a study using the online cognitive platform. Recruitment and testing of participants took place between 1st October 2022 and 30th March 2023. Written informed consent was obtained from each participant and data were recorded anonymously. Ethical approval for the study was provided by the Faculty of Medicine and Health Sciences Research Ethics Committee at the University of East Anglia (FMH2019/20-134).

To ensure adequate statistical power, a power analysis was conducted for evaluating the test-retest reliability and concurrent validity of the cognitive testing battery. For the test-retest reliability analysis, a total sample size of 32 (degrees of freedom = 31) was determined using a matched-pairs t-test, with a power of 0.95, an alpha error probability of 0.05, a critical t score of 1.70, and an assumed moderate effect size of 0.6.

This sample size was also deemed sufficient to power the analysis of concurrent validity, assuming a large effect size of 0.50, an alpha error of 0.05, a power of 0.94, and a critical t score of 1.70.
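The reported sample size can be checked with a short noncentral-t power calculation. The sketch below (the function name and one-tailed assumption are ours, not from the paper; a one-tailed test matches the reported critical t of 1.70 at 31 degrees of freedom) computes the power of a paired-samples t-test:

```python
import numpy as np
from scipy import stats

def paired_t_power(d, n, alpha=0.05):
    """Power of a one-tailed paired-samples t-test with n pairs and
    effect size d (Cohen's d of the paired differences)."""
    df = n - 1
    t_crit = stats.t.ppf(1 - alpha, df)       # critical t under H0
    nc = d * np.sqrt(n)                       # noncentrality under H1
    return 1 - stats.nct.cdf(t_crit, df, nc)  # P(t > t_crit | H1)

# Parameters from the power analysis above: d = 0.6, n = 32
t_crit = stats.t.ppf(0.95, 31)   # ~1.70, matching the reported critical t
power = paired_t_power(0.6, 32)  # close to the reported power of 0.95
```

Dedicated tools such as G*Power solve for n directly; the exact value depends on the tail and effect-size conventions chosen.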

Procedure

Screening was carried out by the study team via online video call (n = 32) or telephone (n = 1) prior to the baseline cognitive assessment. One participant was excluded from the study as they completed only one testing session due to illness; therefore, 32 participants were retained for analysis (mean age: 70.19). Participants were randomised to the order in which they completed the testing sessions. Prior to the baseline appointment, participants were asked which device (desktop, laptop, or tablet) they would most comfortably use to complete the remote assessment, and the same device type was used for the in-person testing appointment. Both testing sessions started with completion of questionnaires pertaining to demographics, subjective cognition, and driving history. Each participant completed the follow-up testing session one week after the baseline testing session, at the same time of day.

Development and description of the online cognitive testing platform

Questionnaires and cognitive tasks were hosted on NeurOn, an online platform. The novel cognitive battery was developed by a professional programmer alongside the project team. The online neuropsychological tests were based on a combination of established, traditional neuropsychological tests and established novel tasks (Virtual Supermarket Task) and were designed for unsupervised assessment in unmonitored conditions. Tasks were accompanied by written instructions and video tutorials with a voice-over (except for the Go/No-Go test) prior to test completion to promote multimodal learning. After receiving instructions, participants completed practice sessions for each task to ensure they were prepared for the actual test. Participants were encouraged to complete the main test battery in one session without breaks but were advised to take a break prior to the Virtual Supermarket Task due to its significantly longer duration and greater task difficulty. If the cognitive test battery was interrupted (e.g., by a participant taking a break or an internet disconnection), participants resumed the task from their current progress upon logging back in. All tasks were pseudorandomised to enable repeated testing. All participant input was saved on a protected server throughout each test element.

Online cognitive tasks

The NeurOn battery consisted of a variety of digitalised tasks measuring cognition across domains that are sensitive to age-related cognitive impairment. A Reaction Time task, whereby participants responded as quickly as possible to a repeating on-screen stimulus, measured visuomotor speed (milliseconds). Trail-Making Test -A, involving the connecting of 25 numerically arranged points in ascending order as quickly as possible, measured processing speed (seconds). Trail-Making Test -B, involving the connecting of 25 points of alternating numbers and letters in ascending order as quickly as possible, measured executive functioning (seconds). Episodic memory involved a stimulus encoding phase of everyday objects appearing consecutively in varying screen locations, followed by a delayed testing phase where participants decided whether a stimulus was shown previously (measuring recognition memory—% correct), and, if so, its screen position (measuring source memory—% correct). A Spatial Span–Backwards task measured spatial working memory (maximum sequence length correctly recalled), whereby participants recalled, in reverse order, an array of boxes that lit up in sequences ranging from 2 to 9 items. The Go/No-Go task measured attentional control (number of errors) by asking participants to respond to a specific stimulus (Go) and inhibit responses to other stimuli (No-Go). The Fragmented Letters task assessed visuospatial functioning (% correct) by asking participants to identify a single letter of the alphabet fragmented through a visual mask. Finally, the Virtual Supermarket Task, previously described in detail [26], measured allocentric and egocentric orientation (both deviation error from the correct location) by asking participants to orient a trolley in a virtual supermarket according to a previously presented video clip. Detailed task descriptions are available in S1 Table in S1 File.

Remote cognitive testing

Participants completed the remote cognitive testing session from their own home. Initially, participants completed demographics and novel subjective cognition questionnaires (Spatial Memory & Driving, Orienteering, and Navigation). Participants then completed the online cognitive test battery, consisting of the Reaction Time task, Trail Making Test -A, Trail Making Test -B, Picture Recognition, Spatial Span Backwards, Go/No-Go test, Fragmented Letters, and Virtual Supermarket Task.

In-person cognitive testing

The in-person cognitive testing session took place in a quiet testing facility and involved a combination of traditional neuropsychological tests, requiring face-to-face assessment, with our novel online tasks. Participants initially completed established questionnaires measuring subjective cognition (Cognitive Change Index (CCI) [27] and Santa Barbara Sense of Direction (SBSOD) [28]). Participants then completed the Montreal Cognitive Assessment (MoCA) [29], Reaction Time task (online), paper versions of the Trail-Making Test A & B [30], Rey Osterrieth Complex Figure Test (ROCF)–delayed recall [31], Corsi Block Tapping Test [32], Go/No-Go (online), a paper version of the Fragmented Letters test [33], and finally the Virtual Supermarket Task (online).

Statistical analyses

Neuropsychological test measures.

To create an episodic memory measure for the online cognitive battery, recognition and source memory percentages were averaged for each participant. Outliers were identified through boxplot analysis, and participants were excluded from a test if their values deviated more than 3 standard deviations from the mean. For the remote session, outliers were removed for Reaction Time (1), Trail Making Test -A (1), and Spatial & Memory—Lifetime (1). For the in-person testing session, outliers were removed for the CCI (1), Reaction Time (1), Trail Making Test -A (1), Trail Making Test -B (1), ROCF recall (1), and Go/No-Go (1). Two participants did not complete the Virtual Supermarket Task in either test session, due to either a technical error or finding the task too difficult, and were therefore removed from that analysis. One participant did not complete the Picture Recognition task due to a technical error. A Bonferroni-adjusted significance level of 0.00625 (0.05/8) was used to assess statistical significance in correlations between the CCI and online cognitive assessments. Raw cognitive test scores were standardised for regression analysis, except for episodic memory, which was converted into a proportion as this score measured accuracy as a percentage. Appropriate diagnostic tests and visual inspections were carried out to assess regression assumptions, including linearity, homoscedasticity, normality of residuals, independence of residuals, and multicollinearity. All analyses were carried out in R (version 4.4.0).
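The exclusion rule and the Bonferroni adjustment described above can be sketched as follows. This is an illustration, not the study's R code (the function name and example values are ours):

```python
import numpy as np

def exclude_outliers(scores, n_sd=3.0):
    """Mask (as NaN) any score more than n_sd standard deviations
    from the mean, mirroring the paper's exclusion rule."""
    scores = np.asarray(scores, dtype=float)
    z = (scores - np.nanmean(scores)) / np.nanstd(scores)
    return np.where(np.abs(z) <= n_sd, scores, np.nan)

# Bonferroni-adjusted threshold for the 8 CCI correlations
alpha_adjusted = 0.05 / 8
print(alpha_adjusted)  # 0.00625
```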

Concurrent validity (remote vs in-person testing).

Concurrent validity was measured by Spearman or Pearson correlations (depending on variable distribution) to assess the consistency between remote/online and in-person/traditional neuropsychological tests. A correlation threshold of ≥0.40 was used to establish acceptable concurrent validity [34]. The online Trail Making Tests were compared to the paper Trail-Making Tests; the Spatial Span-Backwards task was compared with the Corsi Block Tapping test; the Picture Recognition task was compared with the ROCF-delayed recall task; Fragmented Letters was compared with the paper Fragmented Letters task; and Global Cognition was compared with MoCA score.
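The decision rule described above (Pearson or Spearman depending on distribution, with a ≥0.40 threshold) can be sketched as follows; this is an illustration under our own assumptions (normality judged via Shapiro–Wilk, hypothetical data), not the authors' code:

```python
import numpy as np
from scipy import stats

def concurrent_validity(online, in_person, threshold=0.40, alpha=0.05):
    """Correlate an online task with its traditional equivalent, using
    Pearson when both variables look normal (Shapiro-Wilk) and Spearman
    otherwise; flag whether r meets the 0.40 validity threshold."""
    online = np.asarray(online, dtype=float)
    in_person = np.asarray(in_person, dtype=float)
    normal = (stats.shapiro(online).pvalue > alpha
              and stats.shapiro(in_person).pvalue > alpha)
    if normal:
        r, p = stats.pearsonr(online, in_person)
    else:
        r, p = stats.spearmanr(online, in_person)
    return r, p, r >= threshold

# Hypothetical paired task scores for illustration only
rng = np.random.default_rng(1)
a = rng.normal(50, 10, 30)             # online task
b = a * 0.8 + rng.normal(0, 5, 30)     # strongly related paper task
r, p, valid = concurrent_validity(a, b)
```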

Test-retest reliability.

To examine the test-retest reliability of the repeated online cognitive tasks from baseline to retest sessions, two complementary approaches were used:

  1. Two-way mixed effects intraclass correlation coefficients (ICCs) with measures of absolute agreement (95% CI) according to McGraw & Wong [35].
  2. Paired samples t tests assessed performance differences. A significant (p < .05) improvement over time was used as a threshold to indicate practice effects.
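The first approach, a two-way ICC with absolute agreement for a single measurement (McGraw & Wong's ICC(A,1)), can be computed directly from the two-way ANOVA mean squares. The sketch below is ours (function name and toy data are illustrative), not the study's code:

```python
import numpy as np

def icc_a1(data):
    """ICC(A,1): two-way model, absolute agreement, single measurement
    (McGraw & Wong). `data` is an (n subjects x k sessions) array."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-session means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # sessions
    resid = data - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy example: retest scores identical to baseline -> perfect agreement
baseline = np.array([3.0, 5.0, 4.0, 6.0, 2.0, 7.0])
icc = icc_a1(np.column_stack([baseline, baseline]))
print(round(icc, 3))  # 1.0
```

In practice, libraries such as R's `psych::ICC` report this coefficient (labelled ICC2) together with its 95% confidence interval.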

Global cognition.

To establish a global cognition score for each testing session, Z-scores for each neuropsychological measure within each testing session were averaged to create a composite score. Z-scores were reversed where necessary (e.g., for timed and error measures) to ensure consistent directionality across tasks.
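The composite described above can be sketched as follows; this is an illustration with made-up numbers (the function name and the mini-battery are ours), not the study's code:

```python
import numpy as np

def global_cognition(scores, reverse):
    """Average of per-task z-scores; tasks where higher raw values mean
    worse performance (completion times, errors) are sign-flipped so a
    higher composite always means better cognition."""
    scores = np.asarray(scores, dtype=float)    # (n participants x m tasks)
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
    z[:, reverse] *= -1                         # flip timed/error measures
    return z.mean(axis=1)                       # composite per participant

# Hypothetical mini-battery: an accuracy task (%) and a timed task (s)
accuracy = np.array([80.0, 90.0, 70.0, 95.0, 85.0])
tmt_secs = np.array([40.0, 30.0, 55.0, 25.0, 35.0])
composite = global_cognition(np.column_stack([accuracy, tmt_secs]),
                             reverse=[1])
```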

Results

Demographics and cognitive battery characteristics

Of the participants, 41% used desktops, 41% used laptops, and 18% used tablet devices to complete the NeurOn cognitive battery. On average, the online testing session took 58 minutes and 50 seconds, whilst the in-person testing session took 66 minutes and 50 seconds. No significant differences were found between males and females in age, education, MoCA score, CCI score, or time taken to complete the online and in-person testing batteries (see Table 1).

Table 1. Validation study participant demographic characteristics.

https://doi.org/10.1371/journal.pone.0309006.t001

Concurrent validity (remote vs. in-person testing)

To determine how online cognitive tests validated against traditional cognitive tests, concurrent validity was measured for online tasks with their traditional cognitive test equivalents. Only Trail Making Test -B met the correlation threshold for acceptable concurrent validity, r(28) = 0.615, p < .001. Low correlations were found for Trail Making Test -A (r(29) = 0.255, p = 0.17), Spatial Working Memory (r(30) = 0.268, p = 0.14), and Episodic Memory (ρ = 0.269, p = 0.16, N = 29). A ceiling effect was observed for the Fragmented Letters task in both paper and online versions across both testing sessions (see Table 2).

Table 2. Concurrent validity between online tasks and traditional neuropsychological tests.

https://doi.org/10.1371/journal.pone.0309006.t002

Test-retest reliability and practice effects

Across all four repeated tasks, intraclass correlation coefficients demonstrated moderate test-retest reliability (0.50–0.80), ranging from 0.51 (Go/No-Go) to 0.75 (Egocentric Orientation). No practice effects were found for Reaction Time (online: M = 344.62 ± 69.02; in-person: M = 359.64 ± 95.18; t(29) = -0.503, p = 0.619, d = 0.18); Go/No-Go (online: M = 1.22 ± 1.64, in-person: M = 1.42 ± 1.36; t(30) = -0.596, p = 0.556, d = 0.13); Allocentric Orientation (online: M = 3.11 ± 1.57, in-person: M = 3.11 ± 1.62; t(26) = 0.107, p = 0.915, d = 0.01); or Egocentric Orientation (online: M = 51.31 ± 34.16, in-person: M = 56.52 ± 36.37; t(28) = -0.684, p = 0.500, d = 0.15), as all t tests were non-significant (p > .05) (see Table 3).

Table 3. Test-retest reliability of cognitive tasks between online and in-person testing sessions.

https://doi.org/10.1371/journal.pone.0309006.t003

Association with established cognitive assessments

To determine how the online cognitive testing battery is associated with established cognitive assessments, correlation analysis was carried out between individual cognitive tests and total CCI score. Spearman rank correlation analysis found that higher CCI score was associated with worse egocentric orientation performance, ρ(27) = -.453, p = .014; however, this was not statistically significant after Bonferroni correction. No other cognitive assessments were found to correlate with the CCI (see Table 4).

Table 4. Correlation analysis between online cognitive testing and CCI score.

https://doi.org/10.1371/journal.pone.0309006.t004

Correlation analysis was then conducted to establish whether global cognitive performance from the online cognitive battery validated against the MoCA. A Pearson's correlation showed that global cognition performance had a moderate positive correlation with MoCA performance, r(24) = .598, p = .001 (Fig 1).

Fig 1. Regression plot showing the relationship between MoCA score and global cognitive performance in the NeurOn battery.

https://doi.org/10.1371/journal.pone.0309006.g001

Influence of factors on neuropsychological testing

To explore how demographic factors influenced performance in the online cognitive battery, a multiple linear regression analysis was conducted with traditional test scores, age, sex, and education as predictors. Traditional test scores significantly predicted Trail Making Test -B performance (β = 0.49, p = 0.002, CI [0.20, 0.78]) and Global Cognition (β = 0.09, p = 0.003, CI [0.04, 0.15]). Older age was associated with worse performance in Trail Making Test -A (β = -0.12, p = 0.030, CI [-0.23, -0.01]), allocentric orientation (β = -0.10, p = .007, CI [-0.16, -0.03]), and global cognition (β = -0.05, p = .007, CI [-0.08, -0.01]). Sex was associated with episodic memory, with being female predicting better episodic memory performance (β = 0.07, p = .03, CI [0.01, 0.13]). More years of education were associated only with allocentric orientation (β = 0.09, p < .05, CI [0.00, 0.17]).
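The regression setup described above (standardised scores regressed on several predictors) can be sketched as follows. This is a minimal illustration with synthetic data and our own function name, not the authors' R analysis:

```python
import numpy as np

def standardised_betas(X, y):
    """Ordinary least squares on z-scored predictors and outcome, so the
    returned coefficients are standardised betas (one per predictor)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    design = np.column_stack([np.ones(len(yz)), Xz])   # add intercept
    coefs, *_ = np.linalg.lstsq(design, yz, rcond=None)
    return coefs[1:]                                    # drop intercept

# Synthetic check: outcome built from known raw coefficients
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))           # e.g., age and education, z-scale
y = 0.5 * X[:, 0] - 0.3 * X[:, 1]       # noiseless linear outcome
betas = standardised_betas(X, y)        # recovers signs and magnitudes
```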

Discussion

With a growing ageing population, there is an urgent need to establish screening tools for early identification of cognitive decline. This preliminary study assessed the feasibility, reliability, and validity of a novel online cognitive testing battery for acquiring cognitive performance data from healthy older adults in unsupervised, remote settings. Importantly, we demonstrate that global performance in the cognitive battery validates against the MoCA, one of the most widely used tests for screening for mild cognitive impairment (MCI). We also demonstrate that egocentric orientation performance was the only cognitive domain associated with ratings on the CCI. As predicted, we establish test-retest reliability of the battery: all repeated tests showed moderate test-retest reliability and no practice effects were present after a one-week washout period between testing sessions. Finally, we explore factors that influence performance in online cognitive assessments and find that older age is associated with worse processing speed and allocentric orientation performance.

Due to individual differences in cognitive trajectories during ageing, composite measures assessing a range of cognitive domains have been suggested as the most appropriate approach to screen for and track cognitive impairment over time [36]. In the present study, we demonstrate that global cognitive performance in the online cognitive battery correlates moderately with global cognition measured by the MoCA. To date, very few studies have validated online cognitive assessments in older adults [37], and fewer still have shown that online cognitive assessments provide comparable diagnostic accuracy to the MoCA [38]. Our results indicate that the NeurOn battery is a promising instrument for measuring cognitive performance remotely with accuracy comparable to in-clinic testing appointments.

Many traditional cognitive assessments, such as the MoCA and MMSE, are limited by practice effects, which may compromise the ability to determine whether cognitive change reflects task experience rather than ageing [39,40]. Practice effects are more likely to occur within shorter testing intervals and are prominent across one-week re-testing intervals [41,42]. In the present study, despite a short retesting period of one week, no statistically significant improvement was found in any of the repeated tasks. This may be due to the pseudorandomisation of task material within the NeurOn battery, which prevents participants from learning task-specific content. Although a one-week retesting period is not typically clinically relevant for neuropsychological testing [42], it can be valuable for assessing cognitive changes after short-term intervention studies [43]. Reduced practice effects also enable the identification of subtle changes in cognitive trajectories longitudinally; such longitudinal assessments are rarely conducted in routine clinical appointments because they are resource intensive. Contrary to our hypotheses, we found that only Trail Making Test -B and global cognitive performance demonstrated concurrent validity with their traditional, paper-based equivalents. Previous research shows that the concurrent validity of online cognitive testing is typically low (median 0.49) [17], and therefore correspondence between online cognitive tests and paper-based tasks is typically moderate at best. It is possible that digitalising some traditional paper-based tasks influences test performance, and therefore comparing online cognitive test performance to non-computerised normative data may be less valid for assessing cognitive impairment.
Nevertheless, due to the enhanced precision, standardisation, and objectivity in data measurement offered by online cognitive testing, computerised cognitive tasks can be used to develop new normative data thresholds that are more sensitive to cognitive changes. Furthermore, online testing opens the possibility of testing a significantly larger and more diverse population, including people who may not have access to clinical assessments. By establishing extensive normative datasets, it is possible to establish how cognitive changes over time differ across specific subpopulations, enabling more accurate diagnostic markers [44]. Given that age-related variability in cognitive performance increases rapidly after age 60 [45], it is essential to account for sociodemographic factors that may influence the interpretation of cognitive trajectories. Whilst all repeated computerised tasks in the NeurOn battery demonstrated moderate test-retest reliability, some cognitive test parameters were more reliable than others, with the Go/No-Go task demonstrating the lowest test-retest reliability in the battery. This aligns with previous studies showing that the Go/No-Go task has modest test-retest reliability compared with other impulsivity measures [46], and changes in performance have been noted across testing sessions [47]. The lower reliability of the Go/No-Go task relative to other tasks may result from the nature of the task: attentional control and response inhibition are inherently susceptible to mental fatigue, leading to increased errors and longer response times [48]. Older adults also typically exhibit more variability in motor control [49], and therefore the task may be more susceptible to these errors than tasks that do not require rapid response times.
This is supported by our finding that the highest test-retest reliability was for egocentric orientation, which requires less immediate motor activity and may be less affected by short-term fluctuations in attentional control. Indeed, previous research has shown that egocentric orientation has the highest test-retest reliability across spatial orientation tasks (ICC = 0.72, similar to our finding of ICC = 0.75) [50]. Future cognitive battery studies should consider these factors when selecting and designing tasks. It may be beneficial to explore methods to enhance the reliability of tasks like the Go/No-Go, such as optimising task design, implementing more robust practice trials, and controlling for external factors that impact attentional and motor performance. Additionally, future research should focus on developing and validating new tasks that balance sensitivity to cognitive changes with high reliability, particularly in diverse and older adult populations.

In the present study, egocentric orientation was the only cognitive test found to correlate with CCI score, which is commonly used to identify subjective cognitive decline (SCD) [51]. Previous research has established that individuals with SCD typically show spatial orientation deficits [52,53], although little is known about how this relates to performance across other cognitive tasks. The present findings indicate that egocentric orientation deficits may be a key signature of SCD, supporting growing evidence for spatial orientation performance as a marker of early cognitive impairment [25]. SCD typically manifests prior to preclinical dementia [54], yet there is large heterogeneity in the outcomes of SCD, with many individuals experiencing SCD without objective cognitive impairments [55]. As the only test associated with worse subjective cognition was egocentric orientation, future research may look to establish whether individuals with SCD who exhibit worse egocentric orientation abilities are more at risk of future cognitive impairment. However, as the finding was no longer significant after correction for multiple comparisons, further investigation is required to establish its association with early cognitive impairment. Overall, the novel cognitive battery demonstrates usability and feasibility for measuring cognitive performance remotely, as all participants were able to complete the assessment unsupervised at home using a variety of devices. Our battery has also previously demonstrated feasibility and internal consistency in collecting large longitudinal normative cognitive data across regions (unpublished data). Many cognitive assessments, such as the MoCA, are limited in their generalisability across different cultures due to their reliance on language and cultural understanding [56].
A strength of the NeurOn battery is that its tasks are visual and do not require language, allowing for greater cross-cultural generalisation in cognitive performance and advancing global dementia screening efforts [57]. Although online cognitive testing has several advantages over in-person clinical assessments, diagnosing cognitive impairments such as MCI requires functional and clinical evaluations [58] and therefore should not take place outside of a clinical setting. Currently, online cognitive testing may provide a pre-screening tool ahead of more extensive clinical assessments, such as neuroimaging and biomarker testing. Additionally, online cognitive testing can advance research by increasing the scale of epidemiological studies [59] and screening participants for eligibility in clinical trials [60].

Although our results are promising, this study has some limitations. First, we did not account for computer skill, which has previously been found to relate to better cognitive task performance [18,61]. Second, the present study did not proactively recruit a diverse population demographic, which is important for the validation of cognitive testing. Third, our sample size of 32 was relatively small, and therefore more research is necessary to comprehensively understand how sociodemographic factors influence neuropsychological tests within the NeurOn battery. Moreover, our sample consisted of healthy individuals, so little is currently known about the feasibility of the NeurOn battery within patient populations; research is ongoing to examine how NeurOn test performance differs across healthy ageing, preclinical dementia, and early dementia. Finally, unsupervised cognitive testing has inherent drawbacks, such as a lack of standardisation in home testing. Consequently, participant performance may be influenced by confounding factors (e.g., distraction), although participants were provided with clear instructions to mitigate these issues.

In conclusion, the NeurOn cognitive assessment battery is a promising instrument for assessing cognitive performance in healthy older adult populations. In the present study, the NeurOn battery compared well with MoCA performance, showed negligible practice effects, and was easily administered in unsupervised, remote testing environments. Future research in online cognitive assessments should look to establish appropriate testing timepoints to sensitively measure longitudinal changes in cognitive functioning in wider sociodemographic samples.

Supporting information

S1 File.

S1 Table. Cognitive battery tasks.
S2A Table. Full model of MRA between Reaction Time and demographic characteristics.
S2B Table. Full model of MRA between TMT-A performance and demographic characteristics.
S2C Table. Full model of MRA between TMT-B performance and demographic characteristics.
S2D Table. Full model of MRA between Spatial Working Memory performance and demographic characteristics.
S2E Table. Full model of MRA between Episodic Memory performance and demographic characteristics.
S2F Table. Full model of MRA between Go/No-Go performance and demographic characteristics.
S2G Table. Full model of MRA between Allocentric Orientation performance and demographic characteristics.
S2H Table. Full model of MRA between Egocentric Orientation and demographic characteristics.
S2I Table. Full model of MRA between global cognitive performance and demographic characteristics.
S3 Figs. Residuals distribution for significant multiple regression results.
S4 Table. Cognitive task performance compared across devices used for testing.
S5 Table. Navigation variables correlation with the Driving, Orientation, and Navigation score.

https://doi.org/10.1371/journal.pone.0309006.s001

(ZIP)

Acknowledgments

We would like to thank the participants in this study for devoting their time to this research.

References

  1. Salthouse TA. Selective review of cognitive aging. J Int Neuropsychol Soc. 2010;16(5):754–60. pmid:20673381
  2. Boyle PA, Wang T, Yu L, Wilson RS, Dawe R, Arfanakis K, et al. To what degree is late life cognitive decline driven by age-related neuropathologies? Brain. 2021;144(7):2166–75. pmid:33742668
  3. Silverberg NB, Ryan LM, Carrillo MC, Sperling R, Petersen RC, Posner HB, et al. Assessment of cognition in early dementia. Alzheimers Dement. 2011;7(3). pmid:23559893
  4. Weintraub S, Carrillo MC, Farias ST, Goldberg TE, Hendrix JA, Jaeger J, et al. Measuring cognition and function in the preclinical stage of Alzheimer's disease. Alzheimers Dement (N Y). 2018;4:64–75. pmid:29955653
  5. Rasmussen J, Langerman H. Alzheimer's Disease – Why We Need Early Diagnosis. Degener Neurol Neuromuscul Dis. 2019;9:123–30. pmid:31920420
  6. Gates NJ, Kochan NA. Computerized and on-line neuropsychological testing for late-life cognition and neurocognitive disorders: are we there yet? Curr Opin Psychiatry. 2015;28(2):165–72. pmid:25602241
  7. Ashford JW, Schmitt FA, Bergeron MF, Bayley PJ, Clifford JO, Xu Q, et al. Now is the Time to Improve Cognitive Screening and Assessment for Clinical and Research Advancement. J Alzheimers Dis. 2022;87(1):305–15. pmid:35431257
  8. Alzheimer's Association. 2019 Alzheimer's disease facts and figures. Alzheimers Dement. 2019;15(3):321–87.
  9. Rentz DM, Parra Rodriguez MA, Amariglio R, Stern Y, Sperling R, Ferris S. Promising developments in neuropsychological approaches for the detection of preclinical Alzheimer's disease: a selective review. Alzheimers Res Ther. 2013;5(6). pmid:24257331
  10. Scott J, Mayo AM. Instruments for detection and screening of cognitive impairment for older adults in primary care settings: A review. Geriatr Nurs. 2018;39(3):323–9. pmid:29268944
  11. Goldberg TE, Harvey PD, Wesnes KA, Snyder PJ, Schneider LS. Practice effects due to serial cognitive assessment: Implications for preclinical Alzheimer's disease randomized controlled trials. Alzheimers Dement (Amst). 2015;1(1):103–11. pmid:27239497
  12. Soldan A, Pettigrew C, Albert M. Evaluating Cognitive Reserve Through the Prism of Preclinical Alzheimer Disease. Psychiatr Clin North Am. 2018;41(1):65–77. pmid:29412849
  13. Bauer RM, Iverson GL, Cernich AN, Binder LM, Ruff RM, Naugle RI. Computerized Neuropsychological Assessment Devices: Joint Position Paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology. Clin Neuropsychol. 2012;26(2):177–96. pmid:22394228
  14. Sliwinski MJ, Mogle JA, Hyun J, Munoz E, Smyth JM, Lipton RB. Reliability and Validity of Ambulatory Cognitive Assessments. Assessment. 2018;25(1):14–30. pmid:27084835
  15. Öhman F, Hassenstab J, Berron D, Schöll M, Papp KV. Current advances in digital cognitive assessment for preclinical Alzheimer's disease. Alzheimers Dement (Amst). 2021;13(1):e12217. pmid:34295959
  16. Miller JB, Barr WB. The Technology Crisis in Neuropsychology. Arch Clin Neuropsychol. 2017;32(5):541–54. pmid:28541383
  17. Feenstra HEM, Vermeulen IE, Murre JMJ, Schagen SB. Online cognition: factors facilitating reliable online neuropsychological test results. Clin Neuropsychol. 2016;31(1):59–84. pmid:27266677
  18. Feenstra HEM, Murre JMJ, Vermeulen IE, Kieffer JM, Schagen SB. Reliability and validity of a self-administered tool for online neuropsychological testing: The Amsterdam Cognition Scan. J Clin Exp Neuropsychol. 2017;40(3):253–73. pmid:28671504
  19. Domen AC, van de Weijer SCF, Jaspers MW, Denys D, Nieman DH. The validation of a new online cognitive assessment tool: The MyCognition Quotient. Int J Methods Psychiatr Res. 2019;28(3):e1775. pmid:30761648
  20. Tsoy E, Zygouris S, Possin KL. Current State of Self-Administered Brief Computerized Cognitive Assessments for Detection of Cognitive Disorders in Older Adults: A Systematic Review. J Prev Alzheimers Dis. 2021;8(3):267–76. pmid:34101783
  21. Williams JE, McCord DM. Equivalence of standard and computerized versions of the Raven Progressive Matrices Test. Comput Human Behav. 2006;22(5):791–800.
  22. Parsey CM, Schmitter-Edgecombe M. Applications of technology in neuropsychological assessment. Clin Neuropsychol. 2013;27(8):1328–61. pmid:24041037
  23. Carpenter R, Alloway T. Computer Versus Paper-Based Testing: Are They Equivalent When it Comes to Working Memory? J Psychoeduc Assess. 2018;37(3):382–94.
  24. Morrissey S, Jeffs S, Gillings R, Khondoker M, Patel M, Fisher-Morris M, et al. The Impact of Spatial Orientation Changes on Driving Behavior in Healthy Aging. J Gerontol B Psychol Sci Soc Sci. 2024;79(3). pmid:38134234
  25. Coughlan G, Laczó J, Hort J, Minihane AM, Hornberger M. Spatial navigation deficits—overlooked cognitive marker for preclinical Alzheimer disease? Nat Rev Neurol. 2018;14(8):496–506. pmid:29980763
  26. Tu S, Wong S, Hodges JR, Irish M, Piguet O, Hornberger M. Lost in spatial translation—A novel tool to objectively assess spatial disorientation in Alzheimer's disease and frontotemporal dementia. Cortex. 2015;67:83–94. pmid:25913063
  27. Rattanabannakit C, Risacher SL, Gao S, Lane KA, Brown SA, McDonald BC, et al. The Cognitive Change Index as a Measure of Self and Informant Perception of Cognitive Decline: Relation to Neuropsychological Tests. J Alzheimers Dis. 2016;51(4):1145. pmid:26923008
  28. Hegarty M, Richardson AE, Montello DR, Lovelace K, Subbiah I. Development of a self-report measure of environmental spatial ability. Intelligence. 2002;30(5):425–47.
  29. Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, et al. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. 2005;53(4):695–9. pmid:15817019
  30. Reitan RM. Validity of the Trail Making Test as an indicator of organic brain damage. Percept Mot Skills. 1958;8(3):271–6.
  31. Shin MS, Park SY, Park SR, Seol SH, Kwon JS. Clinical and empirical applications of the Rey–Osterrieth Complex Figure Test. Nat Protoc. 2006;1(2):892–9. pmid:17406322
  32. Corsi PM. Human memory and the medial temporal region of the brain [dissertation]. Montreal: McGill University; 1972.
  33. Warrington EK, James M. The Visual Object and Space Perception Battery. Bury St Edmunds: Thames Valley Test Company; 1991.
  34. Trustram Eve C, De Jager CA. Piloting and validation of a novel self-administered online cognitive screening tool in normal older persons: the Cognitive Function Test. Int J Geriatr Psychiatry. 2014;29(2):198–206. pmid:23754255
  35. McGraw KO, Wong SP. Forming Inferences about Some Intraclass Correlation Coefficients. Psychol Methods. 1996;1(1):30–46.
  36. Burnham SC, Raghavan N, Wilson W, Baker D, Ropacki MT, Novak G, et al. Novel Statistically-Derived Composite Measures for Assessing the Efficacy of Disease-Modifying Therapies in Prodromal Alzheimer's Disease Trials: An AIBL Study. J Alzheimers Dis. 2015;46(4):1079–89. pmid:26402634
  37. De Roeck EE, De Deyn PP, Dierckx E, Engelborghs S. Brief cognitive screening instruments for early detection of Alzheimer's disease: a systematic review. Alzheimers Res Ther. 2019;11(1). pmid:30819244
  38. Paterson TSE, Sivajohan B, Gardner S, Binns MA, Stokes KA, Freedman M, et al. Accuracy of a Self-Administered Online Cognitive Assessment in Detecting Amnestic Mild Cognitive Impairment. J Gerontol B Psychol Sci Soc Sci. 2022;77(2):341–50. pmid:34333629
  39. Galasko D, Abramson I, Corey-Bloom J, Thal LJ. Repeated exposure to the Mini-Mental State Examination and the Information-Memory-Concentration Test results in a practice effect in Alzheimer's disease. Neurology. 1993;43(8):1559–63. pmid:8351011
  40. Cooley SA, Heaps JM, Bolzenius JD, Salminen LE, Baker LM, Scott SE, et al. Longitudinal change in performance on the Montreal Cognitive Assessment in older adults. Clin Neuropsychol. 2015;29(6):824–41. pmid:26373627
  41. Calamia M, Markon K, Tranel D. Scoring higher the second time around: meta-analyses of practice effects in neuropsychological assessment. Clin Neuropsychol. 2012;26(4):543–70. pmid:22540222
  42. Duff K. One week practice effects in older adults: Tools for assessing cognitive change. Clin Neuropsychol. 2014;28(5):714. pmid:24882553
  43. Bell L, Lamport DJ, Field DT, Butler LT, Williams CM. Practice effects in nutrition intervention studies with repeated cognitive testing. Nutr Healthy Aging. 2018;4(4):309. pmid:29951591
  44. Coughlan G, Coutrot A, Khondoker M, Minihane AM, Spiers H, Hornberger M. Toward personalized cognitive diagnostics of at-genetic-risk Alzheimer's disease. Proc Natl Acad Sci U S A. 2019;116(19):9285–92. pmid:31015296
  45. Laplume AA, Anderson ND, McKetton L, Levine B, Troyer AK. When I'm 64: Age-Related Variability in Over 40,000 Online Cognitive Test Takers. J Gerontol B Psychol Sci Soc Sci. 2022;77(1):104–17.
  46. Hamilton KR, Littlefield AK, Anastasio NC, Cunningham KA, Fink LHL, Wing VC, et al. Rapid-Response Impulsivity: Definitions, Measurement Issues, and Clinical Implications. Personal Disord. 2015;6(2):168. pmid:25867840
  47. Weafer J, Baggott MJ, de Wit H. Test-retest reliability of behavioral measures of impulsive choice, impulsive action, and inattention. Exp Clin Psychopharmacol. 2013;21(6):475–81. pmid:24099351
  48. Guo Z, Chen R, Liu X, Zhao G, Zheng Y, Gong M, et al. The impairing effects of mental fatigue on response inhibition: An ERP study. PLoS One. 2018;13(6). pmid:29856827
  49. Seidler RD, Bernard JA, Burutolu TB, Fling BW, Gordon MT, Gwin JT, et al. Motor Control and Aging: Links to Age-Related Brain Structural, Functional, and Biochemical Effects. Neurosci Biobehav Rev. 2010;34(5):721. pmid:19850077
  50. Coughlan G, Puthusseryppady V, Lowry E, Gillings R, Spiers H, Minihane AM, et al. Test-retest reliability of spatial navigation in adults at-risk of Alzheimer's disease. PLoS One. 2020;15(9). pmid:32960930
  51. Ohlhauser L, Parker AF, Smart CM, Gawryluk JR. White matter and its relationship with cognition in subjective cognitive decline. Alzheimers Dement (Amst). 2019;11:28–35. pmid:30581973
  52. Cerman J, Andel R, Laczo J, Vyhnalek M, Nedelska Z, Mokrisova I, et al. Subjective Spatial Navigation Complaints—A Frequent Symptom Reported by Patients with Subjective Cognitive Decline, Mild Cognitive Impairment and Alzheimer's Disease. Curr Alzheimer Res. 2017;15(3):219–28.
  53. Chen Q, Chen F, Long C, Zhu Y, Jiang Y, Zhu Z, et al. Spatial navigation is associated with subcortical alterations and progression risk in subjective cognitive decline. Alzheimers Res Ther. 2023;15(1):86. pmid:37098612
  54. Reisberg B, Gauthier S. Current evidence for subjective cognitive impairment (SCI) as the pre-mild cognitive impairment (MCI) stage of subsequently manifest Alzheimer's disease. Int Psychogeriatr. 2008;20(1):1–16. pmid:18072981
  55. Jessen F, Amariglio RE, Buckley RF, van der Flier WM, Han Y, Molinuevo JL, et al. The characterisation of subjective cognitive decline. Lancet Neurol. 2020;19(3):271. pmid:31958406
  56. Ng KP, Chiew HJ, Lim L, Rosa-Neto P, Kandiah N, Gauthier S. The influence of language and culture on cognitive assessment tools in the diagnosis of early cognitive impairment and dementia. Expert Rev Neurother. 2018;18(11):859–69. pmid:30286681
  57. Sexton C, Snyder HM, Chandrasekaran L, Worley S, Carrillo MC. Expanding Representation of Low and Middle Income Countries in Global Dementia Research: Commentary From the Alzheimer's Association. Front Neurol. 2021;12:271. pmid:33790849
  58. Albert MS, DeKosky ST, Dickson D, Dubois B, Feldman HH, Fox NC, et al. The diagnosis of mild cognitive impairment due to Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimers Dement. 2011;7(3):270–9. pmid:21514249
  59. Lupton MK, Robinson GA, Adam RJ, Rose S, Byrne GJ, Salvado O, et al. A prospective cohort study of prodromal Alzheimer's disease: Prospective Imaging Study of Ageing: Genes, Brain and Behaviour (PISA). Neuroimage Clin. 2021;29:102527. pmid:33341723
  60. Fawns-Ritchie C, Deary IJ. Reliability and validity of the UK Biobank cognitive tests. PLoS One. 2020;15(4):e0231627. pmid:32310977
  61. Lee Meeuw Kjoe PR, Agelink van Rentergem JA, Vermeulen IE, Schagen SB. How to Correct for Computer Experience in Online Cognitive Testing? Assessment. 2021;28(5):1247–55. pmid:32148072