Abstract
Background
Attention/processing speed deficits, with or without executive function and episodic memory deficits, have been suggested as a relatively characteristic cognitive profile of people with post-COVID condition (PCC). Most studies have used standardized paper-and-pencil neuropsychological assessment. Sensitive and applicable tests are needed to improve the diagnostic work-up of patients with PCC.
Objectives
In this study, we aimed to investigate the dimensions of a comprehensive computerized neuropsychological battery and to characterize the cognitive profile of patients with PCC.
Materials and methods
Five hundred and eight participants were enrolled in the study (PCC: n = 227; healthy controls, HC: n = 281) and underwent cognitive assessment focused on attention, concentration, executive functions, and episodic memory. We conducted a multi-group confirmatory factor analysis. Factor scores were obtained to compare the PCC and HC groups, and partial invariance analysis was performed to identify relevant cognitive processes that differentiate the two groups.
Results
The proposed four-factor model showed adequate fit indices. There were differences in attention, concentration, and executive functions factor scores with small to moderate effect sizes and with a particular implication of attention processes based on measurement invariance analysis. Impairments in reaction times and divided attention were especially relevant in patients with PCC.
Conclusions
The battery revealed four factors representing attention, concentration, executive functions, and episodic memory. The PCC group performed worse than the HC group in attention, concentration, and executive functions. These findings suggest the validity of computerized neuropsychological assessment, which could be particularly useful in PCC.
Citation: Delgado-Alonso C, Matias-Guiu JA, Alvarado JM, Diez-Cirarda M, Oliver-Mas S, Valiente-Gordillo E, et al. (2025) Computerized neuropsychological assessment in post-COVID condition. PLoS One 20(5): e0322304. https://doi.org/10.1371/journal.pone.0322304
Editor: Lucette A. Cysique, The University of New South Wales, Neuroscience Research Australia, AUSTRALIA
Received: October 18, 2024; Accepted: March 19, 2025; Published: May 9, 2025
Copyright: © 2025 Delgado-Alonso et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The conditions of our ethics approval do not permit public archiving of anonymized study data. Readers seeking access to the data should contact the corresponding or the local ethics committee of Hospital Clinico San Carlos, Madrid (E-mail: ceic.hcsc@salud.madrid.org). Access can be granted only to named individuals in accordance with ethical procedures governing the reuse of sensitive clinical data.
Funding: The study was partially supported by Schuhfried through the project 20/022-E of normative data collection. This research has also received funding from the Nominative Grant FIBHCSC 2020 COVID-19 (Department of Health, Community of Madrid). Jordi A. Matias-Guiu is supported by Instituto de Salud Carlos III through the projects INT20/00079 and INT23/00017 (co-funded by European Union). Maria Diez-Cirarda is supported by Instituto de Salud Carlos III (ISCIII) through the Sara Borrell postdoctoral fellowship (Grant CD22/00043), co-funded by the European Union. Silvia Oliver-Mas is supported by the Fundación para el Conocimiento madri+d through project G63-HEALTHSTARPLUS-HSP4. The sponsors of the study did not take part in the design and conduct of the study; collection, management, analysis, and interpretation of the data; writing and review of the report; or the decision to submit the article for publication.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Post-COVID condition (PCC) has been defined as the development of symptoms during or after SARS-CoV-2 infection that persist beyond 12 weeks [1]. PCC comprises a wide range of symptoms with a relevant impact on functioning and quality of life, and important socio-economic consequences [2]. According to previous meta-analyses, among the most commonly reported symptoms are chronic fatigue, dyspnea, pain, depression, anxiety, headache, insomnia, smell or taste disorders, and subjective cognitive complaints (e.g., brain fog, memory issues) [3,4].
Objective cognitive impairment has been previously demonstrated and includes mild to moderate deficits in attention, executive function, episodic memory, visuospatial function, and language [5–8]. Some studies have suggested a relatively characteristic cognitive profile of PCC, consisting of attention/processing speed deficits with or without episodic memory and executive function impairment [9,10].
The cognitive assessment of people with PCC is key to defining an individual cognitive profile that guides cognitive training/rehabilitation, supports enrollment in clinical trials, and justifies possible job adaptations and/or disability benefits [9].
Paper-and-pencil tests have been most commonly used to assess the cognitive symptoms of PCC [11]. Different strategies have been adopted for the cognitive assessment of people with PCC, including screening tests (e.g., Montreal Cognitive Assessment, MoCA), telephone cognitive evaluations (e.g., Telephone Interview for Cognitive Status, TICS), and internet-based assessments (e.g., iPad-based online neuropsychological tests) [8,12–14]. However, cognitive screening tests do not allow an in-depth description of the processes underlying cognitive impairment, and dementia screening tests might not be suitable for people with PCC, whose deficits require more sensitive tests than those used for dementia. In addition, conducting a cognitive assessment in uncontrolled settings could bias the results. As in other conditions, remote assessments (telephone or internet) are subject to factors such as environmental variability and lack of direct supervision, which can affect performance. When cognitive impairment is suspected, the administration conditions of computerized tasks should therefore be considered for their impact on cognitive performance, and the presence of an examiner may be necessary [15,16]. For these reasons, a comprehensive neuropsychological battery conducted in person, in a controlled and structured setting, seems preferable.
While paper-and-pencil-based assessments are frequently administered during a comprehensive neuropsychological evaluation, computerized assessments could have important advantages. First, computerized assessments are particularly useful for time-dependent variables that cannot be easily and reliably collected by traditional paper-and-pencil-based evaluations (e.g., reaction times). Thus, computerized tests may be more sensitive for those cognitive processes where time measures play a significant role, such as attention processes [17]. Second, the administration is automated. Although a supervisor is always highly recommended during the assessment, it is possible to conduct more than one assessment at the same time. Third, they facilitate real-time scoring, minimizing errors and generating reports immediately for faster clinical decision-making. Fourth, as standardized protocols, they reduce intra- and inter-rater variability. Finally, they offer different parallel versions to prevent practice effects in the case of repeated measures, improving accuracy and validity [18]. Moreover, computerized tools can also be useful in the rehabilitation of cognitive deficits in various neurological disorders. Online interventions have been shown to be useful in the treatment of prolonged cognitive impairment (e.g., Long COVID) [19]. Despite these advantages, it is necessary to analyze the validity of these batteries by assessing their dimensionality and factorial structure, because differences in the administration procedure could produce different outcomes.
In the context of PCC, computerized tests have been shown to be useful for evaluating patients on a large scale [20], and may also be more sensitive than classic paper-and-pencil tasks and appropriate for assessing specific domains such as attention or processing speed [17,21].
An inter-group comparison between cognitively healthy control (HC) and PCC groups on a test set or neuropsychological battery is commonly used for the detection of cognitive deficits. However, this approach is problematic when several scores are available for comparison: it either inflates the type I error or requires corrections for multiple comparisons, such as the Bonferroni correction, which tends to be very conservative when many comparisons are made [22]. In this regard, confirmatory factor analysis (CFA) could be especially helpful, as a technique that condenses several test scores into a smaller number of factors (also called latent variables), following the theoretical framework that suggests the way test scores are grouped. CFA can explore the latent structure of a neuropsychological battery by describing the number of factors and the pattern of test-factor relationships under certain conditions [23,24]. For instance, it can describe how scores on divided, alert, and selective attention tests relate to the factor "attention". Thus, CFA is understood as a source of validity evidence based on internal structure [25].
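The multiple-comparisons problem described above can be made concrete with a small illustrative sketch (not part of the study's analyses, which were run in R): the family-wise error rate grows quickly with the number of independent tests, and the Bonferroni correction controls it by shrinking the per-test threshold.

```python
# Illustrative sketch: family-wise error rate (FWER) across k independent
# tests at alpha = .05, and the Bonferroni-adjusted per-test threshold.

def fwer(k: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive across k independent tests."""
    return 1 - (1 - alpha) ** k

def bonferroni_threshold(k: int, alpha: float = 0.05) -> float:
    """Per-test significance threshold under the Bonferroni correction."""
    return alpha / k

# With ~20 separate test scores compared between groups, the chance of at
# least one spurious "significant" difference is about 64%.
print(round(fwer(20), 2))        # 0.64
print(bonferroni_threshold(20))  # each test must reach p < .0025
```

This illustrates why condensing many correlated test scores into a few latent factors, as CFA does, is an attractive alternative to score-by-score comparisons.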
In this study, we conducted a computerized neuropsychological assessment based on a comprehensive neuropsychological assessment administered using the Vienna Test System (VTS) (Schuhfried GmbH; Moedling, Austria) [26] in HC and PCC groups, covering the cognitive domains characteristic of the PCC profile.
The primary aim was to explore the factor structure of the VTS in a combined sample of PCC and HC participants, and the secondary aim was to examine specific deficits in PCC. Specifically, we aimed (i) to conduct a multi-group CFA, (ii) to compare HC and PCC groups on their factor scores, and (iii) to describe the cognitive processes underlying measurement invariance based on factorial invariance analysis.
We hypothesized that a four-factor model would show adequate fit indices and would be shared by both groups, with differences in their factor scores. We also expected a failure to find full metric and scalar invariance, pointing to measures of impaired cognitive processes in PCC.
Methods
Participants
Five hundred and eight participants, whose first language was Spanish, were recruited from the Department of Neurology at Hospital Clinico San Carlos. The total sample consisted of cognitively healthy control participants (HC) (n = 281) and participants with post-COVID condition (PCC) (n = 227). The mean time from COVID-19 onset to assessment was 17.59 ± 8.86 months.
Inclusion criteria for participants with PCC were as follows: (1) a diagnosis of COVID-19 (positive RT-PCR) at least 6 months before inclusion in the study, (2) a diagnosis of PCC according to the World Health Organization criteria [1], and (3) cognitive complaints related to the SARS-CoV-2 infection. The HC group met the following criteria: (1) age, sex, and educational level in accordance with the study population, (2) absence of cognitive impairment (CDR score of 0) [27], and (3) absence of functional impairment, based on a score of 0 on the Functional Activities Questionnaire [28]. The exclusion criteria for all participants were: (1) current medical treatment for, or diagnosis of, any neurological, psychiatric, or medical disorder with potential impact on cognition (e.g., epilepsy, major depression), (2) prior history of substance abuse (e.g., alcohol), and (3) any physical difficulty that could bias scores (e.g., hearing deficits).
All patients included in the study met the WHO criteria for PCC [1]. For patient selection, both physical and neurological examinations were performed. Patients with any neurological disorder involving cognitive impairment (including mild cognitive impairment) or other systemic or psychiatric disorders potentially impacting the findings were excluded. All patients underwent MRI to exclude other causes, and additional tests (e.g., CSF biomarkers or FDG-PET) were performed in selected cases based on clinical suspicion. Information on the clinical characteristics of the disease was cross-checked with the hospital and regional medical charts (e.g., hospital admission or ICU admission). Patients were selected using a 6-month criterion to separate possible symptoms occurring after the acute or subacute phase of the disease (inflammation) from persistent symptoms. This criterion was established to separate possible cases of spontaneous recovery from patients in whom the symptoms have become chronic [29].
Neuropsychological measures
All participants completed the computerized neuropsychological battery Vienna Test System (VTS) (Schuhfried GmbH; Moedling, Austria) [26] to assess attention, concentration, executive functions, and episodic memory using the following tests: the Figural Memory Test (FGT) (S11 form), the Tower of London (TOL-F) (S1 form), the Trail Making Test (TMT) (S1 form), a variant of the go/no-go task (INHIB) (S13 form), and the N-back verbal task (NBV) (S1 form), as part of the Cognitive Basic Testing (COGBAT) battery [30]. Additionally, both groups were assessed using the Reaction Test (RT) (S3 form) [31] and the Perception and Attention Function Battery (WAF) (S1 form) [32] to evaluate specific attention processes, such as alertness, vigilance and sustained attention, divided attention, selective attention, smooth pursuit eye movements, and spatial attention. To assess concentration, the Cognitrone test (COG) (S11 form) [33] and the Determination Test (DT) (S1 form) [34] were administered. The computerized battery was administered in person at our hospital by trained neuropsychologists, in two sessions on separate days, each lasting approximately 80 minutes. All measures for each test are shown in Table 1. Furthermore, we administered the Beck Depression Inventory (BDI) [35] and the Pittsburgh Sleep Quality Index (PSQI) [36] to evaluate depressive symptoms and sleep quality. Additionally, the Modified Fatigue Impact Scale (MFIS) [37] was administered at the time of assessment to evaluate the impact of fatigue over the past 4 weeks.
We distinguished between ‘objective cognitive impairment,’ which refers to impairment confirmed through neuropsychological assessment, and ‘subjective cognitive complaints,’ which refer to self-reported cognitive issues irrespective of test results.
Procedure
A cross-sectional study was conducted with the approval of the Ethics Committee of Hospital Clinico San Carlos, and all participants provided written informed consent. Participants were consecutively recruited through the Department of Neurology at the Hospital Clinico San Carlos between 1 November 2020 and 31 July 2022.
The cognitive assessment was conducted in Spanish by trained neuropsychologists in two independent sessions lasting approximately 80 minutes each.
We conducted a multi-group confirmatory factor analysis (CFA) considering the main cognitive variables of the VTS battery to study the validity evidence based on internal structure. Factor scores were obtained to compare the PCC and HC groups (Fig 1) and partial invariance analysis was performed to identify relevant cognitive processes that differentiate the two groups.
Different groups may be considered in the same CFA. Under this condition, a multi-group CFA is conducted to check whether the neuropsychological tests represent the same theoretical constructs across groups (measurement invariance) [38]. The different levels of measurement invariance can contribute to a better understanding of the cognitive processes underlying PCC. In this regard, HC and PCC groups can share the same latent structure of a particular neuropsychological battery (configural invariance), but not necessarily all the tests have the same relationship strength with their factors (metric invariance or weak invariance). For instance, the strength of the relationship between specific measures of attention and the attention factor may vary between groups. Likewise, at the same level of the attention factor, HC and PCC groups might not score at the same level on each administered attention test (scalar invariance or strong invariance). Other invariance levels, such as residual invariance, could be of interest but are overly restrictive [38]. A failure to find a certain level of invariance, and the identification of the test scores responsible, can help to understand the cognitive profile behind PCC. Additionally, this approach is especially relevant considering that there is no gold standard for the neuropsychological diagnosis of PCC.
Determination of the most likely SARS-CoV-2 variants and vaccination status
The date of symptom onset of the acute infection was used to determine the wave of infection: one hundred forty patients (61.7%) belonged to the first wave (March–May 2020), thirty-eight (16.7%) to the second wave (alpha variant, August–November 2020), seventeen (7.5%) to the third (delta, December 2020–July 2021), and thirty-two (14%) to the fourth or subsequent waves (data from the Spanish Ministry of Health, 2024 [39]).
Regarding vaccination status, 221 patients (96.4%) were not vaccinated at the date of infection. At the time of cognitive assessment, 156 patients (68.7%) were vaccinated.
Statistical analysis
Statistical analysis was performed using RStudio 4.3.1, with alpha set at 0.05. Descriptive data are presented as mean ± standard deviation or frequencies.
For comparisons between PCC and HC groups, t-tests were performed, and Pearson’s chi-squared test was used to assess the independence of categorical variables. Spearman’s correlation was used to measure relationships between quantitative variables and was categorized as very low (0–0.29), low (0.30–0.49), moderate (0.50–0.69), high (0.70–0.89), or very high (>0.89).
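The qualitative bands above can be expressed as a small helper; this is an illustrative Python sketch (the study's analyses were run in R), and the function name is hypothetical.

```python
# Hypothetical helper mirroring the correlation-strength bands in the text:
# very low (0-.29), low (.30-.49), moderate (.50-.69), high (.70-.89),
# very high (>.89). Applied to the absolute value of Spearman's rho.

def categorize_spearman(rho: float) -> str:
    """Map |rho| to the qualitative band used in this study."""
    r = abs(rho)
    if r > 0.89:
        return "very high"
    if r >= 0.70:
        return "high"
    if r >= 0.50:
        return "moderate"
    if r >= 0.30:
        return "low"
    return "very low"

print(categorize_spearman(-0.62))  # moderate (sign is ignored)
```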
The CFA was performed using the “lavaan” package [40]. We considered four latent factors and robust maximum likelihood estimation. Model fit was evaluated against the following thresholds: Comparative Fit Index (CFI) and Tucker-Lewis Index (TLI) ≥ 0.95, standardized root mean squared residual (SRMR) ≤ 0.08, and root mean square error of approximation (RMSEA) ≤ 0.06 [41]. McDonald’s omega (ω) was reported as a reliability estimator.
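The fit-evaluation rule can be summarized in a short sketch. The CFA itself was fitted in R with lavaan; this Python function only encodes the cutoffs cited above [41] and is not part of the original analysis pipeline.

```python
# Sketch of the fit-index decision rule used in the text (Hu & Bentler-style
# cutoffs): CFI and TLI at or above .95, SRMR at or below .08, RMSEA at or
# below .06. All four conditions must hold for "adequate" fit.

def adequate_fit(cfi: float, tli: float, srmr: float, rmsea: float) -> bool:
    """True if all four fit indices meet the thresholds used in this study."""
    return cfi >= 0.95 and tli >= 0.95 and srmr <= 0.08 and rmsea <= 0.06

print(adequate_fit(cfi=0.96, tli=0.95, srmr=0.05, rmsea=0.04))  # True
```

In practice, models are often judged holistically: an index narrowly missing one cutoff (e.g., TLI = 0.94) may still be considered acceptable when the remaining indices are good.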
A two-way mixed ANOVA was calculated to compare HC and PCC, considering factor scores as the within-subjects variable and group as the between-subjects variable. We checked the multi-sample sphericity assumption using Mauchly’s sphericity test and Box’s test, and the homogeneity of variance assumption was checked using Levene’s test. In the case of not assuming sphericity, the multivariate F test was used. Post hoc comparisons were adjusted using Bonferroni’s correction for multiple comparisons. Partial eta squared was used as a measure of effect size, considered as small (partial η2 = 0.01), medium (partial η2 = 0.06), and large (partial η2 = 0.14).
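The effect-size convention above can be made explicit. This is an illustrative sketch (the ANOVA was computed in R); it assumes the standard identity recovering partial eta squared from an F ratio, η²p = (F·df1)/(F·df1 + df2), together with the small/medium/large benchmarks quoted in the text.

```python
# Sketch: partial eta squared from an F ratio (assumed standard identity),
# plus the benchmark labels used in the text (.01 small, .06 medium,
# .14 large).

def partial_eta_squared(f: float, df1: int, df2: int) -> float:
    """eta_p^2 = (F * df1) / (F * df1 + df2)."""
    return (f * df1) / (f * df1 + df2)

def effect_size_label(eta2: float) -> str:
    if eta2 >= 0.14:
        return "large"
    if eta2 >= 0.06:
        return "medium"
    if eta2 >= 0.01:
        return "small"
    return "negligible"

# e.g., an F(1, 506) = 37.29 contrast corresponds to a medium effect
print(round(partial_eta_squared(37.29, 1, 506), 3))  # 0.069
print(effect_size_label(partial_eta_squared(37.29, 1, 506)))  # medium
```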
We conducted a partial factorial invariance analysis to investigate the role of specific cognitive processes, based on VTS scores, in measurement invariance. Particularly, we focused on configural, metric (weak), and scalar (strong) invariance. Chi-squared tests and the difference in the comparative fit index (CFI) (cutoff point ΔCFI < .01) were used to determine substantial increases in fit [42]. The proposed factorial model was fitted independently for PCC and HC to study configural invariance, reporting similar model fit indices. The identification of potential invariance scores was performed by inspecting the modification indices for metric and scalar invariance.
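The ΔCFI criterion for nested invariance models [42] reduces to a simple comparison; the sketch below (illustrative Python, with hypothetical CFI values, rather than the R code actually used) shows the decision rule applied when moving from a less to a more constrained model.

```python
# Sketch of the delta-CFI rule [42]: when a more constrained model
# (e.g., metric vs. configural) lowers CFI by .01 or more, that level of
# measurement invariance is rejected.

def invariance_holds(cfi_less_constrained: float,
                     cfi_more_constrained: float,
                     cutoff: float = 0.01) -> bool:
    """True if the CFI drop stays below the cutoff (invariance supported)."""
    return (cfi_less_constrained - cfi_more_constrained) < cutoff

# hypothetical values: configural CFI = .950, metric CFI = .945 -> supported
print(invariance_holds(0.950, 0.945))  # True
# a larger drop to .935 would reject metric invariance
print(invariance_holds(0.950, 0.935))  # False
```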
Results
Characteristics of the sample
The main demographic and clinical characteristics are described in Table 2. There were no significant differences between PCC and HC groups in age (t = 0.25, p = .80) or years of education (t = 1.40, p = .16). The percentage of females was higher in the PCC group (78% vs 56.9%, χ2(1) = 24.3, p < .001) (Table 2).
Confirmatory factor analysis
A CFA was conducted based on the main cognitive variables of the computerized neuropsychological battery VTS (Fig 2). The four-factor model showed adequate fit measures: χ2(183) = 362.32, p < .001; CFI = 0.95, TLI = 0.94, SRMR = 0.05, and RMSEA = 0.044 (CI: 0.039–0.049). McDonald’s omega was 0.92 for memory, 0.87 for attention processes, 0.70 for executive functions, and 0.65 for concentration.
Comparisons between PCC and HC
Factor scores were obtained to compare PCC and HC groups in attention, concentration, executive functions, and episodic memory. We found a significant interaction effect (group x factor scores) (multivariate F(3,504) = 16.96, p < .001, partial η2 = .092). Post-hoc comparisons showed worse performance by the PCC group compared to HC in Attention Processes (F(1,506) = 37.29, p < .001, partial η2 = .069), Concentration (F(1,506) = 14.58, p < .001, partial η2 = .028), and Executive Function (F(1,506) = 5.6, p = .018, partial η2 = .011), but not in the Memory score (F(1,506) = 0.29, p = .59, partial η2 = .001).
Correlations between factor scores and clinical characteristics of the PCC sample are shown in Fig 3. There was no interaction effect for hospital admission (hospital admission x factor scores) (multivariate F(3,222) = 0.70, p = .555, partial η2 = .009) or main effect (F(1,224) = 0.99, p = .32, partial η2 = .004). Similarly, we did not find a significant interaction effect for ICU admission (ICU admission x factor scores) (multivariate F(3,210) = 0.12, p = .94, partial η2 = .002) or a main effect (F(1,212) = 0.043, p = .83, partial η2 < .001).
MFIS: Modified Fatigue Impact Scale; BDI: Beck Depression Inventory; PSQI: Pittsburgh Sleep Quality Index. *: significant correlations after Bonferroni’s correction.
Cognitive processes underlying measurement invariance
The proposed model was fitted for the HC and PCC groups independently. Both groups showed similar model fit indices, considering the reduction in sample size. In the HC group, χ2(183) = 320.41, p < .001; CFI = 0.95, TLI = 0.94, SRMR = 0.05, and RMSEA = 0.052 (CI: 0.043–0.060). In the PCC group, χ2(183) = 277.64, p < .001; CFI = 0.93, TLI = 0.92, SRMR = 0.07, and RMSEA = 0.048 (CI: 0.039–0.056).
The results of the measurement invariance and partial measurement invariance are shown in Table 3. We found significant contributions of RT-motor scores in metric invariance (loadings) and WAF-divided and RT-reaction scores in scalar invariance (intercepts).
Discussion
In the present study, we aimed to explore the internal structure of a computerized neuropsychological battery administered using the VTS in participants with PCC and HC. Based on the composition of the computerized battery, we proposed a four-factor model encompassing attention processes, concentration, executive functions, and memory, which demonstrated adequate fit indices.
The four-factor model aligns with the cognitive processes assessed by the computerized battery. The Perception and Attention Function Battery (WAF) and the Reaction Test (RT) contribute notably, facilitating the evaluation of different attentional processes and reaction times that are challenging to measure with traditional paper-and-pencil assessments. The Cognitrone (COG) test and the Determination Test (DT) were specifically designed to assess concentration as a single dimension closely related to attention [33,34]. Executive function, a broad domain encompassing high-level processes essential for adapting to novel or complex situations [43], is assessed through several tests, including the Trail Making Test (TMT), the Tower of London (TOL-F), a variant of the go/no-go task (INHIB), and the N-Back Verbal task (NBV). These tests cover a range of cognitive processes, such as mental flexibility, planning, inhibition, and working memory. Finally, the Figural Memory Test (FGT) was designed to assess episodic memory, and its observed variables showed the highest loadings in the model. As expected, the four factors were highly correlated, reflecting the interrelatedness of the cognitive domains assessed.
The computerized battery showed evidence of partial strong invariance after three adjustments. In addition, the study of measurement invariance revealed specific cognitive impairments in the PCC group. Both PCC and HC groups shared the same factor structure, supporting configural invariance. The strength of the relationships between tests and factors was the same across groups, except for RT-motor scores, suggesting partial metric invariance and different loadings between RT-motor scores and the Attention Processes factor depending on whether the condition was PCC or HC. With the exception of WAF-divided and RT-reaction scores, the VTS battery showed partial scalar invariance when the model constrained test intercepts to be equivalent across groups for all indicators. WAF-divided and RT-reaction indicators were linked to the Attention Processes factor similarly in both PCC and HC groups; however, their intercepts differed, so observed scores would vary at a given level of Attention Processes [38].
Significant differences were found between PCC and HC groups in Attention Processes, Concentration, and Executive Functions, with worse performance in the PCC group. The effect sizes were medium for Attention Processes and small-to-medium for Concentration and Executive Functions. Interestingly, no significant difference was found in Memory scores. Previous studies have highlighted attentional deficits in PCC [4,5] and have suggested them as a characteristic feature of PCC [9,10], even when other cognitive domains, such as memory, executive function, language, and visuospatial skills, are also studied. Attention has been previously reported as the most frequently impaired cognitive domain, followed by episodic memory, executive function, visuospatial function, and language. In our study, divided attention and reaction time measures had important implications in the study of measurement invariance. The differences observed in divided attention and reaction time measures are in line with the commonly reported “brain fog”, and a decrement in reaction time tasks has been described previously in PCC [44]. Interestingly, cognitive slowing and difficulties in multitasking are common symptoms reported by patients with PCC [45]. In contrast to some studies, we did not find any difference in episodic memory [4,5]. This finding could be explained by the fact that decrements in episodic memory in PCC are generally smaller than in attention, and by our use of a visual memory task, which could be less sensitive than verbal episodic memory tests. Recent studies have shown that visual memory may be less sensitive for detecting cognitive deficits compared to the more prevalent verbal memory impairment [46,47].
Another important finding is that the factor scores showed low, mostly nonsignificant correlations with non-cognitive characteristics of the PCC sample (e.g., depression, anxiety, fatigue). The influence of neuropsychiatric symptoms on cognitive performance in PCC is still a controversial issue in the literature [6,9]. Future studies comparing different cognitive tasks and administration procedures could be of interest to evaluate whether computerized assessments show different vulnerabilities to the effects of anxiety or depression in this setting compared with a standard cognitive assessment conducted by a neuropsychologist.
Our study has some limitations that should be considered. First, both PCC and HC groups had an unequal sex distribution, with more females in the PCC group, reflecting the prevalence of PCC [48]. Second, we did not consider other cognitive domains, such as language or visuospatial skills. However, these domains seem to have a less prominent role in PCC cognitive dysfunction. Third, data on antiretroviral treatment during the acute phase were not included because they were not available. However, only 23.9% of patients in our sample were hospitalized, and because these therapies were mainly used in hospitalized patients, we believe this should not impact the results. Fourth, we did not include performance validity tests [49], which could be of interest in future studies.
In conclusion, the internal structure of the computerized battery revealed four factors representing attention, concentration, executive functions, and episodic memory. The PCC group performed worse than the HC group in attention, concentration, and executive function. Attentional deficits, particularly in divided attention and reaction times, were prominent findings, suggesting that these specific cognitive deficits could be hallmarks of cognitive dysfunction in PCC. Overall, our study suggests the validity of computerized neuropsychological assessment, which could be particularly useful in PCC for diagnosis, monitoring, and clinical trials.
References
- 1. Soriano JB, Murthy S, Marshall JC, Relan P, Diaz JV, WHO Clinical Case Definition Working Group on Post-COVID-19 Condition. A clinical case definition of post-COVID-19 condition by a Delphi consensus. Lancet Infect Dis. 2022;22(4):e102–7. pmid:34951953
- 2. Delgado-Alonso C, Cuevas C, Oliver-Mas S, Díez-Cirarda M, Delgado-Álvarez A, Gil-Moreno MJ, et al. Fatigue and cognitive dysfunction are associated with occupational status in Post-COVID syndrome. Int J Environ Res Public Health. 2022;19(20):13368. pmid:36293950
- 3. Lopez-Leon S, Wegman-Ostrosky T, Perelman C, Sepulveda R, Rebolledo PA, Cuapio A, et al. More than 50 long-term effects of COVID-19: a systematic review and meta-analysis. Sci Rep. 2021;11(1):16144. pmid:34373540
- 4. Premraj L, Kannapadi NV, Briggs J, Seal SM, Battaglini D, Fanning J, et al. Mid and long-term neurological and neuropsychiatric manifestations of post-COVID-19 syndrome: a meta-analysis. J Neurol Sci. 2022;434:120162. pmid:35121209
- 5. Ceban F, Ling S, Lui LMW, Lee Y, Gill H, Teopiz KM, et al. Fatigue and cognitive impairment in Post-COVID-19 Syndrome: a systematic review and meta-analysis. Brain Behav Immun. 2022;101:93–135. pmid:34973396
- 6. Delgado-Alonso C, Valles-Salgado M, Delgado-Álvarez A, Yus M, Gómez-Ruiz N, Jorquera M, et al. Cognitive dysfunction associated with COVID-19: a comprehensive neuropsychological study. J Psychiatr Res. 2022;150:40–6. pmid:35349797
- 7. Krishnan K, Miller AK, Reiter K, Bonner-Jackson A. Neurocognitive profiles in patients with persisting cognitive symptoms associated with COVID-19. Arch Clin Neuropsychol. 2022;37(4):729–37. pmid:35136912
- 8. Megari K, Thomaidou E, Chatzidimitriou E. Highlighting the neuropsychological consequences of COVID-19: evidence from a narrative review. Inquiry. 2024;61:469580241262442. pmid:39286926
- 9. Matias-Guiu JA, Herrera E, González-Nosti M, Krishnan K, Delgado-Alonso C, Díez-Cirarda M, et al. Development of criteria for cognitive dysfunction in post-COVID syndrome: the IC-CoDi-COVID approach. Psychiatry Res. 2023;319:115006. pmid:36521337
- 10. Quan M, Wang X, Gong M, Wang Q, Li Y, Jia J. Post-COVID cognitive dysfunction: current status and research recommendations for high risk population. Lancet Reg Health West Pac. 2023;38:100836. pmid:37457901
- 11. Tavares-Júnior JWL, de Souza ACC, Borges JWP, Oliveira DN, Siqueira-Neto JI, Sobreira-Neto MA, et al. COVID-19 associated cognitive impairment: a systematic review. Cortex. 2022;152:77–97. pmid:35537236
- 12. Crivelli L, Palmer K, Calandri I, Guekht A, Beghi E, Carroll W, et al. Changes in cognitive functioning after COVID-19: a systematic review and meta-analysis. Alzheimers Dement. 2022;18(5):1047–66. pmid:35297561
- 13. Raman B, Cassar MP, Tunnicliffe EM, Filippini N, Griffanti L, Alfaro-Almagro F, et al. Medium-term effects of SARS-CoV-2 infection on multiple vital organs, exercise capacity, cognition, quality of life and mental health, post-hospital discharge. EClinicalMedicine. 2021;31:100683. pmid:33490928
- 14. Woo MS, Malsy J, Pöttgen J, Seddiq Zai S, Ufer F, Hadjilaou A, et al. Frequent neurocognitive deficits after recovery from mild COVID-19. Brain Commun. 2020;2(2):fcaa205. pmid:33376990
- 15. Tsoy E, Zygouris S, Possin KL. Current state of self-administered brief computerized cognitive assessments for detection of cognitive disorders in older adults: a systematic review. J Prev Alzheimers Dis. 2021;8(3):267–76. pmid:34101783
- 16. Wild K, Howieson D, Webbe F, Seelye A, Kaye J. Status of computerized cognitive testing in aging: a systematic review. Alzheimers Dement. 2008;4(6):428–37. pmid:19012868
- 17. Martin EM, Srowig A, Utech I, Schrenk S, Kattlun F, Radscheidt M, et al. Persistent cognitive slowing in post-COVID patients: longitudinal study over 6 months. J Neurol. 2024;271:46–58.
- 18. Parsons TD, McMahan T, Kane R. Practice parameters facilitating adoption of advanced technologies for enhancing neuropsychological assessment paradigms. Clin Neuropsychol. 2018;32(1):16–41. pmid:28590154
- 19. Despoti A, Megari K, Tsiakiri A, Toumaian M, Koutzmpi V, Liozidou A, et al. Effectiveness of remote neuropsychological interventions: a systematic review. Appl Neuropsychol Adult. 2024:1–9. pmid:39067003
- 20. Hampshire A, Azor A, Atchison C, Trender W, Hellyer PJ, Giunchiglia V, et al. Cognition and memory after Covid-19 in a large community sample. N Engl J Med. 2024;390(9):806–18. pmid:38416429
- 21. Diana L, Regazzoni R, Sozzi M, Piconi S, Borghesi L, Lazzaroni E, et al. Monitoring cognitive and psychological alterations in COVID-19 patients: a longitudinal neuropsychological study. J Neurol Sci. 2023;444:120511. pmid:36473347
- 22. Hochberg Y. A sharper Bonferroni procedure for multiple tests of significance. Biometrika. 1988;75(4):800–2.
- 23. Brown TA. Confirmatory factor analysis for applied research. Guilford Publications; 2015.
- 24. Ondé D, Alvarado JM. Reconsidering the conditions for conducting confirmatory factor analysis. Span J Psychol. 2020;23:e55. pmid:33272349
- 25. Rios J, Wells C. Validity evidence based on internal structure. Psicothema. 2014;26(1):108–16. pmid:24444738
- 26. Schuhfried GmbH. Vienna Test System. Moedling, Austria: Schuhfried GmbH.
- 27. Morris JC. The Clinical Dementia Rating (CDR): current version and scoring rules. Neurology. 1993;43(11):2412–4. pmid:8232972
- 28. Olazarán J, Hoyos-Alonso MC, del Ser T, Garrido Barral A, Conde-Sala JL, Bermejo-Pareja F, et al. Practical application of brief cognitive tests. Neurologia. 2016;31(3):183–94. pmid:26383062
- 29. Correa EM, Vallespín GT. COVID persistente. Elementos básicos para el médico de atención primaria. FMC. 2022;29(9):481–9. pmid:36338437
- 30. Aschenbrenner S, Kaiser S, Pfüller U, Roesch-Ely D, Weisbrod M. Testset COGBAT. Mödling: Schuhfried; 2012.
- 31. Schuhfried G. Reaction Test. Mödling: Schuhfried; 1996.
- 32. Sturm W. Manual Wahrnehmungs- und Aufmerksamkeitsfunktionen-Batterie (WAF) (manual perception and attention function battery). Mödling: Schuhfried; 2018.
- 33. Schuhfried G. Cognitrone. Mödling: Schuhfried; 2013.
- 34. Schuhfried G. Determination Test. Mödling: Schuhfried; 2020.
- 35. Beck AT, Steer RA, Brown GK. Beck Depression Inventory. San Antonio, TX: Psychological Corporation; 1996.
- 36. Buysse DJ, Reynolds CF 3rd, Monk TH, Berman SR, Kupfer DJ. The Pittsburgh Sleep Quality Index: a new instrument for psychiatric practice and research. Psychiatry Res. 1989;28(2):193–213. pmid:2748771
- 37. Kos D, Kerckhofs E, Carrea I, Verza R, Ramos M, Jansa J. Evaluation of the Modified Fatigue Impact Scale in four different European countries. Mult Scler. 2005;11(1):76–80. pmid:15732270
- 38. Avila JF, Rentería MA, Witkiewitz K, Verney SP, Vonk JMJ, Manly JJ. Measurement invariance of neuropsychological measures of cognitive aging across race/ethnicity by sex/gender groups. Neuropsychology. 2020;34(1):3–14. pmid:31464473
- 39. Spanish Ministry of Health. SARS-CoV-2 disease in Spain. Spanish Ministry of Health. 2024. Available from: https://www.sanidad.gob.es/areas/alertasEmergenciasSanitarias/alertasActuales/nCov/situacionActual.htm
- 40. Rosseel Y. lavaan: an R package for structural equation modeling. J Stat Softw. 2012;48(2):1–36.
- 41. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling. 1999;6(1):1–55.
- 42. Hirschfeld G, von Brachel R. Improving multiple-group confirmatory factor analysis in R: a tutorial in measurement invariance with continuous and ordinal indicators. Pract Assess Res Eval. 2014;19.
- 43. Collette F, Hogge M, Salmon E, Van der Linden M. Exploration of the neural substrates of executive functioning by functional neuroimaging. Neuroscience. 2006;139(1):209–21. pmid:16324796
- 44. Zhao S, Toniolo S, Hampshire A, Husain M. Effects of COVID-19 on cognition and brain health. Trends Cogn Sci. 2023;27(11):1053–67. pmid:37657964
- 45. Callan C, Ladds E, Husain L, Pattinson K, Greenhalgh T. “I can’t cope with multiple inputs”: a qualitative study of the lived experience of “brain fog” after COVID-19. BMJ Open. 2022;12(2):e056366. pmid:35149572
- 46. Ariza M, Cano N, Segura B, Adan A, Bargalló N, Caldú X, et al. Neuropsychological impairment in post-COVID condition individuals with and without cognitive complaints. Front Aging Neurosci. 2022;14:1029842. pmid:36337708
- 47. Llana T, Zorzo C, Mendez-Lopez M, Mendez M. Memory alterations after COVID-19 infection: a systematic review. Appl Neuropsychol Adult. 2024;31(3):292–305. pmid:36108666
- 48. Bai F, Tomasoni D, Falcinella C, Barbanotti D, Castoldi R, Mulè G, et al. Female gender is associated with long COVID syndrome: a prospective cohort study. Clin Microbiol Infect. 2022;28(4):611.e9–611.e16.
- 49. Koterba CH, Considine CM, Becker JH, Hoskinson KR, Ng R, Vargas G, et al. Neuropsychology practice guidance for the neuropsychiatric aspects of Long COVID. Clin Neuropsychol. 2024:1–29. pmid:39177216