Cognitive function in multiple sclerosis improves with telerehabilitation: Results from a randomized controlled trial

Correction

30 Jan 2018: Charvet LE, Yang J, Shaw MT, Sherman K, Haider L, et al. (2018) Correction: Cognitive function in multiple sclerosis improves with telerehabilitation: Results from a randomized controlled trial. PLOS ONE 13(1): e0192317. https://doi.org/10.1371/journal.pone.0192317 View correction

Abstract

Cognitive impairment affects more than half of all individuals living with multiple sclerosis (MS). We hypothesized that training at home with an adaptive online cognitive training program would have greater cognitive benefit than ordinary computer games in cognitively impaired adults with MS. This was a double-blind, randomized, active-placebo-controlled trial. Participants with MS were recruited through Stony Brook Medicine and randomly assigned to either the adaptive cognitive remediation (ACR) program or an active control of ordinary computer games for 60 hours over 12 weeks. Training was remotely supervised and delivered through a study-provided laptop computer. A computer-generated, blocked stratification table prepared by the study statistician provided the randomization schedule, and condition was assigned by a study technician. The primary outcome, administered by a study psychometrician, was change in a neuropsychological composite measure from baseline to study end. An intent-to-treat analysis was employed, and missing primary outcome values were imputed via the Markov Chain Monte Carlo method. Participants in the ACR (n = 74) vs. active control (n = 61) training program had significantly greater improvement in the primary outcome of cognitive functioning (mean change in composite z score±SD: 0·25±0·45 vs. 0·09±0·37, p = 0·03, estimated difference = 0·16 with 95% CI: 0·02–0·30), despite greater training time in the active control condition (active control vs. ACR, mean±SD: 56·9 ± 34·6 vs. 37·7 ± 23·8 hours played, p = 0·006). This study provides Class I evidence that adaptive, computer-based cognitive remediation accessed from home can improve cognitive functioning in MS. This telerehabilitation approach allowed for rapid recruitment and high compliance, and can be readily applied to other neurological conditions associated with cognitive dysfunction.

Trial Registration: Clinicaltrials.gov NCT02141386

Introduction

Cognitive impairment occurs in up to 70% of all patients with multiple sclerosis (MS), affecting information processing, attention, and learning [1, 2]. Despite longstanding recognition, it remains a troubling symptom without adequate treatment.

There has been limited study of cognitive rehabilitation in MS. Traditional approaches (e.g., compensatory strategies and drill-and-practice training) are costly and difficult to implement uniformly, although some trials indicate benefit [3–6]. Moreover, any cognitive training program requires multiple sessions administered across weeks or months, and the burdens of time and travel imposed by repeated clinic visits often prevent access to treatment for many patients.

Recent computer-based cognitive training (CT) approaches use technological advances to deliver learning trials that are adapted to the individual user in real time [7–9]. Rather than focusing on compensation, intensive repetitive exercise may actually improve cognitive ability at the processing level [8–10]. Further, participants may train on the computer from home [11], providing access for the many patients for whom repeated outpatient visits are not feasible. Adaptive cognitive remediation (ACR) programs have shown benefit in normal aging [12] and schizophrenia [13]. In MS, an underpowered controlled trial reported promising cognitive improvements [14]. Similarly, we found preliminary benefit in a small controlled pilot study conducted to establish our protocol for this trial [11].

In this study, we tested the efficacy of a computer-based ACR program in MS against an active control comparison of ordinary computer games. Participants with MS and cognitive impairment were screened, enrolled, and randomized to either the ACR or active control training conditions. Training was completed from home for 12 weeks using a telerehabilitation protocol based on remote monitoring and supervision.

Materials and methods

Study design

This study was a double-blind, randomized, active-placebo-controlled trial. Institutional review board (IRB) approval was provided through Stony Brook University Hospital in Stony Brook, New York. Initial IRB approval was obtained April 11, 2013. Recruitment ran from September 10, 2013 through June 5, 2015, with the last data collection on September 9, 2015. The authors confirm that all ongoing and related trials for this intervention are registered at clinicaltrials.gov, number NCT02141386. Due to an administrative error, registration occurred after enrollment was initiated.

Participants

Enrollment criteria were designed to be as inclusive as possible, given that computer-based CT programs may be available to a wide range of participants through prescription or commercial access. Participants included those meeting diagnostic criteria for MS [15] (any subtype) and scoring one or more standard deviations below published normative data on the Symbol Digit Modalities Test (SDMT) [16]. The SDMT is considered a sensitive measure of cognitive involvement in MS, with performance acting as an accurate predictor of generalized neuropsychological functioning [17].

To ensure adequate understanding of the CT instructions and valid administration of the neuropsychological testing (currently available in English only), participants were required to have a reading recognition standard score of 85 or above (Wide Range Achievement Test, Third Edition, or WRAT-3) (24) and to have learned English by age 12 years. Participants were also required to have adequate visual, auditory, and motor capacity to operate the computer software. Additional inclusion criteria were no anticipated medication changes during the three-month study period and no relapses or steroid use in the previous month.

Exclusion criteria were a history of any developmental disorder, conditions other than MS associated with cognitive impairment, a primary psychiatric disorder, any serious medical condition, alcohol or substance use disorder, and any history of use of computer-based CT developed by Posit Science (the developer of our study program). Eligible participants provided written, informed consent prior to all study procedures.

Randomization and masking

Both the participants and the study psychometricians were blinded to treatment condition. As any training could be potentially beneficial, participants were told they would be randomly assigned to one of two training programs that were being compared. A study technician, separate from the study psychometricians, followed the random allocation sequence to assign participants to a study condition and prepare study equipment and materials.

Eligible and consented participants were randomly assigned to the ACR or active control condition using stratified, permuted, block randomization generated by the study statistician (Dr. Yang). Strata were based on three levels for each of the factors of age (<35, 35 to 50, and >50 years), WRAT-3 (24) reading recognition standard score (as an estimate of premorbid intellectual functioning (25): <85, 85 to 115, and >115), and SDMT age-normative z score (as an estimate of current cognitive impairment: ≤ -3·00, -2·00 to -2·99, and -1·00 to -1·99). A designated study technician enrolled and assigned participants to either condition and prepared all study laptops. The study technician who assigned a participant's condition was not involved in the collection of data at the baseline or study end visits. Study psychometricians collected the outcome data and were blinded to participant condition.
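As an illustration only, the sketch below shows how a stratified, permuted-block allocation schedule of this kind could be generated. The block size, number of blocks per stratum, seed, and variable names are assumptions for the example and are not details reported by the study.

```python
import random
from itertools import product

# Hypothetical stratification levels mirroring the three factors described above.
AGE_LEVELS = ["<35", "35-50", ">50"]
WRAT_LEVELS = ["<85", "85-115", ">115"]
SDMT_LEVELS = ["<=-3.00", "-2.00 to -2.99", "-1.00 to -1.99"]
ARMS = ["ACR", "active control"]
BLOCK_SIZE = 4  # assumed; the block size is not reported in the text

def make_schedule(blocks_per_stratum=5, seed=2013):
    """Build a permuted-block allocation list for every stratum."""
    rng = random.Random(seed)
    schedule = {}
    for stratum in product(AGE_LEVELS, WRAT_LEVELS, SDMT_LEVELS):
        allocations = []
        for _ in range(blocks_per_stratum):
            block = ARMS * (BLOCK_SIZE // len(ARMS))  # equal arms within each block
            rng.shuffle(block)                        # permute the block
            allocations.extend(block)
        schedule[stratum] = allocations
    return schedule

# The study technician would take the next unused entry for a new participant's stratum.
schedule = make_schedule()
print(schedule[("35-50", "85-115", "-1.00 to -1.99")][:BLOCK_SIZE])
```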

The ACR condition was an online adaptive cognitive training program developed by Posit Science Corporation [18]. The program was a research version of the BrainHQ program, and offered a portal dedicated to the study, central management of study participation and metrics, and a set of 15 exercises targeting speed, attention, working memory, and executive function through the visual and auditory domains.

Each exercise was adaptive, employing a Bayesian algorithm operating on a trial-by-trial basis to increase the challenge when participants responded correctly and to reduce the challenge when they responded incorrectly; consequently, participants generally performed ~80% of trials correctly. For example, a processing speed exercise adapts presentation time to slower or faster rates, while a working memory exercise adjusts the number of items in the working memory span up or down. This design allows for an initial low level of challenge, with adjustments applied on an individualized basis as learning and abilities improve over time. This feature maintains a high level of challenge without reaching a level of failure or frustration, and consistently engages the user in task performance.
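For intuition only, the toy sketch below shows how a trial-by-trial adaptive rule can hold accuracy near ~80% correct for a processing-speed exercise. It uses a simple weighted up/down staircase with a simulated respondent; the study program itself used a Bayesian algorithm, and every name and parameter here is an illustrative assumption.

```python
import random

def adaptive_speed_block(presentation_ms=1000.0, n_trials=60, down=0.95):
    """Toy weighted staircase: make the task slightly harder after each correct
    trial and much easier after each error, so accuracy settles near ~80%.
    Illustrative only; not the study program's actual Bayesian algorithm."""
    up = down ** -4  # one error undoes four correct steps -> ~80% convergence point
    results = []
    for _ in range(n_trials):
        # Stand-in for a participant response: longer presentations are easier.
        p_correct = max(0.05, min(0.95, presentation_ms / 1200.0))
        correct = random.random() < p_correct
        presentation_ms *= down if correct else up
        results.append(correct)
    return presentation_ms, sum(results) / n_trials

final_ms, accuracy = adaptive_speed_block()
print(f"final presentation time ~ {final_ms:.0f} ms, accuracy ~ {accuracy:.0%}")
```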

Each exercise employed multiple stimulus sets designed to span relevant dimensions of real-world stimuli. For example, auditory exercises employed stimuli related to human speech perception that were initially slowed and later speeded, while visual exercises initially employed simple high contrast stimuli and later provided stimuli that were naturalistic and low contrast.

Over the course of daily trials, a participant is required to attend to stimuli, detect novel stimuli, and generally receive a reward (after a correct trial). These aspects of training were designed to repetitively engage cholinergic selective attention systems, noradrenergic novelty detection systems, and dopaminergic reward systems.

The goal of the training exercises is to improve the speed and accuracy of brain information processing while engaging neuromodulatory systems, and in doing so, allow training gains to generalize to improved cognitive performance in real-world situations. The exercises were sequenced across the duration of a participant's involvement such that auditory exercises were delivered first, with visual exercises coming later. Each daily training session consisted of four exercises chosen from an active set of six; when all of the content in an exercise was completed (typically over a number of days), that exercise was withdrawn from the schedule and the next exercise added to the active set of six.

The active control condition was a software gaming suite developed by Hoyle Puzzle and Board Games (2008 version) [19]. These games served as an active placebo control, designed to account for nonspecific treatment effects including interactions with research personnel and computer-based game playing. Previous trials have used similar games as an active control condition to demonstrate the specific effects of the targeted adaptive training program [20–22]. Participants were provided a set gaming schedule and were instructed to play games in an arrangement that mirrored the active condition, with a schedule of four games per session for 15 minutes each following a set rotational sequence [11]. The games were selected for "face validity" as having cognitive benefit (e.g., word puzzles) but did not include the active condition's program design features to drive learning or maintain user challenge.

Procedures

Participants were instructed to train in their assigned condition for one hour per day, five days per week, over 12 weeks (targeting 60 hours of total program use). The training schedule for both conditions was predetermined, with both the ACR and active control condition having rotating sets of training games. Participants had ongoing access to technical support as well as a scheduled weekly check-in phone call. The unblinded study technician conducted these weekly check-in phone calls, as they were not involved in the administration of study outcome measures.

All participants used a study-provided 17" laptop computer, peripheral equipment including headphones, and a user guide with directions for the use of their assigned program, following procedures previously described [11]. All laptops were configured with a secure monitoring software program ("WorkTime," developed by NesterSoft, Inc.) that tracked and recorded all computer activity in real time throughout the study, allowing program compliance to be monitored as it occurred. The ongoing acquisition of these data, such as the amount of time spent on games, informed the weekly phone contact.

Baseline and Study End Assessments: Neuropsychological measures were administered at baseline and repeated at study end. At the study end visit, the participants returned the study equipment. Participants were reimbursed $100 for completion of each of the two study assessment visits for a total of $200.

Outcomes

Primary Outcome—Neuropsychological Composite Score: A battery of neuropsychological tests [23–28] was administered at the baseline and study end visits (shown in Table 1), consisting of key tests that are commonly used to measure MS-related cognitive impairment. Alternate forms were used for each of the measures, with the order counterbalanced across participants. To provide a composite score of cognitive functioning, the main representative measure from each test (e.g., total learning across trials) was transformed to a z score based on published or manual-provided age-normative data. Only one key score per test was included (defined a priori) to avoid overrepresentation of any one test in the composite. The z scores for each test measure were averaged to yield a composite z score. Finally, for each participant, the difference between the baseline and study end composite z scores was calculated to serve as the primary outcome of change in cognitive performance.
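A minimal sketch of this scoring scheme, assuming each test's key score has already been converted to an age-normed z score; the column names below are illustrative stand-ins rather than the full Table 1 battery.

```python
import pandas as pd

def composite_change(baseline: pd.DataFrame, study_end: pd.DataFrame) -> pd.Series:
    """Average the age-normed z scores (one key score per test) into a composite
    at each visit, then return the study-end minus baseline difference per
    participant (the primary outcome)."""
    return study_end.mean(axis=1) - baseline.mean(axis=1)

# Toy example: two participants, three key test z scores (hypothetical values).
pre = pd.DataFrame({"pasat_z": [-1.0, -0.5], "srt_z": [-0.8, -0.2], "bvmtr_z": [-0.3, 0.1]})
post = pd.DataFrame({"pasat_z": [-0.6, -0.4], "srt_z": [-0.4, 0.0], "bvmtr_z": [0.0, 0.2]})
print(composite_change(pre, post))
```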

To explore if any cognitive change appeared to be specific to any one measure, groups were also compared on change in the individual measures of the composite.

Program Compliance: As a secondary outcome, compliance was measured by two approaches: total time and number of compliant weeks. Compliance was defined as program use of 50% or more of the target (i.e., 30 hours total) and, secondarily, as compliant weeks: having at least 50% of the total study period (6 weeks or more) with at least 50% compliance for that week (2·5 hours or more) [11].
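For clarity, the sketch below applies these two compliance definitions to hypothetical weekly usage data of the kind recorded by the monitoring software; the variable names and data layout are assumptions for the example.

```python
import pandas as pd

TARGET_HOURS = 60.0  # 1 hour/day, 5 days/week, 12 weeks
STUDY_WEEKS = 12

def compliance_outcomes(weekly_hours: pd.DataFrame) -> pd.DataFrame:
    """weekly_hours: one row per participant, one column per study week (hours
    of program use). Returns both compliance definitions described above."""
    total_hours = weekly_hours.sum(axis=1)
    compliant_weeks = (weekly_hours >= 2.5).sum(axis=1)  # >=50% of a 5-hour week
    return pd.DataFrame({
        "total_hours": total_hours,
        "met_total_hours": total_hours >= 0.5 * TARGET_HOURS,       # >=30 hours overall
        "n_compliant_weeks": compliant_weeks,
        "met_compliant_weeks": compliant_weeks >= STUDY_WEEKS / 2,  # >=6 compliant weeks
    })

# Toy example: one compliant and one non-compliant participant over 12 weeks.
hours = pd.DataFrame([[5.0] * 12, [1.0] * 12])
print(compliance_outcomes(hours))
```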

Self-Reported Change in Cognitive Functioning: As a secondary outcome measure, at study end, participants rated whether their cognition stayed the same (0), improved (1) or declined (-1) from baseline to study end.

Statistical analysis

Sample size was estimated based on published trials of Posit training programs and from an initial pilot study of n = 10 MS participants. Data were entered using the Research Electronic Database Capture (REDCap) [29] system. A data monitoring committee did not oversee the study due to the lack of risks involved with the remediation. An intent-to-treat analysis was employed and all participants were included in analyses.

As planned prior to the study, a linear mixed model adjusting for the three randomization stratification factors (age, WRAT-3 reading standard score, and SDMT z score) as covariates was used to determine whether the primary outcome score changed differently between the two study arms. The dependence structure of the two scores from each participant was modeled as compound symmetry.
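The analysis itself was carried out in SAS. Purely as an illustration, a roughly equivalent model could be specified in Python with statsmodels, where a per-participant random intercept induces a compound-symmetry covariance between the two visits; the file and column names below are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per participant per visit (baseline, study end).
# Columns (hypothetical): subject, arm, visit, composite_z, age_stratum,
# wrat3_stratum, sdmt_stratum.
df = pd.read_csv("composite_scores_long.csv")

# The arm-by-visit interaction estimates the between-arm difference in change
# from baseline to study end, adjusted for the stratification factors.
model = smf.mixedlm(
    "composite_z ~ C(arm) * C(visit) + C(age_stratum) + C(wrat3_stratum) + C(sdmt_stratum)",
    data=df,
    groups=df["subject"],  # random intercept per participant
)
print(model.fit().summary())
```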

There were five (3·74%) participants with incomplete scores. To be conservative, a sensitivity analysis was performed to check the influence of these missing data. Two methods were used: a stratified non-parametric test (the van Elteren test) on complete cases only, and a multiple imputation procedure using the Markov Chain Monte Carlo (MCMC) method [30] to impute the missing values of the primary outcome (the composite cognitive z score). Variables used to impute the missing end point values included each participant's demographic information (age, gender, race and ethnicity, years of education) and baseline test scores. These imputed data sets were then analyzed with the linear mixed model, and the results were combined according to Rubin's rules [31].
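For reference, Rubin's rules pool the per-imputation results by averaging the point estimates and adding the between-imputation variance to the average within-imputation variance. A minimal sketch, with hypothetical values standing in for the imputed-dataset estimates:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool point estimates and their variances from m imputed data sets
    using Rubin's rules."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()            # pooled point estimate
    u_bar = variances.mean()            # average within-imputation variance
    b = estimates.var(ddof=1)           # between-imputation variance
    t = u_bar + (1 + 1 / m) * b         # total variance
    return q_bar, np.sqrt(t)

# Example with 5 imputed data sets (illustrative numbers only).
est, se = pool_rubin([0.15, 0.17, 0.16, 0.14, 0.18],
                     [0.005, 0.006, 0.005, 0.006, 0.005])
print(f"pooled difference = {est:.2f}, SE = {se:.3f}")
```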

We hypothesized that two factors might predict treatment outcome for neuropsychological benefit: the degree of estimated cognitive impairment at screening, as measured by the SDMT, and the total time played in the assigned condition. Linear mixed models were applied to test whether screening SDMT or total time played contributed to the change in the neuropsychological composite at the study end visit. All analyses were performed in SAS 9.3, and the significance level was set at p < 0·05.

Results

A total of n = 135 participants were enrolled between September 10, 2013 and June 5, 2015, and all study visits were completed as of September 9, 2015. Fig 1 shows the enrollment and study flow, with n = 74 assigned to the ACR condition and n = 61 to the active control condition (with unequal samples due to the stratification requirements). Only five participants did not complete the study, and all equipment was returned without damage. Reasons for withdrawal from the trial included acute relapse (n = 1) and medical and personal issues that prevented adherence to study procedures (n = 4).

Table 2 shows the demographic and clinical features and baseline neuropsychological performances of the participants in each condition. The participants were generally middle-aged (mean age±SD: 49·80 ± 12·45) and mostly women (77·04%). The majority (67·85%) of the group had relapsing remitting MS (RRMS), overall mild to moderate neurologic disability (median EDSS score of 3·5), and mean disease duration±SD of 12·62 ± 10·46 years. Based on screening SDMT scores, there was a mild-to-moderate degree of overall impairment (mean SDMT z-score±SD of -2·10 ± 0·99).

As also indicated in Table 2, the stratification was successful and the groups were well matched across all variables at baseline, differing significantly only in gender, with more men assigned to the ACR than to the active control condition (24 of the 31 enrolled men).

The two groups did not differ in performance on any measures at baseline, with generally mild to moderate deficits on measures of information processing and memory (composite score: ACR n = 74 vs. active control n = 61, mean±SD: -0·86±0·77 vs. -0·77±0·73, respectively, p = 0·4569). Change scores varied across both conditions, with a greater range in the ACR arm (ACR n = 70 vs. active control n = 60, mean±SD: 0·25±0·45 vs. 0·09±0·37, Cohen's d = 0·3883), as shown in Table 3. Under the intent-to-treat analysis, the ACR condition had a significantly greater change in the neuropsychological composite from baseline to study end by linear mixed model (estimated difference = 0·16 with 95% CI: 0·02–0·30, p = 0·0286). This statistical significance was confirmed by the stratified nonparametric test (p = 0·0073) and by multiple imputation (estimated difference = 0·16 with 95% CI: 0·02–0·30, p = 0·0299).

Among the individual measures, very few showed a statistically significant difference in improvement between the two groups (Fig 2 displays the change in performance on each measure in both groups). The ACR group showed significant improvement on the 2-second PASAT by the van Elteren test and on the D-KEFS by the linear mixed model; however, unlike the composite, neither measure was significantly improved under both models. Without the concordance of both models, we discarded these findings.

Fig 2. Change in z score across measures by condition.

*Indicates a significant difference corresponding to a p value<0.05. Higher scores indicate improvement.

https://doi.org/10.1371/journal.pone.0177177.g002

Compliance was high for the full sample, with n = 91 (67·4%) and n = 92 (68·15%) participants playing at least 50% of the goal, depending on how compliance was defined (at least 6 compliant weeks, or meeting or exceeding 30 hours of training time). However, the active control group had greater compliance than the ACR group (active control n = 48 vs. ACR n = 43, 78·69% vs. 58·11%, or active control n = 48 vs. ACR n = 44, 78·69% vs. 59·46%) and spent significantly more time in program (Fig 3: ACR vs. active control, mean±SD = 37·74±23·78 vs. 56·95±34·53 hours, p = 0·0056).

Fig 3. Total time spent in program by condition.

*Greater time was spent in program by the active control condition (p = 0.006).

https://doi.org/10.1371/journal.pone.0177177.g003

Neither total program time nor screening SDMT contributed to the change in performance at study end. However, when the conditions were considered separately, there was a positive marginal correlation between the composite z score change and total program time for the ACR condition (r = 0·25, p = 0·03), indicating a modest link between time spent playing the ACR program and the magnitude of improvement on the neuropsychological composite score.

At study end, more ACR condition participants (56·7% vs. 31·0%) reported experiencing an improvement in cognition over the 12-week duration of the study (a rating of 1·0, versus 0·0 for no change and -1·0 for decline: ACR vs. active control, mean±SD = 0·52±0·59 vs. 0·28±0·52, p = 0·007).

Discussion

We found that 12 weeks of training with an ACR training program was superior to an active control of playing ordinary computer games for improving cognitive functioning in participants with MS. The benefit was measured by a change in a composite of neuropsychological tests and was modest overall. No one measure indicated a specific response to the training; instead, the majority of cognitive measures changed in a direction that favored the active program. This lack of specificity may be attributable to the diffuse effects of improved cognitive processing speed across the range of measures, mediated by individual differences in baseline performances.

The significantly greater benefit for the adaptive training program was found despite significantly less program training time. Participants in the active control condition trained an average of 19 hours more than those in the ACR program. Notably, time played was significantly associated with cognitive improvement in the ACR program only.

Our findings are consistent with our prior pilot study (using a different adaptive training program for the same time period) [11] as well as a previous trial of a version of the study program in a smaller sample [14]. Additionally, our findings concur with those of a recent meta-analysis that found modest cognitive benefit for healthy aging adults who underwent cognitive remediation, with the greatest benefit typically seen for the cognitive domains that were trained most often [7]. The findings are encouraging in that this is the largest clinical trial of cognitive remediation in an MS sample published to date, and the results support the hypothesis that cognitive impairment in MS may be remediated [6, 11, 14]. Further, because baseline cognitive functioning as measured by the SDMT did not predict change in cognitive functioning overall or in response to the intervention, the findings suggest that this intervention may be appropriate for MS participants with a wide range of cognitive problems.

With the advent of modern technological advances in healthcare, approaches towards rehabilitation, treatment, and cognitive remediation are transitioning to an online platform that can be adaptive and personalized. However, computerized cognitive training programs have not yet been evaluated thoroughly, with methodological criticisms raised for much of the prior work in this field [32]. Importantly, the current study overcomes the limitations of many previous studies (including small sample sizes, passive controls, and unrepresentative outcomes) and far exceeds the scope of other remote cognitive training studies in the field of MS.

A major advantage of this approach was providing access to the intervention from home. Enabling participants to access treatment from home allowed rapid study enrollment (n = 135 over 12 months), strong program compliance, and relatively low cost when considered for real-world use. For compliance and structured use, we believe that remote supervision is a critical element. As was also seen in our pilot study [11], the relatively low rates of noncompliance and study withdrawal indicate the success of our remote approach. Remote supervision provides the patient with readily available assistance and reinforcement to stay on target, especially in older, aging samples. Additionally, it is notable that all study equipment was returned, preserving overall treatment cost. The remote approach is especially exciting for its potential to open up participation in studies of this kind for individuals who are largely home-based, or who are struggling to maintain employment and are reluctant to participate in rehabilitative activities that would interfere with their work and family time commitments.

Methodological challenges to the study included the broad parameters of the active study program and our dependence on the developers for the provision of the study program. Over the course of the study, there were centralized technical difficulties as well as platform changes and updates that may have affected our study users' experience to varying degrees. While we included an active control comparison condition to control for computer use and game-playing activity, we did not control for the higher-level features of the active program, such as adaptive versus non-adaptive features and other design aspects intended to drive learning. At the end of the study, the gender distribution was not balanced between the two study arms; this occurred by chance. However, there is no evidence to support a gender difference in any possible treatment effect, and hence this imbalance should not affect our interpretation of the results.

It is not clear which aspects of the active program were therapeutic and how this program would compare to other available programs with similar design features. Future studies should be able to determine the precise nature of training and domain-specific improvement, baseline characteristics to predict response, and further explore titration and treatment time. Another unknown consideration is the duration of benefit. Once established, the cognitive benefit may require continued training to be sustained over time, and there may be a regression in functioning once the training is discontinued.

By purposefully including broad entry criteria we were able to study the home use of cognitive training programs in a real-world setting. The study was designed to approximate an application for individuals with MS interested in participating in a cognitive training program, initiated either through prescription or self-referral. Going forward, more careful study of patients with specific disease features will allow for targeted program adjustments.

An additional limitation is that measures of depression and fatigue were not included. Both symptoms are common in MS and known to influence cognitive functioning. Therefore, it would be important to both characterize the sample at baseline on these symptom features, as well as to measure change in the symptom severity following treatment and in relation to the presence or absence of cognitive benefit.

This study capitalizes on recent technological advances and provides an alternative route for cognitive remediation through remote supervision at home. This study supports the feasibility of computer-based cognitive remediation accessed from home and demonstrates Class I efficacy of the treatment. Further trials may seek to determine which members of the MS population are most likely to benefit or, alternatively, how benefit can be enhanced or sustained through techniques such as medication, neuromodulation, or even exercise. The remote delivery and findings of cognitive benefit may be generalizable to other neurological conditions in which cognitive function is compromised, and this study can serve as a model for such trials.

Supporting information

S1 Checklist. Consort 2010 checklist for clinical trials.

https://doi.org/10.1371/journal.pone.0177177.s001

(DOC)

S1 Screenshot. Example of the ACR program: Auditory instruction memory.

https://doi.org/10.1371/journal.pone.0177177.s004

(TIFF)

S2 Screenshot. Example of the ACR program: Auditory time order judgement.

https://doi.org/10.1371/journal.pone.0177177.s005

(TIFF)

S3 Screenshot. Example of the ACR program: Multiple object tracking.

https://doi.org/10.1371/journal.pone.0177177.s006

(TIFF)

S1 Video. Example of cognitive remediation conditions.

https://doi.org/10.1371/journal.pone.0177177.s007

(MP4)

Acknowledgments

We would like to thank Maria Amella, Wendy Fang, Ariana Frontario, Patricia Melville, William Scherl and Colleen Schwarz for their support of this study. We acknowledge the biostatistical consultation and support provided by the Biostatistical Consulting Core at School of Medicine, Stony Brook University.

Author Contributions

  1. Conceptualization: LC LK JY.
  2. Data curation: LC MS KS LH JY JX.
  3. Formal analysis: JY JX.
  4. Funding acquisition: LC LK.
  5. Investigation: LC MS LH KS.
  6. Methodology: LC LK.
  7. Project administration: LC MS KS LH.
  8. Resources: LC MS KS LH.
  9. Software: N/A.
  10. Supervision: LC LK.
  11. Validation: LC LK JY.
  12. Visualization: LC JY JX MS.
  13. Writing – original draft: LC MS JY.
  14. Writing – review & editing: LC LK MS KS JY JX LH.

References

  1. Benedict RH, Zivadinov R. Risk factors for and management of cognitive dysfunction in multiple sclerosis. Nature Reviews Neurology. 2011;7(6):332–42. pmid:21556031
  2. Rocca MA, Amato MP, De Stefano N, Enzinger C, Geurts JJ, Penner IK, et al. Clinical and imaging assessment of cognitive dysfunction in multiple sclerosis. The Lancet Neurology. 2015; published online 2.5.15.
  3. Stuifbergen AK, Becker H, Perez F, Morison J, Kullberg V, Todd A. A randomized controlled trial of a cognitive rehabilitation intervention for persons with multiple sclerosis. Clinical Rehabilitation. 2012;26(10):882–93. pmid:22301679
  4. O'Brien AR, Chiaravalloti N, Goverover Y, Deluca J. Evidenced-based cognitive rehabilitation for persons with multiple sclerosis: a review of the literature. Archives of Physical Medicine and Rehabilitation. 2008;89(4):761–9. pmid:18374010
  5. Rosti-Otajarvi EM, Hamalainen PI. Neuropsychological rehabilitation for multiple sclerosis. The Cochrane Database of Systematic Reviews. 2011;(11):CD009131.
  6. Chiaravalloti ND, Moore NB, Nikelshpur OM, DeLuca J. An RCT to treat learning impairment in multiple sclerosis: The MEMREHAB trial. Neurology. 2013;81(24):2066–72. pmid:24212393
  7. Lampit A, Hallock H, Valenzuela M. Computerized cognitive training in cognitively healthy older adults: a systematic review and meta-analysis of effect modifiers. PLoS Medicine. 2014;11(11):e1001756. pmid:25405755
  8. Mishra J, de Villers-Sidani E, Merzenich M, Gazzaley A. Adaptive training diminishes distractibility in aging across species. Neuron. 2014;84(5):1091–103. pmid:25467987
  9. Merzenich MM, Van Vleet TM, Nahum M. Brain plasticity-based therapeutics. Frontiers in Human Neuroscience. 2014;8:385. pmid:25018719
  10. Vinogradov S, Fisher M, de Villers-Sidani E. Cognitive training for impaired neural systems in neuropsychiatric illness. Neuropsychopharmacology. 2012;37(1):43–76.
  11. Charvet L, Shaw M, Haider L, Melville P, Krupp L. Remotely-delivered cognitive remediation in multiple sclerosis (MS): protocol and results from a pilot study. Multiple Sclerosis Journal—Experimental, Translational and Clinical. 2015;1.
  12. Mahncke HW, Connor BB, Appelman J, Ahsanuddin ON, Hardy JL, Wood RA, et al. Memory enhancement in healthy older adults using a brain plasticity-based training program: a randomized, controlled study. Proceedings of the National Academy of Sciences of the United States of America. 2006;103(33):12523–8. pmid:16888038
  13. Keefe RS, Vinogradov S, Medalia A, Buckley PF, Caroff SN, D'Souza DC, et al. Feasibility and pilot efficacy results from the multisite Cognitive Remediation in the Schizophrenia Trials Network (CRSTN) randomized controlled trial. The Journal of Clinical Psychiatry. 2012;73(7):1016–22. pmid:22687548
  14. Hancock LM, Bruce JM, Bruce AS, Lynch SG. Processing speed and working memory training in multiple sclerosis: a double-blind randomized controlled pilot study. Journal of Clinical and Experimental Neuropsychology. 2015;37(2):113–27. pmid:25686052
  15. Polman CH, Reingold SC, Banwell B, Clanet M, Cohen JA, Filippi M, et al. Diagnostic criteria for multiple sclerosis: 2010 revisions to the McDonald criteria. Annals of Neurology. 2011;69(2):292–302. pmid:21387374
  16. Smith A. Symbol Digit Modalities Test (SDMT): Manual. Western Psychological Services; 1982.
  17. Parmenter BA, Weinstock-Guttman B, Garg N, Munschauer F, Benedict RH. Screening for cognitive impairment in multiple sclerosis using the Symbol Digit Modalities Test. Multiple Sclerosis. 2007;13(1):52–7. pmid:17294611
  18. BrainHQ. Posit Science; 2015.
  19. Hoyle Puzzles and Board Games. 2008 ed. WD Encore Software, LLC.
  20. Fisher M, Holland C, Merzenich MM, Vinogradov S. Using neuroplasticity-based auditory training to improve verbal memory in schizophrenia. The American Journal of Psychiatry. 2009;166(7):805–11. pmid:19448187
  21. Fisher M, Loewy R, Carter C, Lee A, Ragland JD, Niendam T, et al. Neuroplasticity-based auditory training via laptop computer improves cognition in young individuals with recent onset schizophrenia. Schizophrenia Bulletin. 2015;41(1):250–8. pmid:24444862
  22. Loewy R, Fisher M, Schlosser DA, Biagianti B, Stuart B, Mathalon DH, et al. Intensive auditory cognitive training improves verbal memory in adolescents and young adults at clinical high risk for psychosis. Schizophrenia Bulletin. 2016;42 Suppl 1:S118–26.
  23. Tombaugh TN. A comprehensive review of the Paced Auditory Serial Addition Test (PASAT). Archives of Clinical Neuropsychology. 2006;21(1):53–76.
  24. Hartman DE. Wechsler Adult Intelligence Scale IV (WAIS IV): return of the gold standard. Applied Neuropsychology. 2009;16(1):85–7. pmid:19205953
  25. Scherl WF, Krupp LB, Christodoulou C, Morgan TM, Hyman L, Chandler B, et al. Normative data for the selective reminding test: a random digit dialing sample. Psychological Reports. 2004;95(2):593–603. pmid:15587227
  26. Benedict RH. Brief Visuospatial Memory Test—Revised: Professional Manual. Odessa, FL: Psychological Assessment Resources, Inc.; 2007.
  27. Delis DC, Kaplan E, Kramer JH. Delis-Kaplan Executive Function System (D-KEFS) Examiner's Manual. San Antonio, TX: The Psychological Corporation; 2001.
  28. Parmenter BA, Zivadinov R, Kerenyi L, Gavett R, Weinstock-Guttman B, Dwyer MG, et al. Validity of the Wisconsin Card Sorting and Delis-Kaplan Executive Function System (DKEFS) Sorting Tests in multiple sclerosis. Journal of Clinical and Experimental Neuropsychology. 2007;29(2):215–23. pmid:17365256
  29. Vanderbilt University. REDCap—Research Electronic Database Capture. Version 6.10.0; 2016.
  30. Rubin DB. Multiple Imputation for Nonresponse in Surveys. New York: John Wiley & Sons, Inc.; 1987.
  31. Little RJ, Rubin DB. Statistical Analysis with Missing Data. New York: John Wiley & Sons, Inc.; 1987.
  32. McCabe JA, Redick TS, Engle RW. Brain-training pessimism, but applied-memory optimism. Psychological Science in the Public Interest. 2016;17(3):187–91.