
Working Memory, Reasoning, and Task Switching Training: Transfer Effects, Limitations, and Great Expectations?

  • Pauline L. Baniqued ,

    banique1@illinois.edu

    Affiliations Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana Champaign, Urbana, Illinois, United States of America, Department of Psychology, University of Illinois at Urbana Champaign, Urbana, Illinois, United States of America

  • Courtney M. Allen ,

    Contributed equally to this work with: Courtney M. Allen, Michael B. Kranz, Kathryn Johnson, Aldis Sipolins, Charles Dickens

    Affiliation Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana Champaign, Urbana, Illinois, United States of America

  • Michael B. Kranz ,

    Contributed equally to this work with: Courtney M. Allen, Michael B. Kranz, Kathryn Johnson, Aldis Sipolins, Charles Dickens

    Affiliations Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana Champaign, Urbana, Illinois, United States of America, Department of Psychology, University of Illinois at Urbana Champaign, Urbana, Illinois, United States of America

  • Kathryn Johnson ,

    Contributed equally to this work with: Courtney M. Allen, Michael B. Kranz, Kathryn Johnson, Aldis Sipolins, Charles Dickens

    Affiliation Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana Champaign, Urbana, Illinois, United States of America

  • Aldis Sipolins ,

    Contributed equally to this work with: Courtney M. Allen, Michael B. Kranz, Kathryn Johnson, Aldis Sipolins, Charles Dickens

    Affiliations Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana Champaign, Urbana, Illinois, United States of America, Department of Psychology, University of Illinois at Urbana Champaign, Urbana, Illinois, United States of America

  • Charles Dickens ,

    Contributed equally to this work with: Courtney M. Allen, Michael B. Kranz, Kathryn Johnson, Aldis Sipolins, Charles Dickens

    Affiliation Aptima, Inc., Woburn, Massachusetts, United States of America

  • Nathan Ward,

    Affiliation Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana Champaign, Urbana, Illinois, United States of America

  • Alexandra Geyer,

    Affiliation Aptima, Inc., Woburn, Massachusetts, United States of America

  • Arthur F. Kramer

    Affiliations Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana Champaign, Urbana, Illinois, United States of America, Department of Psychology, University of Illinois at Urbana Champaign, Urbana, Illinois, United States of America


Abstract

Although some studies have shown that cognitive training can produce improvements in untrained cognitive domains (far transfer), many others fail to show these effects, especially when it comes to improving fluid intelligence. The current study was designed to overcome several limitations of previous training studies by incorporating training expectancy assessments, an active control group, and “Mind Frontiers,” a video game-based mobile program comprising six adaptive, cognitively demanding training tasks that have been found to increase scores on fluid intelligence (Gf) tests. We hypothesize that such integrated training may lead to broad improvements in cognitive abilities by targeting aspects of working memory, executive function, reasoning, and problem solving. Ninety participants completed twenty 90-minute training sessions over four to five weeks, 45 of whom played Mind Frontiers and 45 of whom completed visual search and change detection tasks (active control). After training, the Mind Frontiers group improved on working memory n-back tests, a composite measure of perceptual speed, and a composite measure of reaction time on reasoning tests. No training-related improvements were found in reasoning accuracy or other working memory tests, nor in composite measures of episodic memory, selective attention, divided attention, and multi-tasking. Perceived self-improvement in the tested abilities did not differ between groups. A general expectancy difference in problem-solving was observed between groups, but this perceived benefit did not correlate with training-related improvement. In summary, although these findings provide modest evidence for the efficacy of an integrated cognitive training program, more research is needed to determine the utility of Mind Frontiers as a cognitive training tool.

Introduction

Cognitive training is not a new concept, despite the surge in “brain training” applications that capitalize on the marketability of programs informed by “neuroplasticity” research [1]. In any activity, prolonged experience or practice leads to proficiency in that specific process, or skilled behavior. More recently, there has been increased interest in developing training programs that lead to improvement in, or “transfer” to, a wider array of cognitive abilities or exercises beyond the specific task trained. In the psychology literature, this line of research is termed “cognitive training” [2–4] and is often associated with the goal of enhancing cognition or ameliorating the age-related decline of cognitive abilities such as working memory, reasoning, and fluid intelligence (Gf), abilities that have been shown to predict performance in academic and workplace settings [5–7]. Developmental researchers also employ computerized training programs in hopes of improving cognitive abilities in children [8–13], including those from disadvantaged backgrounds [14] and those with learning difficulties [15–19].

Improvements in reasoning/Gf have been found in several studies that employ working memory training [20, 21], task switching training [22], and reasoning training [14, 23], while improvements in working memory are primarily found in training studies that use working memory training tasks [9, 17, 20, 24–28]. Although promising, several of these experiments, which were conducted on different age groups from children to older adults, face methodological shortcomings involving small sample sizes, single tests of cognitive transfer, and the lack of a comparable active control group [29–31]. Training-related improvement from the dual n-back working memory paradigm, for example, has often not been replicated in other laboratories [32–35] (but see [36, 37]). Recent meta-analyses and reviews differ in their conclusions on the benefit of working memory training and highlight the implications of the aforementioned methodological issues [38–42]. More broadly, computer-based training paradigms, from video games to laboratory-based regimens, yield improvement on the trained tasks but limited transfer to other related abilities, including those similar to the trained tasks [14, 23, 43–50]. Thus, although behavioral and neural changes can be observed from training, these changes have not been shown to consistently translate to meaningful improvements outside of the training paradigm.

Several studies employing a multiple-task training approach, often using more complex tasks or games, show promise in engendering transfer beyond the specific trained tasks [14, 51–55] (but see [46, 48, 56, 57]). To maximize training benefits in the current study, we employ working memory, reasoning, and task-switching training tasks similar to those previously mentioned, which have shown promise in enhancing working memory and reasoning/Gf, abilities that highly overlap in the psychometric literature. We integrate six of these tasks into a mobile training platform called “Mind Frontiers,” which modifies the surface features of the training tasks (i.e., their appearance) to unify them into a Wild West-themed game. All tasks were programmed to be adaptive in difficulty, and a scoring/reward system was added to the game to promote engagement for the duration of training, which consisted of twenty 90-minute sessions, with each game played for approximately 12 minutes.

To better attribute any training-related improvements to the Mind Frontiers program, the current study employed an active control group that also involved interaction with a mobile device and multiple adaptive training games. For the active control group, we used visual and perceptual training tasks that have been shown to produce improvements in the performance of these tasks but not improvements in working memory and reasoning/Gf tests. This included three variants of a visual search paradigm previously used as an active control task in a working memory training study [32] and three variants of a change detection task that was shown not to transfer to untrained tasks [58].

As expectancy effects are a significant issue in cognitive training studies, we used a questionnaire to assess perceived improvement and other biases that may contribute to a placebo training effect [29, 30]. We also employed multiple transfer tests to allow analysis at the construct level and better generalize findings to improvement in cognitive abilities. We used a set of established measures from the Virginia Cognitive Aging Project Battery [59], which comprises tasks validated to assess key cognitive abilities including reasoning/Gf, episodic memory, and perceptual speed. In addition, we administered neurocognitive tests to ensure comprehensive assessment of the training effects, including multiple tests of working memory, selective attention, divided attention, and task switching.

Note that although improving reasoning/Gf abilities is a main goal of the study, we hypothesize that training with Mind Frontiers may also lead to benefits in related abilities, such as attentional control and perceptual or processing speed. As these abilities are often inter-related in the literature [59–62], we hypothesize that the Mind Frontiers group will also show improvements in the “lower-level” abilities of selective attention, divided attention, and perceptual speed, especially given the speeded and game-like implementation of the tasks. Furthermore, reasoning/Gf ability has been shown to be relatively stable in young adulthood [63–65], whereas other skills that are also recruited in reasoning/Gf games may be more malleable or sensitive to training.

Methods

Participants

Participants were recruited from the University of Illinois campus and Champaign-Urbana community through flyers and online postings advertising participation in a “cognitive training study.” Pre-screening for demographic information (e.g., sex, education, English language proficiency) and game experience was administered using a survey completed over email. A few general game experience questions in the survey were embedded among other activity questions that included the Godin Leisure-Time Exercise Questionnaire [66]. More detailed information about game play experience, history, and habits was queried in a post-experiment survey. Upon passing pre-screening, an experimenter followed up with a phone interview that assessed major medical conditions that may affect neurocognitive testing. Participants eligible for the study fulfilled at least the following major requirements: (1) between 18 and 30 years old, (2) 75% right-handedness according to the Edinburgh Handedness Inventory, (3) normal or corrected-to-normal vision and hearing, (4) no major medical or psychological conditions, (5) no non-removable metal in the body, and (6) played no more than five hours per week of video games in the last six months. All participants signed informed consent forms and completed experimental procedures approved by the University of Illinois Institutional Review Board. One hundred two participants were recruited. Ninety participants completed the study and received compensation of $15/hour. Twelve individuals who dropped out or were disqualified from the study received $7.50/hour. Demographics are summarized in Table 1. More information about study procedures is available in S1 File.

Study Design

Participants completed three cognitive testing sessions and an MRI session before and after the training intervention. The MRI data will not be presented in this paper. Assessments were completed in a fixed order. Participants were randomly assigned to the Mind Frontiers training group or the active control training group. They completed four to five training sessions per week for four to five weeks, a total of 20 sessions; each session involved completing six cognitive training tasks (games) for approximately 12 minutes each. The task order was pseudo-randomized across sessions and all subjects completed the same order during each session. Following the training period, participants completed the same four testing sessions in reverse order. More details about the training protocol can be found in S1 File.

Training Protocol

All participants completed training on portable handheld devices. After the first, tenth and last training sessions, participants completed a training feedback questionnaire electronically.

Mind Frontiers

The Mind Frontiers group completed six adaptive training tasks (Table 2 and Fig 1) in each training session. All games were programmed by Aptima, Inc. using the Unity game engine and were administered using the Samsung Google Nexus 10 tablet. Table 2 provides a summary of each game and its source from previous literature. These games were selected based on their known associations (psychometric properties, training-related improvements) with the following abilities: reasoning/Gf, working memory, visuospatial reasoning, inductive reasoning, and task switching.

Fig 1. Mind Frontiers tasks.

Screenshots of Mind Frontiers games: Top to bottom, left to right: Supply Run, Riding Shotgun, Sentry Duty, Safe Cracker, Irrigator, Pen ‘Em Up.

https://doi.org/10.1371/journal.pone.0142169.g001

Active Control

The active control group also completed six adaptive training tasks in each training session (Table 2 and Fig 2). These included three variants of a visual search task and three variants of a change detection task. The visual search paradigm was derived from Redick et al. [32] and has been shown to not highly overlap (i.e., low correlations) with the working memory, reasoning, and task-switching abilities trained in Mind Frontiers [67, 68]. The change detection paradigm was obtained from Gaspar et al. [58]. Similar to the Mind Frontiers group, the active control group also completed the tasks on a portable device, the Asus Vivotab RT. The visual search tasks were programmed in E-prime 2.0 [69] and the change detection tasks were programmed in MATLAB (MathWorks™) using the Psychophysics Toolbox extensions [70, 71].

Fig 2. Active control tasks.

Left: Screenshots of three versions of the change detection task, from top to bottom: colored shapes, cars, letters. Right: Screenshots of three versions of the visual search tasks, from top to bottom: original visual search in Redick et al. [32], colored Ps, Ls. For publication purposes, stimuli are not drawn to scale (enlarged).

https://doi.org/10.1371/journal.pone.0142169.g002

Training Feedback Questionnaire

At the end of the first, tenth, and twentieth sessions, all participants were asked the following questions about each training game and were instructed to respond on a scale of 1–10: 1) How much did you enjoy/like each game? (1 = did not enjoy/like at all, 10 = enjoyed a lot), 2) How engaging was each game? (1 = least, 10 = greatest), 3) How demanding/effortful was each game? (1 = least, 10 = greatest), 4) How motivated were you to achieve the highest possible score on each game? (1 = least, 10 = greatest), and 5) How frustrating did you find the game? (1 = not at all frustrating, 10 = very frustrating).

Cognitive Assessment Protocol

Before and after 20 training sessions, participants completed a battery of tests and questionnaires to assess cognitive function at pre-test and changes that may have resulted from training. The tests measured a variety of cognitive abilities, including reasoning/Gf, episodic memory, perceptual speed, working memory, and attention (Table 3). Participants also completed questionnaires regarding sleep, personality, fitness, and media usage. Following the final testing session, participants completed a post-experiment survey that assessed their feedback on the cognitive training games, the strategies employed during training, gaming experience, and expectations. The majority of the transfer tasks have been extensively used in the cognitive psychology literature (Table 3), so only brief descriptions are provided. More details about each task can be found in S1 File.

Reasoning, perceptual speed, episodic memory

Except for i-Position, the tests below were obtained from the Virginia Cognitive Aging Project Battery [59], and two different versions were used for pre- and post-testing, with the sequence counterbalanced across subjects.

Shipley Abstraction: Identify missing stimuli in a progressive sequence of letters, words, or numbers. Number of correctly answered items within five minutes is the primary measure.

Matrix Reasoning: Select the pattern that completes a missing space on a 3 x 3 grid. Number of correctly answered items is the primary measure. Reaction time on correct trials was also analyzed.

Paper Folding: Identify pattern of holes that results from a punch through folded paper. Number of correctly answered items is the primary measure. Reaction time on correct trials was also analyzed.

Spatial Relations: Identify 3D object that would match a 2D object when folded. Number of correctly answered items is the primary measure. Reaction time on correct trials was also analyzed.

Form Boards: Choose shapes that will exactly fill a space. Number of correctly answered items is the primary measure.

Letter Sets: Determine which letter set is different from the other four. Number of correctly answered items is the primary measure. Reaction time on correct trials was also analyzed.

Digit Symbol Substitution: Write corresponding symbol for each digit using a coding table. The primary measure is number of correctly answered items within two minutes.

Pattern Comparison: Determine whether pairs of line patterns are the same or different. The primary measure is number of correctly answered items within 30 seconds, averaged across two sets of problems.

Letter Comparison: Determine whether pairs of letter strings are the same or different. The primary measure is number of correctly answered items within 30 seconds, averaged across two sets of problems.

Logical Memory: Listen to stories and recall them in detail. The primary measure is number of correctly recalled story details, summed across three story-tellings.

Paired Associates: Listen to word pairs and recall the second word in a pair. The primary measure is number of correctly recalled items.

i-Position: View an array of images on a computer screen and reproduce the positions of the images. Measures are proportion of swap errors (primary) and mean misplacement in pixels.

Working memory

Running Span: Recall the last n items presented in a letter list that ends unpredictably. The total number of items in perfectly recalled sets is the primary measure. We also analyzed the total number of items recalled in the correct serial order, regardless of whether the set was perfectly recalled.

Operation Span: Remember a sequence of letters while alternately performing arithmetic problems, then recall the sequence of letters. The total number of items in perfectly recalled sets is the primary measure. We also analyzed the total number of items recalled in the correct serial order, regardless of whether the set was perfectly recalled.

Symmetry Span: Remember a sequence of locations of squares within a matrix while alternately judging symmetry, then recall order and locations of the sequence. The total number of items in perfectly recalled sets is the primary measure. We also analyzed the total number of items recalled in the correct serial order, regardless of whether the set was perfectly recalled.

Visual Short-Term Memory (VSTM): Detect color change in an array of colored circles. Data was analyzed in terms of d-prime collapsed across set sizes (2, 4, 6, 8) and Cowan’s k averaged across set sizes [105]. Each set size measure is reported in S2 File.

Single N-back: Determine whether the current letter presented matches the letter presented two or three items back. The primary measure of d-prime was computed separately for the 2-back and 3-back conditions. Reaction times on correct trials were also analyzed.

Dual N-back (administered in the MRI): Determine whether simultaneously presented auditory and visual stimuli match stimuli presented one, two, or three items ago. The primary measure of d-prime was computed separately for the two-back and three-back conditions following procedures in [92]. Reaction times on correct trials were also analyzed.

Divided attention, selective attention, multi-tasking

Trail Making: Connect numbered circles as quickly as possible by drawing a line between them in numerical order (Trails A), then connect numbered and lettered circles by drawing a line between them, alternating between numbers and letters in numerical/alphabetical order (Trails B). The difference in Trails B and Trails A completion time was the primary measure.

Attention Blink: Identify the white letter (target 1) in a sequence of rapidly presented black letters, and identify whether the white letter was followed by a black “X” (target 2). The attentional blink is calculated on trials where target 1 was accurately detected, as the difference in target 2 accuracy when detection is easiest (lag 8 after target 1) and when detection is most difficult (lag 2 after target 1).

Dodge: Avoid enemy missiles and destroy enemies by guiding the missiles into other enemies. Highest level reached within eight minutes of game play was analyzed.

Multi-source interference task (MSIT; administered in the MRI): Determine the stimulus (digits 1, 2, or 3) that is different from the other two in a three-digit number. The flanker effect is derived by taking the difference between reaction times on incongruent and congruent trials. Only correct trials were analyzed.

Flanker: Indicate the direction (right or left) of the middle arrow, which was either flanked by two arrows on each side (incongruent with oppositely oriented arrows, or congruent with similarly oriented arrows) or two horizontal lines on each side (neutral trials, no arrow head). The flanker effect is derived by taking the difference between reaction times on incongruent and congruent trials. Only correct trials were analyzed.

Anti-Saccade: Identify masked letter, cued on opposite or same side. Accuracy on a block of anti-saccade trials is used as the primary measure.

Psychomotor Vigilance Task (PVT): Press key as soon as zeros begin to count up. The average of the 20% slowest RTs (bottom quintile) is used for analysis.

25 boxes (Number Search): Search for stimuli in a matrix and indicate the corresponding location on blank matrix. The average score on levels with matrix rotation (levels 12–20) was analyzed.

Control tower: Search through arrays using different rules (primary task) while performing several distractor tasks. Performance on the primary task (average of symbol, letter and number score minus corresponding errors) was used as the main measure.

Task-Switch, Dual-Task paradigm (TSDT): Respond to simultaneously presented auditory and visual stimuli based on cued task (auditory, visual, or both). Switch costs (reaction time difference between switch and repeat trials—for single task trials only) were analyzed separately for auditory and visual stimuli, and averaged across both.

Self-report instruments

Participants also completed questionnaires during the third session of pre-testing. These included the Big Five Inventory [106] and Grit Scale [107] to assess personality, the Karolinska Sleep Questionnaire [108] and Pittsburgh Sleep Quality Index [109] to gauge sleep quality, the Godin Leisure-Time Exercise Questionnaire [66] to estimate physical activity, several questions on height, weight, resting heart rate and physical activity to estimate cardiorespiratory fitness [110], and a Media Multitasking Index Questionnaire [111] to assess media usage. These questionnaires were also completed post-testing, but were not used for analyses. Analyses of whether these individual differences moderate training effects will be discussed in a separate publication.

Post-experiment questionnaire: Participants completed an online survey that assessed gaming experience prior to and during the study, as well as their experience in the study. They provided feedback about their enjoyment, effort, and difficulty in playing the training games. They also elaborated on strategies they developed while playing the games. Participants provided feedback on game experience, design, and ease of use, and offered their perspective on improvements to their daily life resulting from their participation in the study (perceived self-improvement questions), including: overall intelligence, short-term/working memory, long-term memory, sustained attention, divided attention, visuomotor coordination, perception/visual acuity, multi-tasking, problem-solving, reasoning, spatial visualization, academic performance, emotional regulation, and work/school productivity. The fourteen dimensions queried in the perceived self-improvement questions were also posed in terms of general expectancy or perceived potential benefit. Finally, the survey assessed prior knowledge of cognitive training literature.

Statistical Analyses

Training tasks: Practice effects: To examine improvement on the training tasks, we used a linear mixed effects model for each training task. In each of these models, the dependent variable was average level and the independent variable was session, which was coded as a linear contrast, with random effects of session and intercept for subjects. The change detection task had two conditions (set sizes three and five) which we analyzed separately.
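
As a concrete illustration, the per-task model described above can be sketched in Python with statsmodels' MixedLM (the authors fit these models in R; the data below are synthetic and all variable names are ours, not from the study):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in for one training task: 45 subjects x 20 sessions,
# with subject-specific intercepts and learning slopes (illustrative only).
n_subj, n_sess = 45, 20
rows = []
for s in range(n_subj):
    intercept = 3 + rng.normal(0, 0.5)
    slope = 0.15 + rng.normal(0, 0.05)
    for t in range(n_sess):
        rows.append({"subject": s, "session": t,
                     "avg_level": intercept + slope * t + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# Average level ~ session, with a random intercept and a random session
# slope per subject, mirroring the per-task model described above.
model = smf.mixedlm("avg_level ~ session", df, groups=df["subject"],
                    re_formula="~session")
result = model.fit()
print(result.fe_params)
```

The fixed effect of session estimates the average rate of improvement across training, while the random effects allow each subject their own starting level and learning rate.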

Training tasks: Composite scores: For each training task, we computed a gain score by taking the difference between average level on the last two sessions of training and average level on the first two sessions [12, 112, 113]. To obtain a measure of overall training gain, we standardized the gain score for each relevant task and averaged the resulting values.
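
A minimal sketch of this composite, assuming each task's data arrives as a subjects × sessions array (the function and variable names are ours, introduced only for illustration):

```python
import numpy as np

def overall_training_gain(levels_by_task):
    """Composite training gain: per task, mean level on the last two
    sessions minus mean level on the first two, z-scored across subjects,
    then averaged across tasks for each subject."""
    z_gains = []
    for levels in levels_by_task.values():  # (n_subjects, n_sessions) array
        gain = levels[:, -2:].mean(axis=1) - levels[:, :2].mean(axis=1)
        z_gains.append((gain - gain.mean()) / gain.std(ddof=1))
    return np.mean(z_gains, axis=0)
```

Because each task's gain is standardized before averaging, the composite has mean zero across subjects by construction; it indexes relative, not absolute, improvement.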

Training feedback: For each group, we averaged the training ratings across the six different tasks and analyzed each dimension using a repeated-measures ANOVA with group as between-subjects factor and training session as within-subjects factor. We report results of the multivariate tests since not all analyses met the assumption of sphericity. We do not analyze the ratings for each task, but report the means in S2 File.

Transfer tests: Measures: Primary measures for each transfer test were determined using conventional analysis procedures (S1 File). When relevant, reaction times (RTs) were also analyzed as secondary measures. In the n-back paradigm, RTs typically show a pattern complementary to the accuracy effects [91, 114], and each trial in both the n-back training and transfer tasks required a response within a short time interval. In addition, the two reasoning games in Mind Frontiers (Irrigator, Safe Cracker) emphasized speed, such that each level needed to be completed within a limited period of time. Reasoning/Gf tests typically have a completion time limit, but speed is usually not stressed. As strategies developed over training may be reflected in post-test performance, we also analyzed RTs for each reasoning test to determine whether training may have had a unique or differential effect on this aspect of performance.

Transfer tests: Data quality and gain scores: If a participant scored more than three standard deviations from the mean of any measure (computed separately for pre- and post-test), their data were excluded from analysis of that test and its relevant composite score. This relatively liberal criterion was applied uniformly across measures to ensure data quality. For the letter n-back and the VSTM only, however, this procedure identified three individuals with high d-prime values; these data points were not discarded. To reduce the influence of remaining extreme but usable values such as these, the data were then Winsorized: the mean and standard deviation were recomputed for the “cleaned” dataset (separately for pre- and post-test), and any value more than three standard deviations from the mean was replaced with the appropriate cut-off value (the value 3 SD above or below the mean).
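
The clipping step can be sketched as follows (a simplified version assuming the mean and SD have already been recomputed on the cleaned scores; names are ours):

```python
import numpy as np

def winsorize_3sd(scores):
    """Replace any value more than 3 SD from the mean with the 3-SD
    cut-off, as in the Winsorizing step described above. Assumes `scores`
    is the already-cleaned pre- or post-test data for one measure."""
    scores = np.asarray(scores, dtype=float)
    mean, sd = scores.mean(), scores.std(ddof=1)
    return np.clip(scores, mean - 3 * sd, mean + 3 * sd)
```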

For each measure that would be analyzed at a construct level (more details in next section), we computed a standardized gain score by taking the difference between post- and pre-test scores, and dividing this by the standard deviation of the pre-test score (collapsed across groups). We also inspected gain score data quality using a more liberal criterion of four standard deviations from the mean gain score, and discarded two data points found in two subjects’ PVT gain scores (extremely negative gain scores). The task-level analysis was also not performed on the pre-subtraction measures for these excluded gain scores.
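
The standardized gain score in the first step above is simply (a sketch; the pre-test SD is pooled across both groups as described):

```python
import numpy as np

def standardized_gain(pre, post):
    """(post - pre) / SD(pre), with the SD of the pre-test scores
    computed across all subjects with both groups pooled."""
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    return (post - pre) / pre.std(ddof=1)
```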

The total number of participants differed across tests due to missing or unusable data. More details regarding data quality procedures and exclusions are provided in S2 File. The raw aggregate data for each subject including outliers is provided in S3 File, together with the final data used for analyses.

Transfer effects: Linear mixed model analysis: Standardized gain scores from the transfer tests were then used in linear mixed-effects (LME) models to analyze training-related improvements at a construct level [115, 116]. A separate LME model was run for each set (i.e., construct) of gain scores, though note that not all tests were grouped into a construct. We grouped gain scores into eight constructs: working memory n-back (2-back d’, 3-back d’, Dual 2-back d’, Dual 3-back d’), working memory span (Operation Span, Running Span, Symmetry Span), reasoning/Gf accuracy (Shipley Abstraction, Matrix Reasoning, Letter Sets, Paper Folding, Form Boards, Spatial Relations), reasoning/Gf reaction time (Matrix Reasoning, Letter Sets, Paper Folding, Spatial Relations), selective attention (Flanker, PVT, Anti-Saccade), divided attention (Dodge, Attentional Blink, Trail Making), perceptual speed (Digit Symbol Substitution, Pattern Comparison, Letter Comparison), and episodic memory (Logical Memory, Paired Associates, i-Position). Gain scores for reaction time were multiplied by negative one, such that positive scores indicate faster performance after training. Each model consisted of a fixed effect of training group and crossed random effects of subject and task for the intercept [117]. These models were implemented with the “lme4” package [118]. Significance testing was performed using the standard normal distribution as well as the more conservative Kenward-Roger approximation for degrees of freedom using the “pbkrtest” package [119] in the R statistical program (R Core Team, 2014).

Transfer effects: Composite-level analyses: To create composite scores for use in subsequent analyses, the standardized gain scores were averaged according to the aforementioned groupings. One subject’s extremely high gain score (>4 SD) for Selective Attention was Winsorized. With these composite gain scores, we used a multivariate ANOVA to verify training group effects and their consistency with the results from the linear mixed-effects analysis. Bayes Factor was calculated using tools provided at http://pcl.missouri.edu/bf-two-sample [120].

Transfer effects: Task-level analyses: We also conducted analyses at the task level to investigate the specificity and consistency of the composite-level findings. Tests that were not integrated into a composite score or construct in the linear mixed-effects analysis were analyzed only at the task level. Only significant interactions at p < .05 are reported in the text. For brevity, we discuss significant group x time interaction results in terms of “transfer effects.” Due primarily to technical issues in the recording of responses, only 24 subjects in the Mind Frontiers group and 29 subjects in the active control group have usable dual n-back data. For each measure, we also tested whether the groups differed at baseline, and found no significant differences (S2 File).

Perceived improvement: Surveys with Likert-type single questions were analyzed using Mann-Whitney U tests. In S2 File, medians were used to summarize results as appropriate for ordinal data. Responses were coded as numbers prior to analysis (e.g., 1–7 for very strongly disagree to very strongly agree).
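
A minimal example of this analysis with hypothetical coded responses (both groups and all values are invented for illustration):

```python
from scipy.stats import mannwhitneyu

# Hypothetical Likert responses coded 1-7
# (very strongly disagree ... very strongly agree)
mind_frontiers = [5, 6, 4, 7, 5, 6, 5]
active_control = [3, 4, 4, 2, 5, 3, 4]

u, p = mannwhitneyu(mind_frontiers, active_control, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")
```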

Results

Practice effects: Game performance across sessions

The main effect of session was robust at p < .0001 for all tasks in both groups (Fig 3). The analysis of the change detection task for the set size 5 condition excluded three subjects run at the beginning of the experiment; this was due to experimenter error causing extremely high average maximum duration values in several of these subjects’ training sessions (S2 File). When these subjects were included in the set size 5 analysis, the shapes training effect was still significant (p = .03), the cars training effect was no longer significant (p = .15), and the letters training effect remained significant (p < .0001).

Fig 3. Practice effects.

Panel 1: Average level across sessions for each Mind Frontiers task. Panels 2–3. Average level across sessions for each active control task. Panel 2: Change detection average maximum duration according to session and set size. Panel 3: Visual search average level according to session and task (color). Error bars are SEM.

https://doi.org/10.1371/journal.pone.0142169.g003

Training Feedback

First, we tested whether training feedback differed between groups after the first training session. Only motivation (F(1,85) = 8.466, p = .005) and demand (F(1,85) = 8.858, p = .004) showed significant group effects, with higher overall motivation in the active control group and higher demand in the Mind Frontiers group.

We then examined whether these ratings changed over time and differed between groups. For enjoyment, there was no main effect of time, but there was a significant group by time interaction F(2,84) = 5.193, p = .007, ηp2 = .110, driven by changes from the first to the tenth session. Specifically, enjoyment increased for the Mind Frontiers group and decreased for the active control group. Motivation decreased over time F(2,84) = 12.734, p < .001, ηp2 = .233, and showed a group by time interaction F(2,84) = 4.580, p = .013, ηp2 = .098, which was driven by greater overall motivation at session one for the active control group; there was no interaction when session one was excluded from the analysis. Frustration increased mid-training (F(2,84) = 6.435, p = .003, ηp2 = .133) and did not differ between groups. There were no significant main effects of time and no group by time interactions for demand and engagement ratings.

Participants were not given an opportunity to rate the training tasks that they did not complete; thus the ratings provided may reflect relative differences in the six games played and not necessarily differences between training regimens. Mean ratings for each task are plotted in S2 File.

Transfer of training: Linear mixed model analysis and composite-level analysis

As shown in Table 4, the linear mixed model analysis revealed significant transfer effects in working memory n-back and reasoning/Gf reaction time, and a marginal effect in perceptual speed.

We verified these transfer effects using the composite gain scores, which will be used in succeeding analyses. The MANOVA on the composite gain scores showed a significant training group effect F(8,78) = 2.633, p = .013, ηp2 = .213 (Fig 4), with the pattern of results mirroring the linear mixed model analysis. The Mind Frontiers group outperformed the active control group in the working memory composite measure of n-back tests F(1,85) = 10.106, p = .002, ηp2 = .106, which is expected given that Sentry Duty was patterned after the dual n-back. Compared to the active control group, the Mind Frontiers group also showed significantly greater gains in composite measures of reasoning/Gf reaction time F(1,85) = 5.408, p = .022, ηp2 = .060, and perceptual speed F(1,85) = 4.007, p = .049, ηp2 = .045. Due to missing data in several composite measures, which resulted in three fewer subjects in the Mind Frontiers group, we ran separate t-tests on the composite measures and calculated Bayes Factor for each analysis. The three transfer/group effects found in the MANOVA were still significant with Bayes factor in favor of the alternative hypothesis for working memory at t(86) = 3.289, p = .001, JZS BF = 21.772, UI BF = 31.750, reasoning/Gf RT at t(87) = 2.501, p = .014, JZS BF = 3.291, UI BF = 4.698, and perceptual speed at t(87) = 2.023, p = .046, JZS BF = 1.313, UI BF = 1.831. Results for all other composite measures were not significant, with JZS Bayes Factor values greater than three in favor of the null hypothesis.
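
For readers without access to the web tool cited above [120], the two-sample JZS Bayes factor can be computed directly from the t statistic and group sizes. The sketch below follows the Rouder et al. (2009) formula; the Cauchy prior scale r = 1 is an assumption about the tool's default, and the unit-information (UI) Bayes factor reported in the text uses a different prior not shown here.

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n1, n2, r=1.0):
    """Two-sample JZS Bayes factor BF10 (alternative over null).

    A sketch following Rouder et al. (2009); r = 1 for the Cauchy
    prior scale is an assumed default, not taken from the paper.
    """
    N = n1 * n2 / (n1 + n2)          # effective sample size
    v = n1 + n2 - 2                  # degrees of freedom
    null = (1 + t**2 / v) ** (-(v + 1) / 2)

    def integrand(g):
        # Zellner-Siow prior on g (scaled inverse-chi-square, 1 df),
        # evaluated via its log to avoid overflow for tiny g
        log_prior = (-0.5 * np.log(2 * np.pi) + np.log(r)
                     - 1.5 * np.log(g) - r**2 / (2 * g))
        marg = ((1 + N * g) ** -0.5
                * (1 + t**2 / ((1 + N * g) * v)) ** (-(v + 1) / 2))
        return marg * np.exp(log_prior)

    alt, _ = integrate.quad(integrand, 0, np.inf)
    return alt / null
```

Values below 1 favor the null hypothesis; values above 1 favor the alternative.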

Fig 4. Transfer effects.

Displayed are means from the MANOVA (N = 42 for Mind Frontiers, N = 45 for Active Control). Error bars are SEM.

https://doi.org/10.1371/journal.pone.0142169.g004

In perceptual speed, the Mind Frontiers group outperformed the active control group, with the Mind Frontiers group correctly completing more items within each test’s time limit—although this effect was weaker in the linear mixed model analysis. While no group by time interaction was observed in the accuracy or total correct composite measure of reasoning/Gf, there was a significant group x time interaction in the reaction time composite measure for reasoning/Gf, with the Mind Frontiers group displaying faster reaction times on correct trials at post-test compared to the active controls. Working memory span, episodic memory, selective attention, and divided attention did not show training-related effects; there were no improvements or decrements that significantly differed between groups.

Correlation between baseline performance and transfer gain

To determine whether transfer effects observed in the Mind Frontiers group vary according to baseline cognitive ability, we correlated the composite reasoning/Gf score at baseline (pre-test) with transfer gain for the composite measures that showed a group effect. None of the correlations were significant. Since working memory ability may also predict individual differences in transfer, we correlated baseline working memory scores with transfer gain. There was no significant correlation between working memory n-back baseline score and transfer gain. Baseline working memory span score was correlated with transfer gain in perceptual speed (r(42) = .274, p = .036, one-tailed), but this is not significant after Bonferroni correction for multiple comparisons.
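
These baseline-by-gain correlations with a Bonferroni-corrected threshold can be illustrated as follows, using hypothetical data (all variable names and values here are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2015)
baseline_gf = rng.normal(size=44)            # hypothetical baseline scores
gains = {"wm_nback": rng.normal(size=44),    # hypothetical transfer gains
         "reasoning_rt": rng.normal(size=44),
         "speed": rng.normal(size=44)}

alpha, m = 0.05, len(gains)                  # m = number of comparisons
for name, gain in gains.items():
    r, p = stats.pearsonr(baseline_gf, gain)
    flag = "*" if p < alpha / m else ""      # Bonferroni-corrected threshold
    print(f"{name}: r = {r:+.3f}, p = {p:.3f} {flag}")
```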

Correlation between training gain and transfer gain

Next we tested whether training-related improvements related to gains observed in the transfer tests. Table 5 reports the Pearson correlation coefficients and the confidence intervals from 2000 bootstrapped samples using the adjusted bootstrap percentile (BCa) method [121]. For the Mind Frontiers training group, overall training gain was significantly related only to working memory n-back transfer gain. There was no significant relationship between training gain and the perceptual speed and reasoning RT gain scores (Table 5 and S2 File). For the active control group, no significant relationship was observed, which is not surprising given that no transfer effects were observed for this group (Table 5).
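
The BCa bootstrap confidence intervals can be reproduced with standard tools; below is a sketch assuming a Pearson correlation as the statistic and 2000 resamples, as in the text.

```python
import numpy as np
from scipy import stats

def bca_ci_for_r(x, y, n_resamples=2000):
    """95% BCa bootstrap CI for a Pearson correlation.

    A sketch of the resampling approach described in the text; pairs
    (x_i, y_i) are resampled together (paired=True).
    """
    res = stats.bootstrap(
        (np.asarray(x, float), np.asarray(y, float)),
        statistic=lambda a, b: stats.pearsonr(a, b)[0],
        paired=True, vectorized=False,
        n_resamples=n_resamples, method="BCa")
    return res.confidence_interval
```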

We also examined the relationship between transfer gain and training gain on each Mind Frontiers game, as averaging across training games may dilute task-specific effects. Consistent with the composite training gain results, working memory n-back gain was significantly related to training gains in Supply Run, Sentry Duty, and Safe Cracker. Moreover, gain in working memory span was significantly correlated with training gain in Riding Shotgun, which was based on a matrix span task. Given the number of correlations tested, however, these results are not significant at p < .05 after Bonferroni correction for multiple comparisons.

Correlation between training feedback and transfer gain

To determine whether subjects’ experience and involvement in the games factored into the transfer effects, we correlated the three composite scores that showed transfer effects with participants’ ratings of the training games after the last session of training. Reported below are correlations significant at p < .05 and whose bootstrapped confidence intervals (2000 samples) do not include zero. First, we averaged ratings across the six Mind Frontiers games. None of these correlations were significant. Since the relationships may differ across the training games, we also conducted analyses at the task level.

The majority of the results were not robust or in the expected direction (greater gains with more positive ratings or experience), thus we refrain from interpreting them here. Reasoning RT gain was negatively correlated with demand in Sentry Duty (Kendall τB (n = 42) = -.283, p = .013 two-tailed, BCa 95% CI [-.484 -.069]) and with enjoyment in Pen ‘Em Up (Kendall τB (n = 42) = -.268, p = .016 two-tailed, BCa 95% CI [-.506 -.031]). Meanwhile, perceptual speed gain was negatively correlated with motivation in Safe Cracker (Kendall τB (n = 42) = -.251, p = .027 two-tailed, BCa 95% CI [-.453 -.044]). None of these correlations, however, pass a Bonferroni-corrected threshold and thus overall indicate no effect of gaming experience on transfer.

Task-level analysis

Results for each test are summarized in Table 6 and briefly discussed below. We also tested pre-test scores and did not find significant group differences at baseline (S2 File).

Working memory (n-back tasks)

Compared to the active control group, the Mind Frontiers group improved significantly on three out of four accuracy measures in the dual and single n-back tests. This is not surprising given that the Sentry Duty game in Mind Frontiers is based on the dual n-back task. Although the 2-back condition in the dual n-back did not reach significance, there was a trend of higher scores in the Mind Frontiers group at post-test. Reaction time improvements were also observed in the single letter n-back task.

Working memory (span tasks and VSTM)

While there was evidence of near-transfer to the n-back tasks in the Mind Frontiers group, no transfer effects were found in other common measures of working memory such as the Operation Span, Running Span, Symmetry Span, and VSTM tasks.

Reasoning/Fluid Intelligence

In the Matrix Reasoning task, there was a significant group by time interaction, with the active control group showing higher accuracy at post-test compared to the Mind Frontiers group. Follow-up t-tests show that this was driven by better overall post-test performance in the active control participants. Despite the differences in means at pre-test, there was no significant group effect for this baseline measure F(1,85) = 1.747, p = .190. After controlling for pre-test accuracy, with post-test accuracy as the dependent variable, the group effect was smaller but still significant F(1,84) = 4.043, p = .048, ηp2 = .046. There was no significant training-related effect in Matrix Reasoning reaction times, although Matrix Reasoning RT gain was negatively correlated with Matrix Reasoning accuracy gain at r(85) = -.269, p = .006, one-tailed.

Although there were no other significant group x time interactions for reasoning/Gf total correct measures, the Mind Frontiers group at post-test had significantly faster reaction times for Letter Sets. In the other reasoning tasks, there was a trend for faster RTs in the Mind Frontiers group, but these effects were not significant at the task-level. It is important to note that RTs are not typically used as measures for reasoning/Gf tests. We chose to analyze RTs in this study due to the speeded nature of the reasoning training tasks included in Mind Frontiers.

Perceptual Speed

The composite gain analyses revealed a significant transfer effect for perceptual speed. Task analyses show that this was driven by a significant group x time interaction in Letter Comparison, with more correct responses answered within the time limit for the Mind Frontiers group compared to the active control group. The interactions for Pattern Comparison and Digit-Symbol Substitution were not significant, but showed the same trend for improved performance in the Mind Frontiers group.

Episodic Memory

Logical Memory, Paired Associates and i-Position showed no significant transfer effects.

Selective Attention

Compared to active controls, the Mind Frontiers group had marginally improved accuracy in the Anti-saccade task. No transfer effects were found in the PVT, Flanker Test, and visual/number search game (25 Boxes). Although there was a significant group x time interaction in the MSIT RT congruency effect, there were no differences in the pre-subtraction measures of incongruent and congruent RTs.

Divided Attention

The Mind Frontiers group had significantly smaller trail-making costs at post-test, reflecting faster completion times for the alternating Trails B test. There were no transfer effects in Attentional Blink (lag 8 – lag 2 accuracy) and Dodge.

Multi-tasking and Task Switching

No training-related effects were found in Control Tower and TSDT.

Perceived benefit effects

Mann-Whitney U tests on the perceived self-improvement questions did not reveal significant group differences (S2 File), suggesting that the transfer effects are unlikely to be influenced by perceived improvement differences across groups. However, the same set of questions phrased in terms of general expectancy or potential benefits (not necessarily applicable to self) revealed significant group differences (S2 File), with the active control group expecting better sustained attention (U = 741.0, p = .019) and perception (U = 751.0, p = .023), and the Mind Frontiers group expecting better multi-tasking (U = 789.50, p = .045), problem-solving (U = 632.50, p = .001), and reasoning (U = 717.0, p = .012) performance. After Bonferroni correction for the fourteen multiple comparisons, however, only the problem-solving effect holds. There was no significant relationship between problem-solving expectancy and transfer gain. More details about training feedback and analyses can be found in S2 File.

Discussion

Participants who played the Mind Frontiers game showed near-transfer to the single and dual n-back tasks, which were similar in design to one of the trained working memory tasks, Sentry Duty. Training-related transfer effects were also observed for composite measures of perceptual speed and reasoning/Gf reaction times. These speed and reaction-time findings support the hypothesis that varied and integrated cognitive training in Mind Frontiers can lead to improvements in “lower-level” abilities of perceptual speed and attention—which may reflect more efficient processing of stimuli to support performance in more complex tasks. Although reasoning/Gf improvements were not found in primary accuracy measures, improvement in reasoning/Gf reaction times provides some promise for the plasticity of this higher-level ability. It is important to note, however, that no training-related effects were observed in five out of the eight composite measures tested here, and that differential expectancy regarding the nature of training suggests some caution in the interpretation of the results.

Transfer effects

Baseline cognitive performance as measured by reasoning/Gf had no effect on transfer gains in the Mind Frontiers group, which suggests that the training had a relatively uniform effect on participants. This is likely due to the adaptive and relative difficulty of the training tasks, which decreased the likelihood of performance plateaus. Other computer-based training studies have found that baseline ability measures either negatively or positively predict improvements [49, 54, 122], with results varying depending on the nature of training tasks. The null effect of baseline reasoning/Gf in the current study may also reflect lack of power or variability, though it is also possible that the heterogeneous and adaptive games employed here decreased the likelihood of floor or ceiling effects in overall training improvement.

Similar to Irrigator, Safe Cracker, and Pen ‘Em Up in Mind Frontiers, the reasoning/Gf transfer tests required task execution within a very limited time frame. Although the perceptual speed tasks were not reasoning in nature per se, they also involved completion of as many items as possible within a certain time limit. Did experience with these time-limited games specifically drive the transfer gains observed in the speed and RT measures? Several training studies find that only those who improve on the training tasks (“responders”) also show transfer on untrained tests (e.g., [12, 113]). To test this, we correlated overall training gain and transfer gain as measured by composite scores. Only the working memory n-back composite score was significantly and reliably related to improvement in Mind Frontiers, similar to previous findings in working memory training [34]. As expected, no significant relationship was observed for the active control group.

Apart from transfer to another working memory n-back test, Thompson and colleagues [34] did not find any transfer to untrained paradigms. In the current study, which involved a larger sample size, we found some evidence for transfer in reasoning RT and speed measures. However, these performance gains were not related to training gains and hint that the benefits observed may be due to a factor or combination of factors common to the Mind Frontiers tasks and not necessarily attributable to processes such as working memory, reasoning, or attentional control. It is plausible that rather than developing these skills per se, the overarching time-limited nature of the tasks made participants better prepared for the speed-intensive tests at post-test.

No training-related effects were observed in the working memory span tests, despite the inclusion of “Riding Shotgun,” a Mind Frontiers game that is similar to a simple working memory span task (Symmetry Span) employed in a training study that found transfer to untrained span tests [28]. These incompatible results may arise from differences in training methodology; the Mind Frontiers group spent less time overall training on the span task (20 12-minute sessions) compared to Harrison et al. [28], where only span tasks were performed for the duration of 20 45-minute sessions. In addition, transfer effects may be very specific to the type of training received. Similar to the simple span training group of [28], the Mind Frontiers group did not improve in tests of complex working memory span (Operation Span, Symmetry Span). While the previous study found improvements in Running Span for both the simple and complex working memory training groups, the training and test stimuli were the same. The absence of a Running Span effect in the current study can be attributed to the specificity of stimuli—in that the Riding Shotgun game involved spatial locations while the Running Span test involved letter stimuli. Unlike the current study, the previous experiment also incorporated performance-based bonus compensation, which may have also led to differences in motivation. Nonetheless, an examination of individual game performance and transfer gain revealed a modest yet positive relationship between working memory-span gain and training gain on Riding Shotgun. While this effect no longer holds after multiple comparison correction, it is consistent with the n-back finding of transfer gains in tests similar to the training tasks, as well as other studies that find transfer to various working memory span tests after adaptive training on verbal and visuo-spatial span tasks [17, 123].

Expectancy and placebo effects

The responses to the perceived improvement and expectancy questions were not consistent, with no significant group differences in questions of perceived self-improvement, but slight group effects when the same questions were phrased in terms of general potential improvement. These findings, however, are not necessarily contradictory in terms of expectancy biases and may instead reveal that the participants accurately assessed the properties of their training tasks. Nonetheless, this awareness has been argued to potentially lead to sub-conscious expectations and thus placebo effects [29, 30]. Although the transfer effects were not correlated with the improvement or expectancy ratings, we cannot conclusively rule out that the benefits observed in the Mind Frontiers group reflect placebo effects to some extent. The results obtained here also highlight the importance of wording in self-report assessments, such that subtle changes in question framing may reveal different patterns of results.

Given these findings, a more careful examination of placebo effects is warranted. One approach involves comparison to a survey-based study where participants learn about specific interventions and evaluate intervention-related outcomes [124]. Another involves having participants specifically rate perceived improvement and expectancy for specific tasks [29], rather than general abilities as implemented in the current study.

Limitations and Future Directions

The improvement observed in reasoning/Gf is promising, but modest; effects were found in reaction time and not accuracy, which is the more established measure for estimating reasoning/Gf [125–127]. Future research should involve administration of more sensitive accuracy measures that may better capture any subtle changes in processing efficiency. The tests used in the current study were derived from previous studies demonstrating sensitivity to age-related differences or changes. The Matrix Reasoning test used, for example, is a modified and abbreviated computerized version of the more extensive 60-item Raven’s Advanced Progressive Matrices [126]. Although easier to administer, these abridged tests may be less suitable for detecting subtle effects or changes [128, 129], especially in relatively high-functioning young adults and in the presence of practice (test re-test) effects. Moreover, it is possible that longer and more intense training, as well as conducting a study with a larger sample size, may lead to more measurable gains in higher-level abilities of reasoning/Gf and working memory.

Although there were no significant differences in engagement and frustration between groups, an important limitation of this study is the different training experience between the two groups. Group by time interactions were found in enjoyment and motivation, with increasing ratings for the Mind Frontiers group and decreasing ratings for the active control group. While the Mind Frontiers group completed a “gamified” version of the tasks, the active control participants completed less visually engaging laboratory tasks without explicit progress tracking, unlike the Mind Frontiers group, which received information on points accrued from gameplay and game levels attained. Unfortunately, thus far, very few training/transfer studies have collected such ratings. Therefore, it is impossible to know whether previous observations of transfer effects have been confounded by subjects’ expectancies about benefits. A follow-up study should equate the active control group on these motivational aspects of training and usability, with comparable presentation and progress tracking of the control training tasks.

As this study involves relatively high-functioning young adults, future directions include investigating whether individual differences in physical fitness and personality can moderate training and transfer benefits. Physical fitness has been shown to be highly related to executive control abilities [130, 131], while personality factors have been found to play a role in training improvement [34, 128, 132, 133]. Moreover, brain volume in specific cortical and subcortical regions has been shown to predict training and transfer benefits from videogame training [134, 135]. Analyzing structural and functional brain profiles may provide further insight into why specific interventions may be more successful for certain individuals, and help characterize the overlap between training tasks and tests that show training-related transfer.

Acknowledgments

The authors thank Tyler Harrison, Randall Engle, and colleagues for providing copies of the visual search training task and operation span tests, Timothy Salthouse for providing the reasoning, perceptual speed and episodic memory tests, Nash Unsworth for providing tests used for selective attention, Wei Ji Ma for providing the VSTM test, Kate Fitzgerald for providing a copy of the event-related MSIT, Jay Zhang for providing Dodge for testing purposes, Mircea Petza for providing 25 Boxes (Number Search) for testing purposes, Walter Boot for assistance in developing the post-experiment questionnaires, and John Gaspar and Ronald Carbonari for assistance with the change detection training task. For their work in building the Mind Frontiers video game, we thank our colleagues at Aptima, Inc. and Breakaway. For database and technical assistance, we thank members of the Beckman ITS Department especially Timothy Roth and Marc Taylor. Although the MRI data is not reported here, we thank Nancy Dodge, Holly Tracy, Tracy Henigman, Anya Knecht, and Rochelle Yambert for assistance with data collection. We are also very grateful to the lab managers and research assistants of the Kramer lab for their invaluable contribution to data collection.

Author Contributions

Conceived and designed the experiments: PLB CMA MBK AS CD NW AG AFK. Performed the experiments: PLB CMA MBK KJ AS. Analyzed the data: PLB CMA MBK KJ AS. Contributed reagents/materials/analysis tools: PLB MBK AS CD AG NW. Wrote the paper: PLB CMA MBK AFK. Designed the software for the Mind Frontiers games: CD.

References

  1. Boot WR, Kramer AF. The brain-games conundrum: Does cognitive training really sharpen the mind? Cerebrum. 2014; 2014: 15. pmid:26034522
  2. Schubert T, Strobach T, Karbach J. New directions in cognitive training: on methods, transfer, and application. Psychol Res. 2014; 78(6): 749–755. pmid:25304045
  3. Jolles DD, Crone EA. Training the developing brain: a neurocognitive perspective. Front Hum Neurosci. 2012; 6: 76. pmid:22509161
  4. Hertzog C, Kramer AF, Wilson RS, Lindenberger U. Enrichment effects on adult cognitive development: can the functional capacity of older adults be preserved and enhanced? Psychol Sci Public Interest. 2008; 9(1): 1–65. pmid:26162004
  5. Gray JR, Thompson PM. Neurobiology of intelligence: science and ethics. Nat Rev Neurosci. 2004; 5(6): 471–482. pmid:15152197
  6. Gottfredson LS. Why g matters: The complexity of everyday life. Intelligence. 1997; 24: 79–132.
  7. Colom R, Escorial S, Shih PC, Privado J. Fluid intelligence, memory span, and temperament difficulties predict academic performance of young adolescents. Pers Individ Dif. 2007; 42(8): 1503–1514.
  8. Rueda MR, Rothbart MK, McCandliss BD, Saccomanno L, Posner MI. Training, maturation, and genetic influences on the development of executive attention. Proc Natl Acad Sci U S A. 2005; 102(41): 14931–14936. pmid:16192352
  9. Thorell LB, Lindqvist S, Bergman Nutley S, Bohlin G, Klingberg T. Training and transfer effects of executive functions in preschool children. Dev Sci. 2009; 12(1): 106–113. pmid:19120418
  10. Rueda MR, Checa P, Cómbita LM. Enhanced efficiency of the executive attention network after training in preschool children: immediate changes and effects after two months. Dev Cogn Neurosci. 2012; 2: S192–S204. pmid:22682908
  11. Bergman Nutley S, Söderqvist S, Bryde S, Thorell LB, Humphreys K, Klingberg T. Gains in fluid intelligence after training non-verbal reasoning in 4-year-old children: a controlled, randomized study. Dev Sci. 2011; 14(3): 591–601. pmid:21477197
  12. Jaeggi SM, Buschkuehl M, Jonides J, Shah P. Short- and long-term benefits of cognitive training. Proc Natl Acad Sci U S A. 2011; 108(25): 10081–10086. pmid:21670271
  13. Loosli SV, Buschkuehl M, Perrig WJ, Jaeggi SM. Working memory training improves reading processes in typically developing children. Child Neuropsychol. 2012; 18(1): 62–78. pmid:21623483
  14. Mackey AP, Hill SS, Stone SI, Bunge SA. Differential effects of reasoning and speed training in children. Dev Sci. 2011; 14(3): 582–590. pmid:21477196
  15. Kirk H, Gray K, Riby D, Cornish K. Cognitive training as a resolution for early executive function difficulties in children with intellectual disabilities. Res Dev Disabil. 2015; 38: 145–160. pmid:25561358
  16. Klingberg T, Fernell E, Olesen PJ, Johnson M, Gustafsson P, Dahlström K, et al. Computerized training of working memory in children with ADHD: a randomized, controlled trial. J Am Acad Child Adolesc Psychiatry. 2005; 44(2): 177–186. pmid:15689731
  17. Holmes J, Gathercole SE, Dunning DL. Adaptive training leads to sustained enhancement of poor working memory in children. Dev Sci. 2009; 12(4): F9–F15. pmid:19635074
  18. Dunning DL, Holmes J, Gathercole SE. Does working memory training lead to generalized improvements in children with low working memory? A randomized controlled trial. Dev Sci. 2013; 16(6): 915–925. pmid:24093880
  19. Klingberg T, Forssberg H, Westerberg H. Training of working memory in children with ADHD. J Clin Exp Neuropsychol. 2002; 24(6): 781–791. pmid:12424652
  20. Olesen PJ, Westerberg H, Klingberg T. Increased prefrontal and parietal activity after training of working memory. Nat Neurosci. 2004; 7(1): 75–79. pmid:14699419
  21. Jaeggi SM, Buschkuehl M, Jonides J, Perrig WJ. Improving fluid intelligence with training on working memory. Proc Natl Acad Sci U S A. 2008; 105(19): 6829–6833. pmid:18443283
  22. Karbach J, Kray J. How useful is executive control training? Age differences in near and far transfer of task-switching training. Dev Sci. 2009; 12(6): 978–990. pmid:19840052
  23. Willis SL, Schaie KW. Training the elderly on the ability factors of spatial orientation and inductive reasoning. Psychol Aging. 1986; 1(3): 239–247. pmid:3267404
  24. Dahlin E, Neely AS, Larsson A, Bäckman L, Nyberg L. Transfer of learning after updating training mediated by the striatum. Science. 2008; 320(5882): 1510–1512. pmid:18556560
  25. Dahlin E, Nyberg L, Bäckman L, Neely AS. Plasticity of executive functioning in young and older adults: immediate training gains, transfer, and long-term maintenance. Psychol Aging. 2008; 23(4): 720–730. pmid:19140643
  26. Klingberg T. Training and plasticity of working memory. Trends Cogn Sci. 2010; 14(7): 317–324. pmid:20630350
  27. Morrison AB, Chein JM. Does working memory training work? The promise and challenges of enhancing cognition by training working memory. Psychon Bull Rev. 2011; 18(1): 46–60. pmid:21327348
  28. Harrison TL, Shipstead Z, Hicks KL, Hambrick DZ, Redick TS, Engle RW. Working memory training may increase working memory capacity but not fluid intelligence. Psychol Sci. 2013; 24(12): 2409–2419. pmid:24091548
  29. Boot WR, Blakely DP, Simons DJ. Do action video games improve perception and cognition? Front Psychol. 2011; 2: 226. pmid:21949513
  30. Boot WR, Simons DJ, Stothart C, Stutts C. The pervasive problem with placebos in psychology: why active control groups are not sufficient to rule out placebo effects. Perspect Psychol Sci. 2013; 8(4): 445–454. pmid:26173122
  31. Shipstead Z, Redick TS, Engle RW. Is working memory training effective? Psychol Bull. 2012; 138(4): 628–654. pmid:22409508
  32. Redick TS, Shipstead Z, Harrison TL, Hicks KL, Fried DE, Hambrick DZ, et al. No evidence of intelligence improvement after working memory training: a randomized, placebo-controlled study. J Exp Psychol Gen. 2013; 142(2): 359–379. pmid:22708717
  33. Kundu B, Sutterer DW, Emrich SM, Postle BR. Strengthened effective connectivity underlies transfer of working memory training to tests of short-term memory and attention. J Neurosci. 2013; 33(20): 8705–8715. pmid:23678114
  34. Thompson TW, Waskom ML, Garel KA, Cardenas-Iniguez C, Reynolds GO, Winter R, et al. Failure of working memory training to enhance cognition or intelligence. PLoS One. 2013; 8(5): e63614. pmid:23717453
  35. Sprenger AM, Atkins SM, Bolger DJ, Harbison JI, Novick JM, Chrabaszcz JS, et al. Training working memory: limits of transfer. Intelligence. 2013; 41(5): 638–663.
  36. Schweizer S, Hampshire A, Dalgleish T. Extending brain-training to the affective domain: increasing cognitive and affective executive control through emotional working memory training. PLoS One. 2011; 6(9): e24372. pmid:21949712
  37. Stephenson CL, Halpern DF. Improved matrix reasoning is limited to training on tasks with a visuospatial component. Intelligence. 2013; 41(5): 341–357.
  38. Au J, Sheehan E, Tsai N, Duncan GJ, Buschkuehl M, Jaeggi SM. Improving fluid intelligence with training on working memory: a meta-analysis. Psychon Bull Rev. 2014; 22(2): 366–377.
  39. Karbach J, Verhaeghen P. Making working memory work: a meta-analysis of executive-control and working memory training in older adults. Psychol Sci. 2014; 25(11): 2027–2037. pmid:25298292
  40. 40. Lampit A, Hallock H, Valenzuela M. Computerized cognitive training in cognitively healthy older adults: a systematic review and meta-analysis of effect modifiers. PLoS Med. 2014; 11(11): e1001756. pmid:25405755
  41. 41. Dougherty MR, Hamovitz T, Tidwell JW. Reevaluating the effectiveness of n-back training on transfer through the Bayesian lens: Support for the null. Psychon Bull Rev. 2015; In press.
  42. 42. Melby-Lervåg M, Hulme C. There is no convincing evidence that working memory training is effective: a reply to Au et al. (2014) and Karbach and Verhaeghen (2014). Psychon Bull Rev. 2015; In press.
  43. 43. Ball K, Berch DB, Helmers KF, Jobe JB, Leveck MD, Marsiske M, et al. Effects of cognitive training interventions with older adults: a randomized controlled trial. JAMA. 2002; 288(18): 2271–2281. pmid:12425704
  44. 44. Willis SL, Tennstedt SL, Marsiske M, Ball K, Elias J, Koepke KM, et al. Long-term effects of cognitive training on everyday functional outcomes in older adults. JAMA. 2006; 296(23): 2805–2814. pmid:17179457
  45. 45. Green CS, Bavelier D. Exercising your brain: a review of human brain plasticity and training-induced learning. Psychol Aging. 2008; 23(4): 692–701. pmid:19140641
  46. 46. Ackerman PL, Kanfer R, Calderwood C. Use it or lose it? Wii brain exercise practice and reading for domain knowledge. Psychol Aging. 2010; 25(4): 753–766. pmid:20822257
  47. 47. Boot WR, Basak C, Erickson KI, Neider M, Simons DJ, Fabiani M, et al. Transfer of skill engendered by complex task training under conditions of variable priority. Acta Psychol (Amst). 2010; 135(3): 349–357.
  48. 48. Owen AM, Hampshire A, Grahn JA, Stenton R, Dajani S, Burns AS, et al. Putting brain training to the test. Nature. 2010; 465(7299): 775–778. pmid:20407435
  49. 49. Lee H, Boot WR, Basak C, Voss MW, Prakash RS, Neider M, et al. Performance gains from directed training do not transfer to untrained tasks. Acta Psychol (Amst). 2012; 139(1): 146–158.
  50. 50. Schubert T, Finke K, Redel P, Kluckow S, Müller H, Strobach T. Video game experience and its influence on visual attention parameters: an investigation using the framework of the Theory of Visual Attention (TVA). Acta Psychol (Amst). 2015; 157: 200–214.
  51. 51. Schmiedek F, Lovden M, Lindenberger U. Hundred days of cognitive training enhance broad cognitive abilities in adulthood: findings from the COGITO study. Front Aging Neurosci. 2010; 2: 27. pmid:20725526
  52. 52. Bavelier D, Green CS, Pouget A, Schrater P. Brain plasticity through the life span: learning to learn and action video games. Annu Rev Neurosci. 2012; 35: 391–416. pmid:22715883
  53. 53. Oei AC, Patterson MD. Enhancing cognition with video games: a multiple game training study. PLoS One. 2013; 8(3): e58546. pmid:23516504
  54. 54. Baniqued PL, Kranz MB, Voss MW, Lee H, Cosman JD, Severson J, et al. Cognitive training with casual video games: points to consider. Front Psychol. 2014; 4: 1010. pmid:24432009
  55. 55. Lampit A, Ebster C, Valenzuela M. Multi-domain computerized cognitive training program improves performance of bookkeeping tasks: a matched-sampling active-controlled trial. Front Psychol. 2014; 5: 794. pmid:25120510
  56. 56. van Muijden J, Band GP, Hommel B. Online games training aging brains: limited transfer to cognitive control functions. Front Hum Neurosci. 2012; 6: 221. pmid:22912609
  57. 57. von Bastian CC, Langer N, Jäncke L, Oberauer K. Effects of working memory training in young and old adults. Mem Cognit. 2013; 41(4): 611–624. pmid:23263879
  58. 58. Gaspar JG, Neider MB, Simons DJ, McCarley JS, Kramer AF. Change detection: training and transfer. PLoS One. 2013; 8(6): e67781. pmid:23840775
  59. 59. Salthouse TA. Relations between cognitive abilities and measures of executive functioning. Neuropsychology. 2005; 19(4): 532–545. pmid:16060828
  60. 60. Redick TS, Unsworth N, Kelly AJ, Engle RW. Faster, smarter? Working memory capacity and perceptual speed in relation to fluid intelligence. J Cogn Psychol (Hove). 2012; 24(7): 844–854.
  61. 61. Salthouse TA, Ferrer-Caja E. What needs to be explained to account for age-related effects on multiple cognitive variables? Psychol Aging. 2003; 18(1): 91–110. pmid:12641315
  62. 62. Shipstead Z, Harrison TL, Engle RW. Working memory capacity and visual attention: top-down and bottom-up guidance. Q J Exp Psychol (Hove). 2012; 65(3): 401–407.
  63. 63. Jensen AR. How much can we boost IQ and scholastic achievement? Harv Educ Rev. 1969; 39(1): 1–123.
  64. 64. Schaie KW. What can we learn from longitudinal studies of adult development? Res Hum Dev. 2005; 2(3): 133–158. pmid:16467912
  65. 65. Salthouse TA. When does age-related cognitive decline begin? Neurobiol Aging. 2009; 30(4): 507–514. pmid:19231028
  66. 66. Godin G, Shephard R. Godin leisure-time exercise questionnaire. Med Sci Sports Exerc. 1997; 29(6s): S36–S38.
  67. 67. Kane MJ, Poole BJ, Tuholski SW, Engle RW. Working memory capacity and the top-down control of visual search: exploring the boundaries of "executive attention". J Exp Psychol Learn Mem Cogn. 2006; 32(4): 749–777. pmid:16822145
  68. 68. Poole BJ, Kane MJ. Working-memory capacity predicts the executive control of visual search among distractors: the influences of sustained and selective attention. Q J Exp Psychol (Hove). 2009; 62(7): 1430–1454.
  69. 69. Schneider W, Eschman A, Zuccolotto A. E-Prime user’s guide. Pittsburgh: Psychology Software Tools Inc.; 2002.
  70. 70. Brainard DH. The psychophysics toolbox. Spat Vis. 1997; 10: 433–436. pmid:9176952
  71. 71. Pelli DG. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis. 1997; 10: 437–442. pmid:9176953
  72. 72. Zachary RA. Shipley institute of living scale: revised manual. Los Angeles: Western Psychological Services; 1986.
  73. Raven J. Advanced progressive matrices: Set II. London: H.K. Lewis; 1962.
  74. Crone EA, Wendelken C, van Leijenhorst L, Honomichl RD, Christoff K, Bunge SA. Neurocognitive development of relational reasoning. Dev Sci. 2009; 12(1): 55–66. pmid:19120413
  75. Ekstrom RB, French JW, Harman HH, Dermen D. Manual for kit of factor referenced cognitive tests. Princeton: Educational Testing Service; 1976.
  76. Bennett GK, Seashore HG, Wesman AG. Differential aptitude tests. San Antonio: The Psychological Corporation; 1947.
  77. Wechsler D. WAIS-III: Wechsler adult intelligence scale. San Antonio: The Psychological Corporation; 1997.
  78. Salthouse TA, Babcock RL. Decomposing adult age differences in working memory. Dev Psychol. 1991; 27(5): 763–776.
  79. Wechsler D. Wechsler memory scale (WMS-III). San Antonio: The Psychological Corporation; 1997.
  80. Salthouse TA, Fristoe N, Rhee SH. How localized are age-related effects on neuropsychological measures? Neuropsychology. 1996; 10(2): 272–285.
  81. Watson PD, Voss JL, Warren DE, Tranel D, Cohen NJ. Spatial reconstruction by patients with hippocampal damage is dominated by relational memory errors. Hippocampus. 2013; 23(7): 570–580. pmid:23418096
  82. Monti JM, Cooke GE, Watson PD, Voss MW, Kramer AF, Cohen NJ. Relating hippocampus to relational memory processing across domains and delays. J Cogn Neurosci. 2015; 27(2): 234–245. pmid:25203273
  83. Broadway JM, Engle RW. Validating running memory span: Measurement of working memory capacity and links with fluid intelligence. Behav Res Methods. 2010; 42(2): 563–570. pmid:20479188
  84. Turner ML, Engle RW. Is working memory capacity task dependent? J Mem Lang. 1989; 28(2): 127–154.
  85. Unsworth N, Heitz RP, Schrock JC, Engle RW. An automated version of the operation span task. Behav Res Methods. 2005; 37(3): 498–505. pmid:16405146
  86. Redick TS, Broadway JM, Meier ME, Kuriakose PS, Unsworth N, Kane MJ, et al. Measuring working memory capacity with automated complex span tasks. Eur J Psychol Assess. 2012; 28(3): 164–171.
  87. Keshvari S, van den Berg R, Ma WJ. No evidence for an item limit in change detection. PLoS Comput Biol. 2013; 9(2): e1002927. pmid:23468613
  88. Luck SJ, Vogel EK. The capacity of visual working memory for features and conjunctions. Nature. 1997; 390(6657): 279–281. pmid:9384378
  89. Kirchner WK. Age differences in short-term retention of rapidly changing information. J Exp Psychol. 1958; 55(4): 352–358. pmid:13539317
  90. Kane MJ, Conway AR, Miura TK, Colflesh GJ. Working memory, attention control, and the N-back task: a question of construct validity. J Exp Psychol Learn Mem Cogn. 2007; 33(3): 615–622. pmid:17470009
  91. Jaeggi SM, Seewer R, Nirkko AC, Eckstein D, Schroth G, Groner R, et al. Does excessive memory load attenuate activation in the prefrontal cortex? Load-dependent processing in single and dual tasks: functional magnetic resonance imaging study. Neuroimage. 2003; 19(2): 210–225. pmid:12814572
  92. Jaeggi SM, Buschkuehl M, Etienne A, Ozdoba C, Perrig WJ, Nirkko AC. On how high performers keep cool brains in situations of cognitive overload. Cogn Affect Behav Neurosci. 2007; 7(2): 75–89. pmid:17672380
  93. Reitan RM. Validity of the trail making test as an indicator of organic brain damage. Percept Mot Skills. 1958; 8(3): 271–276.
  94. Raymond JE, Shapiro KL, Arnell KM. Temporary suppression of visual processing in an RSVP task: an attentional blink? J Exp Psychol Hum Percept Perform. 1992; 18(3): 849–860. pmid:1500880
  95. Armor Games [Internet]. [Place Unknown]: Armor Games; 2005–2015. Available: http://armorgames.com/play/2963/dodge. Accessed 2015 Oct 25.
  96. Unsworth N, Spillers GJ. Working memory capacity: attention control, secondary memory, or both? A direct test of the dual-component model. J Mem Lang. 2010; 62(4): 392–406.
  97. Eriksen BA, Eriksen CW. Effects of noise letters upon the identification of a target letter in a nonsearch task. Percept Psychophys. 1974; 16(1): 143–149.
  98. Kane MJ, Bleckley MK, Conway AR, Engle RW. A controlled-attention view of working-memory capacity. J Exp Psychol Gen. 2001; 130(2): 169–183. pmid:11409097
  99. Hallett P. Primary and secondary saccades to goals defined by instructions. Vision Res. 1978; 18(10): 1279–1296. pmid:726270
  100. Dinges DF, Powell JW. Microcomputer analyses of performance on a portable, simple visual RT task during sustained operations. Behav Res Methods Instrum Comput. 1985; 17(6): 652–655.
  101. Fitzgerald KD, Perkins SC, Angstadt M, Johnson T, Stern ER, Welsh RC, et al. The development of performance-monitoring function in the posterior medial frontal cortex. Neuroimage. 2010; 49(4): 3463–3473. pmid:19913101
  102. Bush G, Shin L, Holmes J, Rosen B, Vogt B. The multi-source interference task: validation study with fMRI in individual subjects. Mol Psychiatry. 2003; 8(1): 60–70. pmid:12556909
  103. Platina Games [Internet]. [Place Unknown]: platinagames.com; 2010. Available: http://www.platinagames.com/game2.html. Accessed 2015 Oct 25.
  104. Costa R, Medeiros-Ward N, Halper N, Helm L, Maloney A. The shifting and dividing of attention between visual and auditory tasks. J Vis. 2012; 12(9): 1032.
  105. Cowan N. Metatheory of storage capacity limits. Behav Brain Sci. 2001; 24(1): 154–176.
  106. John OP, Donahue EM, Kentle RL. The big five inventory—versions 4a and 54. Berkeley: University of California, Berkeley, Institute of Personality and Social Research; 1991.
  107. Duckworth AL, Peterson C, Matthews MD, Kelly DR. Grit: perseverance and passion for long-term goals. J Pers Soc Psychol. 2007; 92(6): 1087–1101. pmid:17547490
  108. Kecklund G, Åkerstedt T. The psychometric properties of the Karolinska Sleep Questionnaire. J Sleep Res. 1992; 1(Suppl 1): 113.
  109. Buysse DJ, Reynolds CF, Monk TH, Berman SR, Kupfer DJ. The Pittsburgh Sleep Quality Index: A new instrument for psychiatric practice and research. Psychiatry Res. 1989; 28(2): 193–213. pmid:2748771
  110. Jurca R, Jackson AS, LaMonte MJ, Morrow JR, Blair SN, Wareham NJ, et al. Assessing cardiorespiratory fitness without performing exercise testing. Am J Prev Med. 2005; 29(3): 185–193. pmid:16168867
  111. Ophir E, Nass C, Wagner AD. Cognitive control in media multitaskers. Proc Natl Acad Sci U S A. 2009; 106(37): 15583–15587. pmid:19706386
  112. Chein JM, Morrison AB. Expanding the mind's workspace: training and transfer effects with a complex working memory span task. Psychon Bull Rev. 2010; 17(2): 193–199. pmid:20382919
  113. Novick JM, Hussey E, Teubner-Rhodes S, Harbison JI, Bunting MF. Clearing the garden-path: improving sentence processing through cognitive control training. Lang Cogn Neurosci. 2014; 29(2): 186–217.
  114. Gray JR, Chabris CF, Braver TS. Neural mechanisms of general fluid intelligence. Nat Neurosci. 2003; 6(3): 316–322. pmid:12592404
  115. Coderre EL, van Heuven WJ. The effect of script similarity on executive control in bilinguals. Front Psychol. 2014; 5: 1070. pmid:25400594
  116. von Bastian CC, Eschen A. Does working memory training have to be adaptive? Psychol Res. 2015; In press.
  117. Baayen RH, Davidson DJ, Bates DM. Mixed-effects modeling with crossed random effects for subjects and items. J Mem Lang. 2008; 59(4): 390–412.
  118. Bates D, Maechler M, Bolker BM, Walker S. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015; 67(1): 1–48.
  119. Halekoh U, Højsgaard S. A Kenward-Roger approximation and parametric bootstrap methods for tests in linear mixed models—the R package pbkrtest. J Stat Softw. 2014; 59(9): 1–32.
  120. Rouder JN, Speckman PL, Sun D, Morey RD, Iverson G. Bayesian t tests for accepting and rejecting the null hypothesis. Psychon Bull Rev. 2009; 16(2): 225–237. pmid:19293088
  121. Efron B. Better bootstrap confidence intervals. J Am Stat Assoc. 1987; 82(397): 171–185.
  122. Lovden M, Brehmer Y, Li SC, Lindenberger U. Training-induced compensation versus magnification of individual differences in memory performance. Front Hum Neurosci. 2012; 6: 141. pmid:22615692
  123. Brehmer Y, Westerberg H, Bäckman L. Working-memory training in younger and older adults: training gains, transfer, and maintenance. Front Hum Neurosci. 2012; 6: 63. pmid:22470330
  124. Stothart CR, Simons DJ, Boot WR, Kramer AF. Is the effect of aerobic exercise on cognition a placebo effect? PLoS One. 2014; 9(10): e109557. pmid:25289674
  125. Horn JL, Cattell RB. Age differences in fluid and crystallized intelligence. Acta Psychol (Amst). 1967; 26: 107–129.
  126. Raven J, Raven J, Court J. Manual for Raven's Progressive Matrices and Vocabulary Scales. Oxford: Oxford Psychologists Press; 1998.
  127. Salthouse TA, Pink JE, Tucker-Drob EM. Contextual analysis of fluid intelligence. Intelligence. 2008; 36(5): 464–486. pmid:19137074
  128. Jaeggi SM, Studer-Luethi B, Buschkuehl M, Su Y, Jonides J, Perrig WJ. The relationship between n-back performance and matrix reasoning—implications for training and transfer. Intelligence. 2010; 38(6): 625–635.
  129. Moody DE. Can intelligence be increased by training on a task of working memory? Intelligence. 2009; 37(4): 327–328.
  130. Kramer AF, Hahn S, Cohen NJ, Banich MT, McAuley E, Harrison CR, et al. Ageing, fitness and neurocognitive function. Nature. 1999; 400(6743): 418–419. pmid:10440369
  131. Hillman CH, Erickson KI, Kramer AF. Be smart, exercise your heart: Exercise effects on brain and cognition. Nat Rev Neurosci. 2008; 9(1): 58–65. pmid:18094706
  132. Jaeggi SM, Buschkuehl M, Shah P, Jonides J. The role of individual differences in cognitive training and transfer. Mem Cognit. 2014; 42(3): 464–480. pmid:24081919
  133. Studer-Luethi B, Jaeggi SM, Buschkuehl M, Perrig WJ. Influence of neuroticism and conscientiousness on working memory training outcome. Pers Individ Dif. 2012; 53(1): 44–49.
  134. Basak C, Voss MW, Erickson KI, Boot WR, Kramer AF. Regional differences in brain volume predict the acquisition of skill in a complex real-time strategy videogame. Brain Cogn. 2011; 76(3): 407–414. pmid:21546146
  135. Erickson KI, Boot WR, Basak C, Neider MB, Prakash RS, Voss MW, et al. Striatal volume predicts level of video game skill acquisition. Cereb Cortex. 2010; 20(11): 2522–2530. pmid:20089946