A few comments/questions
Posted on 02 May 2013 at 20:46 GMT
1. Having looked up the Edwards et al (2005) article on the UFOV task, which was used as the main outcome measure here, I'm struck by how similar the training task was to this outcome task (down to the automobiles as central stimuli and the concentrically presented distractors). It would seem to me that the main finding here should be characterized as *extremely* near transfer (albeit long-lasting extremely near transfer). In fact, I can't imagine a closer analogue to the UFOV outcome task.
2. I thank the authors for re-analyzing the data after dropping those from subjects who completed less than an hour of training. The results, though, give me pause. First, the effect size for the in-house training group actually got notably *weaker* (although still statistically significant) after eliminating the "non-adherence" subjects, suggesting that the non-adherence subjects actually showed a rather strong "training" effect. Second, given that the median training durations across most of your groups was 10 hours, that means that substantial numbers of subjects trained for a considerably more limited amount of time. What is the *cognitive* mechanism that could account for year-long training benefits for non-adherence subjects and for minimally trained subjects? It would seem that non-cognitive/motivational/expectancy/placebo effects would be the most plausible explanation.
3. Although the study included an active control group (crossword puzzles), it does not appear to have been an adaptive training task that got more challenging as subjects trained/improved, and so this seems to be a significant limitation. As well, regarding the primary UFOV outcome variable, at least, the training and the control groups undoubtedly experienced differential placebo effects, with the training group having much stronger expectations of improvement given the similarity of the training and UFOV tasks.
4. The secondary, far transfer results (to speeded neuropsychological tests) are probably more impressive conceptually, insofar as they are less vulnerable to the criticism of differential placebo effects between training and control groups (although the trained groups *were* trained on a speeded task). However, I'm a bit uncomfortable with the speculation/suggestion in the General Discussion that these measures might tell us something about executive functions, as the effects here were primarily in the baseline conditions that don't particularly (or substantially) involve any executive processes.
Thank you, in advance, for considering these comments on your interesting work.
Michael J. Kane
Dept of Psychology
University of North Carolina at Greensboro
RE: A few comments/questions
Replied on 03 May 2013 at 01:22 GMT
Thanks so very much for your comments. I shall try to address them briefly.
1. I agree concerning the similarities of the cognitive training to the UFOV test itself. This was a concern with the ACTIVE study. As noted in the paper, given our results, it is possible that ACTIVE overestimated the speed training effect due to (a) the greater similarity of the UFOV to the older MS-DOS version of the cognitive training, and (b) the fact that in our IHAMS study the booster effect occurs only for the UFOV. That said, the effects in IHAMS on Trails A and B, the SDMT, and Stroop Word indicate that there is real improvement.
2. For the non-adherence sensitivity analysis, I would not characterize a change in effect size for the on-site 10 hours of Road Tour training group from 0.32 to 0.28 as notably weaker.
3. We selected the crosswords program for our attention control to overcome the limitation of ACTIVE's use of a no-contact control group. We wanted a computer-based program for a task with no known evidence of leading to cognitive improvement. That said, you are correct that it is not an adaptive task. But we do not believe that constitutes a significant limitation.
4. We agree that the transfer effects to non-trained tasks, which are often referred to as tapping executive function, are impressive.
RE: RE: A few comments/questions
Replied on 03 May 2013 at 12:56 GMT
Thank you for your thoughtful reply. Allow me one last round of comment, and you may have the final word, if you like.
1) We seem to agree about the extent to which your primary result reflects very near transfer, given the great similarity between the training task and the UFOV outcome measure. We still disagree, however, on how confident one should be about whether there was a more generalized (or "real") improvement in some fundamental cognitive capabilities, such as general processing speed. My remaining concerns are expressed below.
2) You're actually quite right that the on-site group's effect change, from including versus eliminating "non-adherence" subjects, is not very large (-.322 vs. -.278, respectively). However, my concern about this issue holds up, nonetheless. The fact that the effect declined at all (rather than getting at least somewhat stronger) after eliminating subjects with <1 hr of training is still indicative of substantial "training" gains being shown by the subjects who completed almost no training at all. Given that a non-negligible number of subjects showed an apparent benefit of training even without having really done any training, it opens up the possibility that some/much/most(?) of the training benefits demonstrated in this study were not actually due to some cognitive-ability change. I suspect that if you analyzed only the data from subjects who completed 4 hrs or less of training, across all groups, you'd still see an apparent training benefit at the one-year follow-up. If so, it would be extremely difficult to accept that a fundamental cognitive change had produced those gains.
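The dose-restricted sensitivity analysis proposed above (recomputing the training effect using only minimally trained subjects) could be sketched as follows. This is purely an illustrative Python sketch, not the study's actual analysis code; the record fields `group`, `hours`, and `ufov_change` are hypothetical names standing in for the relevant variables.

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference using the pooled standard deviation."""
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    n1, n2 = len(treatment), len(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

def dose_restricted_d(records, max_hours=4.0):
    """Recompute the training effect using only subjects who completed
    at most `max_hours` of training (hypothetical record format:
    dicts with 'group', 'hours', and 'ufov_change' keys)."""
    sub = [r for r in records if r["hours"] <= max_hours]
    treat = [r["ufov_change"] for r in sub if r["group"] == "training"]
    ctrl = [r["ufov_change"] for r in sub if r["group"] == "control"]
    return cohens_d(treat, ctrl)
```

If such a subgroup analysis still showed a sizable effect among subjects with almost no training exposure, that would support an expectancy/placebo interpretation over a cognitive-change one.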
3) I respectively disagree that comparing an adaptive training regimen to a non-adaptive control task is not a significant limitation (although this is an empirical question that could be addressed in future work). In principle, whereas the training group subjects are being constantly challenged to improve their performance over training, and are getting implicit feedback on their progress by the ramping-up of task demands, the controls have no such experience. Thus, the door is opened to differential placebo (or expectancy, or confidence) effects between the training and control groups.
As well, because the training tasks emphasized performance speed, whereas the control task did not, any subsequent transfer-of-training benefits to other speeded tasks (Stroop, Trails, DST) may reflect not only some real change to cognitive processing speed, but also (or instead) a nudging of subjects to a new point on the speed/accuracy continuum. Older adults are generally more conservative in their response thresholds in speeded choice tasks (see, e.g., Ratcliff's diffusion-modeling studies with older versus younger adults) and so practicing on a task that forces them to go faster and faster may subsequently give them license, as it were, to respond more quickly than they otherwise would on other tasks. In short, then, although the far-transfer effects found in the present study are "more impressive" in many ways than are the very-near-transfer effects, I'm still not convinced that they are as objectively "impressive" as you claim.
4) Yes, some of the neuropsychological far-transfer tasks used here are sometimes argued to reflect "executive functions." However, to the extent that such claims are warranted (and I'd suggest they are only for the Stroop and Trails tasks), they are only so when the dependent measure reflects the *difference* between the baseline condition (color naming of color-bars in Stroop; the connect-only-the-numbers page on Trails A) and the "experimental" condition (color naming of color-words in Stroop; the connect-numbers-and-letters-in-alternation page on Trails B). Because the training gains shown here span the baseline and experimental conditions, it indicates that the training gains are in overall response speed rather than in any executive processes.
RE: "Respectfully" disagree
Replied on 03 May 2013 at 19:09 GMT
I'm sorry for the "respectively" typo in point 3.
RE: RE: RE: A few comments/questions
Replied on 06 May 2013 at 14:32 GMT
Thank you for the additional comments and suggestions. My apologies for delaying my response, but I am on vacation, or at least am trying to be. As to your points...
1. It is the case that the results on the UFOV reflect near transfer, and in that we are in agreement. As to more generalized transfer, we see things a bit differently in terms of the strength of the evidence, which is a frequent occurrence in science.
2. As to the analysis restricted to those with at least an hour of training, the main point, as shown in the CONSORT flow chart, is that while the on-site with booster training and on-site attention control groups had minimal non-adherence (2.0% and 3.2% respectively), and the on-site without booster training had more non-adherence (9.8%), the at-home training group had 28.3% non-adherence (by the 1 hour criterion). Thus, the increased effect size for the at-home group after adjustment for non-adherence, and the lack of meaningful change in the other groups after adjustment for non-adherence, reflects differential adherence in the presence of effective training, does it not?
3. The selection of the right attention control group is always open to discussion, especially after the fact. I believe we agree that ours is an advancement over the no-contact control group used in ACTIVE. That said, we again see things a bit differently in terms of the strength of the evidence.
4. The suggestion for post-hoc analyses on differences between the "baseline and experimental" conditions is a good one, and it has been added to our "to do" list.
Again, I thank you for your careful consideration of our work, and your beneficial suggestions.