Last June I posted a blog entry
about training working memory, focusing on a study by Tom Redick and his colleagues, which concluded that training working memory might boost performance on whatever task was practiced, but would not improve fluid intelligence. (Measures of fluid intelligence are highly correlated with measures of working memory, and improving intelligence would be most people's purpose in undergoing working memory training.)

I recently received an email from
Martin Walker, of MindSparke.com, which offers brain training. Walker's polite email argued that the study is not ecologically valid: that is, the conclusions may be accurate for the conditions used in the study, but those conditions do not match the ones typically encountered outside the laboratory. Here's the critical text of his email, reprinted with his permission: "There is a significant problem with the design of the study that invalidates all of the hard work of the researchers--training frequency. The paper states that the average participant completed his or her training in 46 days. This is an average frequency of about 3 sessions per week. In our experience this frequency is insufficient. The original Jaeggi study enforced a training frequency of 5 days per week. We recommend at least 4 or 5 days per week.
With the participants taking an average of 46 days to complete the training, the majority of the participants did not train with sufficient frequency to achieve transfer. The standard deviation was 13.7 days which indicates that about 80% of the trainees trained less frequently than necessary. What’s more, the training load was further diluted by forcing each session to start at n=1 (for the first four sessions) or n=2, rather than starting where the trainee last left off."

I forwarded the email to Tom Redick, who replied: "Your comment about the frequency of training was something that, if not in the final version of the manuscript, was questioned during the review process. Perhaps it would’ve been better to have all subjects complete all 20 training sessions (plus the mid-test transfer session) within a shorter prescribed amount of time, which would have led to the frequency of training sessions being increased per week. Logistically, having subjects from off-campus come participate complicated matters, but we did that in an effort to ensure that our sample of young adults was broader in cognitive ability than other cognitive training studies that I’ve seen. This was particularly important given that our funding came from the Office of Naval Research – having all high-ability 18-22 year old Georgia Tech students would not be particularly informative for the application of dual n-back training to enlisted recruits in the Army and Marines.
However, I don’t really know of literature that indicates the frequency of training sessions is a moderating factor of the efficacy of cognitive training, especially in regard to dual n-back training. If you know of studies that indicate 4-5 days per week is more effective than 2-3 days week, I’d be interested in looking at it.
As mentioned in our article, the Anguera et al. (2012) article that did not include the matrix reasoning data reported in the technical report by Seidler et al. (2010) did not find transfer from dual n-back training to either BOMAT or RAPM [Bochumer Matrices Test and Raven's Advanced Progressive Matrices, both measures of fluid intelligence], despite the fact that “Participants came into the lab 4–5 days per week (average = 4.5 days) for approximately 25 min of training per session” (Anguera et al., 2012), for a minimum of 22 training sessions. In addition, Chooi and Thompson (2012) administered dual n-back to participants for either 8 or 20 days, and “Participants trained once a day (for about 30 min), four days a week”. They found no transfer to a battery of gF and gC tests, including RAPM.
In our data, I correlated the amount of dual n-back practice gain (using the same method as Jaeggi et al) during training and the number of days it took to finish all 20 practice sessions (and 1 mid-test session). I would never really trust a correlation of N = 24 subjects, but the correlation was r = -.05. I re-analyzed our data, looking only at those dual n-back and visual search training subjects that completed the 20 training and 1 mid-test session within 23-43 days, meaning they did an average of at least 3 sessions of training per week. For the 8 gF tasks (the only ones I analyzed), there was no hint of an interaction or pattern suggesting transfer from dual n-back."

To boil Redick's response down to a sentence: other studies have observed no impact on intelligence when using a training regimen closer to that advocated by Walker, and Redick finds no such effect in a follow-up analysis of his own data (although I'm betting he would acknowledge that the experiment was not designed to address this question, and so does not offer the most powerful means of addressing it).
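Redick's caution about a correlation from only 24 subjects is easy to quantify. As a rough illustration (my calculation, not anything from the paper), the standard Fisher-z 95% confidence interval around r = -.05 with N = 24 is enormous:

```python
import math

# Fisher-z confidence interval for a Pearson correlation.
# Illustrative calculation, not from the study itself.
def ci_r(r, n, z_crit=1.96):
    z = math.atanh(r)              # transform r to Fisher z
    se = 1 / math.sqrt(n - 3)      # standard error of z
    return tuple(math.tanh(z + k * z_crit * se) for k in (-1, 1))

lo, hi = ci_r(-0.05, 24)
print(round(lo, 2), round(hi, 2))  # → -0.44 0.36
```

An interval running from about -.44 to +.36 is consistent with a sizable effect in either direction, which is exactly why Redick says he wouldn't trust the point estimate.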
So it does not seem that training frequency is crucial. A final note: Walker commented in another email that customers of MindSparke consistently feel that the training helps, and Redick remarked that subjects in his experiments have the same impression. It just doesn't bear out in performance.
This is a negative finding, so I'll keep it brief.

How do kids acquire new vocabulary? This process is poorly understood. An influential theory has been that the phonological loop in working memory provides essential support. The phonological loop is like a little tape loop lasting perhaps two seconds; it allows you to keep active a sound you hear. The idea is that a new, unfamiliar word can be placed on the loop for practice and to keep it around while the surrounding context helps you figure out the meaning.

If so, you'd predict that the larger the capacity of the phonological loop, and the greater the fidelity with which it "records," the better children will be able to learn new vocabulary. The efficacy of the phonological loop is measured by having kids repeat nonsense words. Initially they are short--tozzy--but they increase in length to pose a greater challenge to the phonological loop--liddynappish. Several studies have shown correlations between phonological loop capacity and vocabulary size in children (for a review, see Melby-Lervag & Lervag, 2012).

The problem: it could be that having a big vocabulary makes the phonological loop test easier, because it makes it more likely that some of the nonsense words remind you of a word you already know. (And so you have the semantics of that word helping you remember the to-be-remembered word.)
Indeed, even proponents of the hypothesis argue that's what happens when kids get older. What you really need is a study that measures phonological loop capacity at time 1, and finds that it predicts vocabulary size at time 2. There is one such study (Gathercole et al, 1992), but it used a statistical analysis (cross-lagged correlation) that is now considered less than ideal.

A new study (Melby-Lervag et al, in press) used probably the best methodology of any to date. It was a longitudinal study that tested nonword repetition ability and vocabulary once each year between the ages of 3 and 7. They used a different statistical technique--simplex models--to assess causal relationships. They found that both nonword repetition and vocabulary show growth, both show stability across children, and both are moderately correlated, but there was no evidence that one influenced the growth of the other over time. The group then reanalyzed the Gathercole et al (1992) data and found the same pattern.

This is one depressing paper. Something we thought we knew--that the phonological loop contributes to vocabulary learning--may well be wrong.
If anyone is working on a remediation program for young children that centers on improving the working of the phonological loop, it's probably time to rethink that idea.
Gathercole, S. E., Willis, C., Emslie, H., & Baddeley, A. (1992). Phonological memory and vocabulary development during the early school years: A longitudinal study. Developmental Psychology, 28.
Melby-Lervåg, M., & Lervåg, A. (2012). Oral language skills moderate nonword repetition skills in children with dyslexia: A meta-analysis of the role of nonword repetition skills in dyslexia. Scientific Studies of Reading, 16.
Melby-Lervåg, M., Lervåg, A., Lyster, S.-A. H., Klem, M., Hagtvet, B., & Hulme, C. (in press). Nonword-repetition ability does not appear to be a causal influence on children's vocabulary development. Psychological Science.
A lot of data from the last couple of decades shows a strong association between executive functions (the ability to inhibit impulses, to direct attention, and to use working memory) and positive outcomes in school and out of school (see review here
). Kids with stronger executive functions get better grades, are more likely to thrive in their careers, are less likely to get in trouble with the law, and so forth. Although the relationship is correlational and not known to be causal, researchers have understandably wanted to know whether there is a way to boost executive function in kids.

Tools of the Mind (Bodrova & Leong, 2007) looked promising.
It's a full preschool curriculum consisting of some 60 activities, inspired by the work of psychologist Lev Vygotsky. Many of the activities call for the exercise of executive functions through play. For example, when engaged in dramatic pretend play, children must use working memory to keep in mind the roles of other characters and suppress impulses in order to maintain their own character identity. (See Diamond & Lee, 2011, for thoughts on how and why such activities might help students.)

A few studies of relatively modest scale (but not trivial--100-200 kids) indicated that Tools of the Mind has the intended effect (Barnett et al, 2008; Diamond et al, 2007). But now some much larger-scale followup studies (800-2,000 kids) have yielded discouraging results. These studies were reported at a symposium this spring at a meeting of the Society for Research on Educational Effectiveness. (You can download a pdf summary here.) Sarah Sparks covered this story for Ed Week when it happened in March, but it otherwise seemed to attract little notice. Researchers at the symposium reported the results of three studies. Tools of the Mind
did not have an impact in any of the three. What should we make of these discouraging results? It's too early to conclude that Tools of the Mind simply doesn't work as intended. It could be that there are as-yet unidentified differences among kids such that it's effective for some but not others. It may also be that the curriculum is more difficult to implement correctly than would first appear to be the case. Perhaps the teachers in the initial studies had more thorough training. Whatever the explanation, the results are not cheering. It looked like we might have been on to a big-impact intervention that everyone could get behind.
Now we are left with the dispiriting conclusion "More study is needed."
Barnett, W., Jung, K., Yarosz, D., Thomas, J., Hornbeck, A., Stechuk, R., & Burns, S. (2008). Educational effects of the Tools of the Mind curriculum: A randomized trial. Early Childhood Research Quarterly, 23, 299-313.
Bodrova, E. & Leong, D. (2007). Tools of the Mind: The Vygotskian approach to early childhood education. Second edition. New York: Merrill.
Diamond, A. & Lee, K. (2011). Interventions shown to aid executive function development in children 4-12 years old. Science, 333, 959-964.
Diamond, A., Barnett, W. S., Thomas, J., & Munro, S. (2007). Preschool program improves cognitive control. Science, 318.
A few months ago the New York Times
published an article
on the training of working memory titled "Can You Make Yourself Smarter?" I suggested that the conclusions of the article might be a little too sunny--I pointed out
that reviews of the literature by scientists suggested that having subjects practice working memory tasks (like the n-back task, shown below) led to improvement in the working memory task, but not in fluid intelligence.
I also pointed out that a significant limitation of many of these studies was the use of a single measure of intelligence. A new study solves that problem.

The study, by Thomas Redick and seven other researchers, offers a negative result--training doesn't help--which often is not considered news (link is 404 as I write this--I hope it will be back up soon). There are lots of ways of screwing up a study, most of which would lead to null results. But this null result ended up published in the excellent Journal of Experimental Psychology: General because the study is so well-designed.
Researchers gave the training plenty of opportunity to have an impact--subjects underwent 20 sessions. There were enough subjects (N=75) to afford decent statistical power to detect an effect, were one present. Researchers used a placebo control group (visual search) as well as a no-contact control group. They used multiple measures of fluid intelligence, crystallized intelligence, multi-tasking, and perceptual speed. These measures were administered before, during, and after training.
The results: people got better at what they practiced--either n-back or visual search--but there was no transfer to any other task, as shown in the Table (click for larger version).
One study is never fully conclusive on any issue. But given the previous uneven findings of the effects, this study represents another piece of the emerging picture: either fluid intelligence is trainable only in some specialized yet-to-be-defined circumstances, or it's not possible to make a substantial improvement in fluid intelligence through training at all.
These results make me skeptical of commercial programs offering to improve general cognitive processing.
Redick, T. S., Shipstead, Z., Harrison, T. L., Hicks, K. L., Fried, D. E., Hambrick, D. Z., Kane, M. J., & Engle, R. W. (in press). No evidence of intelligence improvement after working memory training: A randomized, placebo-controlled study. Journal of Experimental Psychology: General.
Should kids be allowed to chew gum in class? If a student said "But it helps me concentrate...," should we be convinced? If it provides a boost, it's short-lived. It's pretty well established that a burst of glucose provides a brief cognitive boost (see review here), so the question is whether chewing gum in particular provides any edge over and above that--that is, whether a benefit would be observed even with sugar-free gum.
One study (Wilkinson et al., 2002
) compared gum-chewing to no-chewing (and "sham chewing," in which subjects were to pretend to chew gum, which seems awkward). Subjects performed about a dozen tasks, including tests of vigilance (i.e., sustaining attention) and of short-term and long-term memory.
Researchers reported some positive effect of gum-chewing for four of the tests. It's a little hard to tell from the brief write-up, but it appears that the investigators didn't correct their statistics for the multiple tests.
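Why does the lack of correction matter? A quick simulation (mine, not the authors'; pure-stdlib Python) shows that with roughly a dozen independent tests and no real effect at all, at least one "significant" result at p < .05 turns up almost half the time:

```python
import random

# Monte Carlo sketch: run many simulated experiments, each consisting
# of 12 independent significance tests under the null hypothesis
# (where each p-value is uniform on [0, 1]), and count how often at
# least one test is "significant" at the .05 level.
random.seed(0)
trials = 10_000
n_tests = 12
false_alarm_runs = 0
for _ in range(trials):
    if any(random.random() < 0.05 for _ in range(n_tests)):
        false_alarm_runs += 1

# Analytically this is 1 - 0.95**12, about 0.46
print(false_alarm_runs / trials)
```

So even if gum did nothing whatsoever, a dozen uncorrected tests would hand the researchers a few "effects" nearly every other study.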
This throw-everything-at-the-wall-and-see-what-sticks approach may be characteristic of this research. Another study (Smith, 2010
) took that same approach and concluded that there were some positive effects of gum chewing for some of the tasks, especially for feelings of alertness. (This study did not use sugar-free gum so it's hard to tell whether the effect is due to the gum or the glucose.)
A more recent study (Kozlov, Hughes & Jones, 2012
), using a more standard short-term memory paradigm, found no benefit for gum chewing.
What are we to make of this grab-bag of results? (And please note this blog does not offer an exhaustive review.)
A recent paper (Onyper, Carr, Farrar & Floyd, 2011
) offers a plausible resolution. They suggest that the act of mastication offers a brief--perhaps ten- or twenty-minute--boost to cognitive function due to increased arousal. So we might see a benefit (or not) from gum chewing depending on the timing of the chewing relative to the timing of the cognitive tasks.
The upshot: teachers might allow or disallow gum chewing in their classrooms for a variety of reasons, but there is not much evidence that it confers a significant cognitive advantage.

EDIT: Someone emailed to ask whether kids with ADHD benefit. The one study I know of reported a cost to vigilance with gum-chewing for kids with ADHD.
Kozlov, M. D., Hughes, R. W. & Jones, D. M. (2012). Gummed-up memory: Chewing gum impairs short-term recall. Quarterly Journal of Experimental Psychology, 65, 501-513.
Onyper, S. V., Carr, T. L., Farrar, J. S. & Floyd, B. R. (2011). Cognitive advantages of chewing gum. Now you see them, now you don't. Appetite, 57, 321-328.
Smith, A. (2010). Effects of chewing gum on cognitive function, mood and physiology in stressed and unstressed volunteers. Nutritional Neuroscience, 13, 7-16.
Wilkinson, L., Scholey, A., & Wesnes, K. (2002). Chewing gum selectively improves aspects of memory in healthy volunteers. Appetite, 38, 235-236.
The New York Times Magazine has an article
on working memory training and the possibility that it boosts one type of intelligence. I think the article is a bit--but only a bit--too optimistic in its presentation.

The article correctly points out that a number of labs have replicated the basic finding: training with one or another working memory task leads to increases in standard measures of fluid intelligence, most notably Raven's Progressive Matrices.
Working memory is often trained with an N-back task, shown in the figure at left from the NY Times article. You're presented with a series of stimuli; e.g., you're hearing letters. You press a button if a stimulus is the same as the one before (N=1), or the time before last (N=2), or the time before that (N=3). You start with N=1, and N increases if you are successful. (Larger N makes the task harder.) To make it much harder, researchers can add a second stream of stimuli (e.g., the colored squares shown at left) and ask you to monitor BOTH streams in an N-back task. That is the training task that you are to practice. (And although the figure calls it a "game," it's missing one usual feature of a game; it's no fun at all.)

There are two categories of outcome measures taken after training. In a near-transfer task, subjects are given some other measure of working memory
to see if their capacity has increased. In a far-transfer
task, a task is administered that isn't itself a test of working memory, but of a process that we think depends on working memory capacity. All the excitement has been about far-transfer measures, namely that this training boosts intelligence, about which more in a moment. But it's actually pretty surprising and interesting that labs are reporting near-transfer. That's a novel finding, and contradicts a lot of work that's come before, showing that working memory training tends to benefit only the particular working memory task used during training, and doesn't even transfer to other working memory tasks.
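For concreteness, the core of the training task can be sketched in a few lines of code (a hypothetical illustration, not the software any of these labs actually used):

```python
# Single-stream n-back scoring: the subject should respond whenever
# the current item matches the item presented n steps earlier.
# Hypothetical sketch, not the actual training program.
def nback_targets(stream, n):
    """Return the indices at which the subject should respond."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

letters = list("ABAACAAC")
print(nback_targets(letters, 1))  # → [3, 6]
print(nback_targets(letters, 2))  # → [2, 5]
```

The dual version runs two such streams at once (letters plus square positions), and the adaptive program raises N after successful blocks and lowers it after failures.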
The far-transfer claim has been that the working memory training boosts fluid intelligence. Fluid intelligence
is one's ability to reason, see patterns, and think logically, independent of specific experience. Crystallized intelligence, in contrast, is stuff that you know--knowledge that comes from prior experience. You can see why greater working memory capacity might lead to more fluid intelligence--you've got a greater workspace in which to manipulate ideas.

A standard measure of fluid intelligence is the Raven's Progressive Matrices task, in which you see a pattern of figures, and you are to say which of several choices would complete the pattern, as shown below.
So, is this finding legit? Should you buy an N-back training program for your kids? I'd say the jury is still out.
The article quotes Randy Engle--a highly regarded working memory researcher--on the subject, and he can hardly conceal his scorn: “May I remind you of ‘cold fusion’?”
Engle--who is not
one of those scientists who has made a career out of criticizing others--has a lengthy review of the working memory training literature, which you can read here.
Another recent review
(which was invited for the journal Brain & Cognition
) concluded "Sparse evidence coupled with lack of scientific rigor, however, leaves claims concerning the impact and duration of such brain training largely unsubstantiated. On the other hand, at least some scientific findings seem to support the effectiveness and sustainability of training for higher brain functions such as attention and working memory."
My own take is pretty close to that conclusion.
There are enough replications of this basic effect that it seems probable that something
is going on. The most telling criticism of this literature is that the outcome measure is often a single task. You can't use a single task like the Raven's
and then declare that fluid intelligence has increased because NO task is a pure measure of fluid intelligence. There are always going to be other factors that contribute to task performance.
The best measure of an abstract construct like "fluid intelligence" is one that uses several measures of what look to be quite different tasks, but which you have reason to think all call on fluid intelligence. Then you use statistical methods to look for shared variance among the tasks.
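As a crude sketch of what "shared variance" means here (invented scores, and a simple z-score composite standing in for the latent-variable models researchers actually fit):

```python
import statistics

# Invented scores for five subjects on three ostensibly different
# reasoning tasks. The variance the tasks share is taken as the best
# estimate of fluid intelligence; real studies fit latent-variable
# models, but averaging z-scores is the simplest stand-in.
scores = {
    "ravens":      [12, 15, 9, 18, 14],
    "letter_sets": [20, 24, 15, 27, 22],
    "paper_fold":  [5, 7, 4, 9, 6],
}

def zscores(xs):
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

z = {task: zscores(xs) for task, xs in scores.items()}
n_subjects = 5
composite = [statistics.mean(z[t][i] for t in z) for i in range(n_subjects)]
print([round(c, 2) for c in composite])
```

A subject who scores high on all three tasks gets a high composite; an idiosyncratic strength on a single task largely washes out, which is the point of using several dissimilar measures.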
So what we'd really like to see is better performance after working memory training on several such tasks. The fact is that in many of these studies, researchers have tried to show transfer to more than one task, and the training transfers to one but not the other. Here's a table from a 2010 review by Torkel Klingberg showing this pattern. (Click the image to see a larger version.)
This work is really just getting going, and the inconsistency of the findings means one of two things. Either the training regimens need to be refined, whereupon we'll see the transfer effects more consistently, OR the benefits we've seen thus far were mostly artifactual--a consequence of uninteresting quirks in the designs of the studies or the tasks.
My guess is that the truth lies somewhere between these two--there's something here, but less than many people are hoping. But it's too early to say with much confidence.