Neuroscience--especially human neuroscience, and more especially human functional brain imaging--has had quite a run in the last twenty years. In the first decade the advances were known mostly to scientists. In the last ten years there have been plenty of articles in the popular press featuring brain images. Many of these articles have been breathless and silly. Some backlash was inevitable, and one of the more potent examples was a recent op-ed in the New York Times.
Still, as Gary Marcus pointed out in a nice blog piece, we would be wise not to throw the baby out with the bath water.
In that vein, I am following up on a piece I wrote last week, in which I argued that much of the work on this topic in education is neuro-garbage. Most of the piece was devoted to explaining why it's difficult to apply neuroscience to education. (I left it to the reader to infer that it's correspondingly easy to be glib.)
Toward the end of that piece I suggested that neuroscience can be, and has been, usefully applied to problems in education. This week I'll describe how, tackling one method each day. I'll keep things as simple as possible, but fasten your seatbelt if you feel the need.
Neuroscience can give researchers clues about the basic architecture of a cognitive process. It can show that a cognitive process might be more complex than we would otherwise have guessed, or simpler.
Consider the figure below from Dehaene et al. (2003).
This figure summarizes a great deal of work indicating that there are three representations of number in the brain: a core quantity system (red), numbers in verbal form (green), and attentional orientation on the number line.
Suppose I am an educational psychologist, trying to figure out how children develop concepts of number, and how to coordinate the teaching of early mathematics with these concepts. I must have a theory of how number is represented in the mind. It's possible--actually, it's likely--that I would think of number as one thing, that children have one concept of the number five, for example. But this neuroscientific work indicates that the brain might use three representations of number. So it might be wise for me to use three representations in my cognitive theory of mathematics (which will support my educational theory).
In this example, there is greater diversity (three representations) where we might have guessed that we'd see simplicity (one representation). The opposite may also happen.
In one example, neuroscientific data were useful in interpreting variations in dyslexia across languages.
One of the peculiarities of dyslexia is that some key symptoms vary across different languages. For example, people with dyslexia usually show a large disparity between visual word recognition and IQ. But that disparity tends to be much larger in languages in which the spelling-sound correspondence is often inconsistent (e.g., English) than in languages where it's more consistent (e.g., Italian).
This pattern raises the question: is what we're calling "dyslexia" really the same thing in English and Italian? Maybe reading difficulties are so intertwined with the language you're learning to read that it doesn't make sense to call problems by the same label when they apply to English vs. Italian. Or maybe the problems kids develop in English-speaking vs. Italian-speaking countries are due to differences in the way reading tends to be taught in different countries.
Eraldo Paulesu and his colleagues (2001) used brain imaging data to argue that dyslexia is the same disorder in readers of different languages. They showed that the same brain region in left temporal cortex shows reduced activation during reading in French, Italian, and British readers who have been diagnosed with dyslexia.
Hence in this case neuroscientific data have shown us that there is simplicity (one reading problem) where we could reasonably have thought there was greater diversity (different reading problems across languages).
EDIT: It's worth adding that anatomic separability (or overlap) doesn't guarantee cognitive separability or identity. But it's an indicator.
Tomorrow: Method 2.
Dehaene, S., Piazza, M., Pinel, P., & Cohen, L. (2003). Three parietal circuits for number processing. Cognitive Neuropsychology, 20, 487-506.
Paulesu, E., Demonet, J.-F., Fazio, F., McCrory, E., Chanoine, V., Brunswick, N., Cappa, S. F., Cossu, G., Habib, M., Frith, C. D., & Frith, U. (2001). Dyslexia: Cultural diversity and biological unity. Science, 291, 2165-2167.
Neuroscience reporting: unimpressive.
An article in the New York Times reported on some backlash against inaccurate reporting on neuroscience. (It included name-checks for some terrific blogs, including Neurocritic, Mind Hacks, and Dorothy Bishop's Blog.) The headline ("Neuroscience: Under Attack") was inaccurate, but the issue raised is important; there is some sloppy reporting and writing on neuroscience.
How does education fare in this regard?
There is definitely a lot of neuro-garbage in the education market. Sometimes it's the use of accurate but ultimately pointless neuro-talk that's mere window dressing for something that teachers already know (e.g., explaining the neural consequences of exercise to persuade teachers that recess is a good idea for third-graders). Other times the neuroscience is simply inaccurate (exaggerations regarding the differences between the left and right hemispheres, for example).
You may have thought I was going to mention learning styles. Well, learning styles is not a neuroscientific claim; it's a claim about the mind. But it's often presented as a brain claim, and that error is perhaps the most instructive. You see, people who want to talk to teachers about neuroscience will often present behavioral findings (e.g., the spacing effect) as though they are neuroscientific findings.
What's the difference, and who cares? Why does it matter whether the science that leads to a useful classroom application is neuroscience or behavioral?
It matters because it gets to the heart of how and when neuroscience can be applied to educational practice. And when a writer doesn't seem to understand these issues, I get anxious that he or she is simply blowing smoke.
Let's start with behavior. Applying findings from the laboratory is not straightforward. Why? Consider this question. Would a terrific math tutor who has never been in a classroom before be a good teacher? Well, maybe. But we recognize that tutoring one-on-one is not the same thing as teaching a class. Kids interact, and that leads to new issues, new problems. Similarly, a great classroom teacher won't necessarily be a great principal.
This problem--that collections don't behave the same way as individuals--is pervasive.
Similarly, knowing something about a cognitive process--memory, say--is useful, but it's not guaranteed to translate "upwards" the way you expect. Just as children interact, making the classroom more than a collection of kids, so too cognitive processes interact, making a child's mind more than a collection of cognitive processes. That's why we can't take lab findings and pop them right into the classroom. To use my favorite painfully obvious example, lab findings consistently show that repetition is good for memory. But you can't mindlessly implement that in schools--"keep repeating this til you've got it, kids." Repetition is good for memory, but terrible for motivation.
I've called this the vertical problem (Willingham, 2009). You can't assume that a finding at one level will work well at another level. When we add neuroscience, there's a second problem. It's easiest to appreciate it this way. Consider that in schools, the outcomes we care about are behavioral: reading, analyzing, calculating, remembering. These are the ways we know the child is getting something from schooling. At the end of the day, we don't really care what her hippocampus is doing, so long as these behavioral landmarks are in place. Likewise, most of the things that we can change are behavioral. We're not going to plant electrodes in the child's brain to get her to learn--we're going to change her environment and encourage certain behaviors. A notable exception is when we suspect that there is a pharmacological imbalance, and we try to use medication to restore it. But mostly, what we do is behavioral and what we hope to see is behavioral. Neuroscience is outside the loop.
For neuroscience to be useful in the classroom, we've got to translate from the behavioral side to the neural side and then back again. I've called this the horizontal problem. The translation to use neuroscience in education can be done--it has been done--but it isn't easy. (I wrote about four techniques for doing it here; Willingham & Lloyd, 2007.)
Now, let's return to the question we started with: does it matter if laboratory findings about behavior are presented as brain claims? I'm arguing that it matters because it shows a misunderstanding of the relationship of mind, brain, and educational applications. As we've seen, behavioral science and neuroscience face different problems in application. Both face the vertical problem. The horizontal problem is particular to neuroscience. When people don't seem to appreciate the difference, that indicates sloppy thinking. Sloppy thinking is a good indicator of bad advice to educators. Bad advice means that neurophilia will become another flash in the pan, another fad of the moment in education, and in ten years' time policymakers (and funders) will say "Oh yeah, we tried that."
Neuroscience deserves better. With patience, it can add to our collective wisdom on education. At the moment, however, neuro-garbage is ascendant in education.

EDIT: I thought it was worth elaborating on the methods whereby neuroscientific data CAN be used to improve education; see the follow-up posts (Method 1 through Method 5, and Conclusions).

Willingham, D. T. (2009). Three problems in the marriage of neuroscience and education. Cortex, 45, 544-545.

Willingham, D. T., & Lloyd, J. W. (2007). How educational theories can use neuroscientific data. Mind, Brain, & Education.
Michael Gove, Secretary of State for Education in Great Britain, delivered a speech on education policy last week called "In Praise of Tests" (text here), in which he argued for "regular, demanding, rigorous examinations." The reasons offered included arguments invoking scientific evidence, and he cited my work as an example of such evidence. That invites the question: does Willingham think that the scientific evidence supports testing, as Gove suggested?

This question really has two parts. Did Gove get the science right? And did he apply it in a way that is likely to work as he expects? The answer to the first question is straightforward: yes, he got the science right.
The answer to the second question is that I agree that testing is necessary, but I have a different take on the scientific backing for this claim than Gove offered.

First, the science. Gove made three scientific claims. The first is that people enjoy mental activity that is successful--it's fun to solve challenging problems. Much of the first chapter of Why Don't Students Like School? is devoted to this idea, but it's a commonplace observation; that's why people enjoy problem-solving hobbies like crossword puzzles or reading mystery novels. Second, Gove claimed that background knowledge is critical for higher thought, a topic I've written about in several places (e.g., here).
The only quibble I have with Gove on this topic is when he says "Memorisation is a necessary precondition of understanding." I'd have preferred "knowledge" to "memorisation," because the latter makes it sound as though one must sit down and willfully commit information to memory. This is a poor way to learn new information--it's much more desirable that the to-be-learned material be embedded in some interesting activity, so that the student will be likely to remember it as a matter of course. It's plain that Gove agrees with me on this point, because he emphasized that exam preparation should not mean a dull drilling of facts, but rather should happen through "entertaining narratives in history, striking practical work in science and unveiling hidden patterns in maths." I think the word "memorisation" may be what led the Guardian to use a headline suggesting Gove was advocating rote learning.

Third, Gove argued that people (teachers and others) are biased in their evaluations of students, based on the student's race, ethnicity, gender, or other features that have nothing to do with the student's actual performance. A number of studies from the last forty years show that this danger is real.
So on the science, I think Gove is on firm ground. What of the policy he's advocating?

I lack expertise in policy matters, and I've argued on this blog that the world of education might be less chaotic if each of us stuck a little closer to the home territory of what we know. Worse yet, I know little about the British education system or about Gove's larger policy plans. With those caveats in place, I'll tread on Gove's territory and offer these thoughts on policy.

It's true that successful thought brings pleasure. The sort of effort I (and others) meant was the solving of a cognitive problem. Gove offers the example of a singer finishing an aria or a craftsman finishing an artefact. These works of creative productivity likely would bring the sort of pleasure I discussed. It's less certain that passing an examination would be "successful thought" in this sense.
Why? Because exams seldom call for the creative deployment of knowledge. Instead, they call for the straightforward recall of knowledge. That's because it's very difficult to write exams that call for creative responses, yet are psychometrically reliable and valid.

There is a second manner in which achievement can bring pleasure; I haven't written about it, but I think it's the one Gove may have in mind. It's the pleasure of overcoming a formidable obstacle that you were not sure you could surmount. I agree that passing a difficult test could be a profound experience. Some children really don't see themselves as students. They have self-confidence, but it comes from knowing that they are effective in other activities. Passing a challenging exam might prompt a child who never really thought of himself as "a student" to recognize that he's every bit as able as other children, and that might redirect the remainder of his school experience, even his life.
But there are some obvious difficulties in reaching this goal. How do we motivate the student to work hard enough to actually pass the difficult test? The challenge of the exam is unlikely to do it--the child is much more likely to conclude that he can't possibly pass, so there is no point in trying.
The clear solution is to engage creative teachers who have the skill to work with students who begin school poorly prepared and who may come from homes where education is not a priority. But motivation was the problem we began with, the one we hoped to address. It seems to me that the motivational boost we get from kids passing a tough exam might be a good outcome of successfully motivating kids. It's not clear to me that it will motivate them.
My second concern with Gove's vision of testing is how teachers will believe they should best prepare kids for a difficult exam that demands a lot of factual recall. Gove is exactly right when he argues that teachers ought not to construe this as a call for rote learning of lists of facts, but rather should ensure that rich factual content is embedded in rich learning activities. My concern is that some British teachers--in particular, the ones whose performance Gove hopes to boost--won't listen to him.

I say that because of the experience in the US with the No Child Left Behind Act. In the face of mandatory testing for students, some teachers kept doing what they had been doing, which is exactly what Gove suggests: rich content interwoven with a demand for critical thinking, delivered in a way that motivates kids. These teachers were unfazed by the test, certain that their students would pass. Other teachers changed lesson plans to emphasize factual knowledge and focused activities on test prep. I've never met a teacher who was happy about this change. Teachers emphasized facts at the expense of all else and engaged in disheartening test prep because they thought it was necessary: (1) they were uncertain that their old lesson plans would leave kids with the factual knowledge base to pass the test; or (2) they thought that their students entered the class so far behind that extreme measures were necessary to get them to the point of passing; or (3) they thought that the test was narrow or poorly designed and would not capture the learning that their old lesson plans brought to kids; or (4) some combination of these factors. So pointing out that exam prep and memorization of facts are bad practice will probably not be enough.
Despite these difficulties, I think some plan of testing is necessary. Gove puts it this way: "Exams help those who need support to better know what support they need." A cognitive psychologist would say "learning is not possible without feedback." That learning might be an individual student mastering a subject, OR a teacher evaluating whether his students learned more from a new set of lesson plans he devised compared to last year, OR whether students at a school are learning more with block scheduling compared to their old schedule. In each case, you want to be confident that the feedback is valid, reliable, and unbiased. And if social psychology has taught us anything in the last fifty years, it's that people will believe their informal judgments are valid, reliable, and unbiased, whether they are or not.

There's more to the speech and I encourage you to read all of it. Here I've commented only on some of the centerpiece scientific claims in it. Again, I emphasize that I don't know British education and I don't know Gove's plans in their entirety, so what I've written here may be inaccurate because it lacks broader context. I can confidently say this: hard as it is, good science is easier than good policy.
There is a lot of talk these days about STEM--science, technology, engineering, and math--and the teachers of STEM subjects. It would seem self-evident that these teachers, given their skill set, would be in demand in business and industry, and thus would be harder to keep in the classroom.

A new study (Ingersoll & May, 2012) offers some surprising data on this issue.
Using the national Schools and Staffing Survey and the Teacher Follow-Up Survey, the authors found that science and math teachers have NOT left the field at higher rates than other teachers. In this data set (1988-2005), math and science teachers left teaching at about the same rate as teachers of other subjects: about 6% each year.
Furthermore, when these teachers do leave a school, they are no more likely to take a non-education job than other teachers: about 8% of "leavers" took another job outside of education. Much more common reasons to leave the classroom were retirement (about 15%) or an education job other than teaching (about 17%).
The authors argue that teacher turnover, not teachers leaving the field, is the engine behind staffing problems for math and science classes.
So what prompts teacher turnover?
The authors argue that on this dimension math and science teachers differ. Both groups are, unsurprisingly, motivated by better working conditions and higher salaries, but working conditions matter more to math teachers, while salaries matter more to science teachers.
But in both cases, the result is that math and science teachers tend to leave schools with large percentages of low-income kids in order to move to schools with wealthier kids.
Ingersoll, R. M., & May, H. (2012). The magnitude, destinations, and determinants of mathematics and science teacher turnover. Educational Evaluation and Policy Analysis, 34, 435-464.
Every teacher wants his or her students to be honest. It's not just a question of fairness; it's a life lesson. The challenge is that people seem to have few qualms about cheating--so long as the cheating is relatively slight: peeking at just an answer or two on a neighbor's quiz, for example. People want to maintain their self-concept as an honest person, and small infractions allow them to think of themselves as "basically honest" while raking in the easy profit that dishonesty can afford (Mazar, Amir, & Ariely, 2008).

How can we encourage students to be more honest? Christopher Bryan and his colleagues (Bryan, Adams, & Monin, 2012) had a clever approach to this problem. In talking to people about the subject, they referred either to "cheating" or to "being a cheater." Note that the latter term makes cheating part of one's identity. If people are ready to cheat because they are able to maintain their positive self-image as a basically honest person, then reminding them that one who cheats is, in fact, a cheater ought to make it harder to tell oneself that lie. The test was simple.
An experimenter approached people on the campus of Stanford University and said: "We're interested in how common [cheating is/cheaters are] on college campuses. We're going to play a game in which we will be able to determine the approximate [rate of cheating/number of cheaters] in the group as a whole, but it will be impossible for us to know whether you're [cheating/a cheater]."
Subjects were asked to pick a number from 1 to 10, and then were told that if they had picked an even number they would receive $5, but if the number were odd, they would receive nothing.
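The design lets the researchers estimate cheating in aggregate even though no individual can be identified: cheaters inflate the reported even-number rate above the honest baseline. Here is a minimal sketch of that inference; both rates in the example are invented placeholders, not figures from the study.

```python
# Toy sketch of the aggregate inference behind the design.
# ASSUMPTION: the honest even-pick base rate (0.30, reflecting the
# bias toward odd numbers) is invented for illustration.

def estimated_cheater_fraction(reported_even_rate, honest_even_rate):
    # Observed rate = honest_even_rate + cheaters * (1 - honest_even_rate),
    # assuming every cheater claims an even number. Solve for cheaters:
    return (reported_even_rate - honest_even_rate) / (1 - honest_even_rate)

# With a (hypothetical) 44% reported even rate and 30% honest baseline:
print(estimated_cheater_fraction(0.44, 0.30))  # ~0.20 of participants cheated
```

The point of the arithmetic is simply that the gap between the reported rate and the honest baseline, rescaled, estimates the fraction of cheaters, even though any single "even" report might be honest.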
When the experimenter used the word "cheater," 21% of subjects reported having picked an even number, but when "cheating" was used, 50% did. (Other research has shown that there is a strong bias to pick odd numbers in the task; that's why the rates are so low.) Two further experiments replicated the effect.

Could teachers make use of this finding? The experiment was not, of course, conducted with K-12 students in an academic setting. But I suspect that the basic manipulation--subtly confronting the individual with the fact that even minor infractions do say something about his or her character--ought to work the same way with students in middle or high school. That said, it's worth pointing out that other data from Dan Ariely show that a reminder of the positive aspect of a person's moral spectrum also helps. In one well-known experiment (Mazar & Ariely, 2006), asking subjects to name the Ten Commandments made them less likely to cheat. The interpretation is that recalling the commandments made people reflect on their moral values.

In short, the ideal is to remind people of their best side, their good intentions, and then remind them that cheating--sorry, being a cheater--is not compatible with their image of themselves.
Bryan, C. J., Adams, G. S., & Monin, B. (in press). When cheating would make you a cheater: Implicating the self prevents unethical behavior. Journal of Experimental Psychology: General.
Mazar, N., & Ariely, D. (2006). Dishonesty in everyday life and its policy implications. Journal of Public Policy & Marketing, 117-126.

Mazar, N., Amir, O., & Ariely, D. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research.
Is technology changing how students learn, that is, the workings of the brain? An article in today's New York Times reports that most teachers think the answer is "yes," and that this development is not positive. The article reports the results of two surveys of teachers, one conducted by the Pew Internet Project and the other by Common Sense Media. Both report that teachers believe that students' use of digital technology adversely affects their attention spans and makes them less likely to stick with challenging tasks. In interviews, many teachers report feeling that they have to work harder than they used to in order to keep students engaged.
As the article notes, there have not been any long-term studies that show whether student attention span has been affected by digital media. Still, a lot of psychologists are skeptical that digital media are likely to change the fundamentals of human cognition. Steven Pinker has written, "Electronic media aren't going to revamp the brain's mechanisms of information processing." I made the same argument here. The basic architecture is likely to be relatively fixed, and in the absence of extreme deprivation, will develop fairly predictably. Sure, it is shaped by experience, but those changes will just tune what's already there to experience--it might change the dimensions of the rooms, without altering the fundamental floor plan, so to speak.

Does that view conflict with teachers' impressions? Not necessarily. When we talk about a student's attention span, I suspect we're really talking about a particular type of attention. It's not their overall ability to pay attention: kids today can, I think, get lost for hours in a movie or a book or a game just as readily as their parents did. Rather, the seemingly shorter attention span is their ability to maintain attention on a task that is not very interesting to them.
But even within that situation, I suspect that there are two factors at work: one is the raw capacity to direct one's attention; the second is the willingness to do so. I doubt that technology affects the first, but I'm ready to believe that it affects the second. Directing attention--forcing yourself to think about something you'd rather not think about--is effortful, even mildly aversive. Why would you do it? There are lots of possible reasons. Among them would be previous experiences leading you to believe that such sustained attention leads to a payoff. In other words, if you've grown up in circumstances where very little effort usually led to something that was stimulating and interesting, then you likely have an expectation that that's the nature of the world: I do just a little something, and I get a big payoff. (And the payoff is probably immediate.)

The process by which children learn to expect a lot of cool stuff to happen based on minimal effort may start early. When a toddler is given a toy that puts on a dazzling display of light and sound when a button is pushed, we might be teaching him this lesson. In contrast, the toddler who gets a set of blocks has to put a heck of a lot more effort (and sustained attention) into getting the toy to do something interesting--build a tower, for example, that she can send crashing down.

It's hard for me to believe that something as fundamental to cognition as the ability to pay attention can be moved around a whole lot. It's much easier for me to accept that one's beliefs--beliefs about what is worthy of my attention, beliefs about how much effort I should dispense to tasks--can be moved around, because beliefs are a product of experience. I actually think that much of what I've written here was implicit in some of the teachers' comments--the emphasis on immediacy, for example--but it's worth making it explicit.
A new survey of American reading habits was published earlier this week. Much of the news coverage led with the somewhat surprising finding that young people (age 16-29), supposedly enamored of gaming and video content, reported that they read and use libraries--in fact, that they do so more than older people.

New York Times blog: "Young people frequent libraries, study says." Christian Science Monitor: "Millennials: A rising generation of book lovers." NPR (Boston): "Facebook generation is reading strong."

Sexy stuff, but I think it's misleading. One message is that young people are reading "a lot." What constitutes "a lot" is a judgement call, obviously, but in this study the data showed that 83% of 18-29 year-olds had read a book sometime in the previous year. That strikes me as a low bar to be considered "a reader." Other data show that Americans spend much more time watching television each day than they do reading. This chart is from the Bureau of Labor Statistics.

Those data include Americans of all ages. If we look at younger Americans, the picture looks more or less the same: not a lot of reading. The figure below shows leisure time activities, separated by sex.
The second way in which the coverage of the Pew study was deceptive lay in the reported age difference. Yes, young people were more likely than older people to report having read a book in the past year, but that difference was very likely due to the fact that many of them were students, doing required reading.
The study did report these data separately, shown below.
By the sometime-in-the-last year measure, older and younger Americans are about the same, except insofar as they are required to read for work or school.
Likewise, the increased use of libraries by young respondents is likely mediated by their need to use libraries for schoolwork.
There have been many reports of American reading habits in the last fifty years, and especially in the last twenty. The overall picture is that reading dropped when television became widely available, and hasn't changed much since then.
Does going to school actually make you smarter (at least, as measured by standard cognitive ability tests)? Answering this question is harder than it would first appear because schooling is confounded with many other variables.
Yes, kids' cognitive abilities improve the longer they have been in school, but it's certainly plausible that better cognitive abilities make it more probable that you'll stay in school longer. And schooling is also confounded with age--kids who have been in school longer are also older, and therefore have had more life experiences, and perhaps those have prompted the increases in intelligence.
One strategy is to test every child on his or her birthday. That way, everyone has had the same opportunity for life experiences, but the student with a birthday in May has had four months more schooling than the child with a January birthday. That solves some problems, but it entails other assumptions; older children within a grade might experience fewer social problems, for example.
A new paper (Carlsson, Dahl, & Rooth, 2012) takes a different approach to this difficult problem.
The authors capitalized on the fact that every male in Sweden must take a battery of cognitive tests for military service. The testing occurs near his 18th birthday, but the precise date is assigned more or less randomly (constrained by logistical factors for the military testers). So the authors could statistically control for the time-of-year effect of the birthday and in addition investigate the effects of just a few days more (or less) of schooling. The researchers were able to access a database of all the males tested between 1980 and 1994.
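The logic of this identification strategy can be illustrated with a toy simulation (mine, not the paper's actual estimation): because test dates are assigned roughly at random, age at test and days of schooling vary independently, so a regression can pull their effects apart. All of the coefficients below are invented for illustration.

```python
# Toy simulation of the identification strategy: random test dates make
# age and schooling vary independently, so regression separates them.
# All "true" effect sizes here are invented, not the paper's estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

age_days = rng.normal(0, 30, n)        # age at test, in days from the 18th birthday
school_days = rng.integers(0, 30, n)   # randomly assigned extra school days

# Invented data-generating model: schooling helps a lot (0.10 points/day),
# mere aging helps only a little (0.02 points/day), plus noise.
score = 0.02 * age_days + 0.10 * school_days + rng.normal(0, 5, n)

# Ordinary least squares with both predictors recovers each effect.
X = np.column_stack([np.ones(n), age_days, school_days])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
print(coef[1], coef[2])  # close to 0.02 (age) and 0.10 (schooling)
```

If test dates were not random--say, better students were tested later--age and schooling would be correlated with ability, and no regression on these two predictors alone could cleanly separate the effects; the random assignment is what does the work.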
Students took four tests. Two (one of word meanings and one of reading technical prose) tap crystallized intelligence (i.e., what you know). Two others (spatial reasoning and logic) tap fluid intelligence (i.e., reasoning that is not dependent on particular knowledge).
The authors found that older students scored better on all four tests--no surprise there. What about students who were the same age, but who, because of the vagaries of the testing, happened to have had a few days more or fewer of schooling?
More schooling was associated with better performance, but only on the crystallized intelligence tests: an extra 10 days in school improved scores by about 1% of a standard deviation. Extra non-school days had no effect.
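To put 1% of a standard deviation per 10 days in context, it can be scaled up. A back-of-envelope sketch; the 180-day school year and the IQ-style standard deviation of 15 points are my assumptions, not figures from the paper:

```python
# Back-of-envelope scaling of the crystallized-intelligence effect.
# ASSUMPTIONS (mine, not the paper's): a 180-day school year and an
# IQ-style score scale with a standard deviation of 15 points.
effect_per_10_days = 0.01                              # in standard-deviation units
sd_per_school_year = effect_per_10_days * (180 / 10)   # 0.18 SD per school year
points_per_school_year = sd_per_school_year * 15       # ~2.7 IQ-style points
print(sd_per_school_year, points_per_school_year)
```

On those assumptions, a full school year corresponds to roughly a fifth of a standard deviation--small per day, but non-trivial cumulatively.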
There was no measurable effect of school days on the fluid intelligence tests. This result might mean that these cognitive skills are unaffected by schooling, but it might also mean that the "dose" of schooling was too small to have an impact, or that the measure was insensitive to the effect that schooling has on fluid intelligence.
Carlsson, M., Dahl, G. B., & Rooth, D-O. (2012). The Effect of Schooling on Cognitive Skills. NBER Working Paper No. 18484, October 2012.
Last June I posted a blog entry
about training working memory, focusing on a study by Tom Redick and his colleagues, which concluded that training working memory might boost performance on the practiced task but would not improve fluid intelligence. (Measures of fluid intelligence are highly correlated with measures of working memory, and improving intelligence would be most people's purpose in undergoing working memory training.)
I recently received an email from
Martin Walker, of MindSparke.com, which offers brain training. Walker politely argued that the study is not ecologically valid: that is, the conclusions may be accurate for the conditions used in the study, but those conditions do not match the ones typically encountered outside the laboratory. Here's the critical text of his email, reprinted with his permission: "There is a significant problem with the design of the study that invalidates all of the hard work of the researchers--training frequency. The paper states that the average participant completed his or her training in 46 days. This is an average frequency of about 3 sessions per week. In our experience this frequency is insufficient. The original Jaeggi study enforced a training frequency of 5 days per week. We recommend at least 4 or 5 days per week.
With the participants taking an average of 46 days to complete the training, the majority of the participants did not train with sufficient frequency to achieve transfer. The standard deviation was 13.7 days, which indicates that about 80% of the trainees trained less frequently than necessary. What’s more, the training load was further diluted by forcing each session to start at n=1 (for the first four sessions) or n=2, rather than starting where the trainee last left off."
I forwarded the email to Tom Redick, who replied: "Your comment about the frequency of training was something that, if not in the final version of the manuscript, was questioned during the review process. Perhaps it would’ve been better to have all subjects complete all 20 training sessions (plus the mid-test transfer session) within a shorter prescribed amount of time, which would have led to the frequency of training sessions being increased per week. Logistically, having subjects from off-campus come participate complicated matters, but we did that in an effort to ensure that our sample of young adults was broader in cognitive ability than other cognitive training studies that I’ve seen. This was particularly important given that our funding came from the Office of Naval Research – having all high-ability 18-22 year old Georgia Tech students would not be particularly informative for the application of dual n-back training to enlisted recruits in the Army and Marines.
However, I don’t really know of literature that indicates the frequency of training sessions is a moderating factor of the efficacy of cognitive training, especially in regard to dual n-back training. If you know of studies that indicate 4-5 days per week is more effective than 2-3 days per week, I’d be interested in looking at it.
As mentioned in our article, the Anguera et al. (2012) article that did not include the matrix reasoning data reported in the technical report by Seidler et al. (2010) did not find transfer from dual n-back training to either BOMAT or RAPM [Bochumer Matrices Test and Raven's Advanced Progressive Matrices, both measures of fluid intelligence], despite the fact that “Participants came into the lab 4–5 days per week (average = 4.5 days) for approximately 25 min of training per session” (Anguera et al., 2012), for a minimum of 22 training sessions. In addition, Chooi and Thompson (2012) administered dual n-back to participants for either 8 or 20 days, and “Participants trained once a day (for about 30 min), four days a week”. They found no transfer to a battery of gF and gC tests, including RAPM.
In our data, I correlated the amount of dual n-back practice gain (using the same method as Jaeggi et al) during training and the number of days it took to finish all 20 practice sessions (and 1 mid-test session). I would never really trust a correlation of N = 24 subjects, but the correlation was r = -.05. I re-analyzed our data, looking only at those dual n-back and visual search training subjects that completed the 20 training and 1 mid-test session within 23-43 days, meaning they did an average of at least 3 sessions of training per week. For the 8 gF tasks (the only ones I analyzed), there was no hint of an interaction or pattern suggesting transfer from dual n-back."
So to boil Redick's response down to a sentence: other studies have observed no impact on intelligence when using a training regimen closer to the one Walker advocates, and Redick finds no such effect in a follow-up analysis of his own data (although I'm betting he would acknowledge that the experiment was not designed to address this question, and so does not offer the most powerful means of addressing it).
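Redick's caution about an N = 24 correlation is easy to quantify. Using the standard Fisher z-transform (my calculation, not anything from the exchange), a 95% confidence interval for r = -.05 with 24 subjects runs from roughly -0.44 to +0.36; that is, the data are consistent with anything from a moderate negative to a moderate positive relationship:

```python
import math

# Redick's reported correlation between training gain and days to finish
r, n = -0.05, 24

z = math.atanh(r)              # Fisher z-transform of r
se = 1 / math.sqrt(n - 3)      # standard error of z
lo = math.tanh(z - 1.96 * se)  # back-transform the CI bounds to r
hi = math.tanh(z + 1.96 * se)
print(round(lo, 2), round(hi, 2))  # roughly -0.44 and 0.36
```

With only 24 subjects the interval is so wide that the near-zero point estimate tells us little either way, which is presumably why Redick leaned on the restricted-sample re-analysis instead.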
So it does not seem that training frequency is crucial. A final note: Walker commented in another email that customers of MindSparke consistently feel that the training helps, and Redick remarked that subjects in his experiments have the same impression. The impression just isn't borne out in their performance.
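A footnote on Walker's arithmetic: his figure that about 80% of trainees trained less frequently than necessary checks out if completion times are treated as roughly normal (an assumption of mine; Walker doesn't say how he computed it). Finishing 20 sessions at his recommended minimum of four per week means completing the study within 35 days; with a mean of 46 days and a standard deviation of 13.7, about 79% of participants would have taken longer than that:

```python
from statistics import NormalDist

mean_days, sd = 46, 13.7       # completion times reported in the Redick study
sessions, per_week = 20, 4     # Walker's minimum recommended pace
threshold = sessions / per_week * 7  # 35 days to finish at 4 sessions/week

# Fraction of a normal distribution of completion times above the threshold
frac_too_slow = 1 - NormalDist(mean_days, sd).cdf(threshold)
print(round(frac_too_slow, 2))  # about 0.79
```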
Psychologists have long looked to Oxford University Press for top-flight works of original scholarship and useful synthesis volumes. Now Oxford is publishing a new series, Fundamentals of Cognition, designed to serve as very brief summaries of the state of a field, suitable for an undergraduate course or as the key reading in a beginning graduate course.
The first volume has been published: Fundamentals of Comparative Cognition, by Sara Shettleworth, and if it's any indication of the quality of future volumes, Oxford has done very well indeed.
In a mere 124 pages Shettleworth offers the reader a good (though necessarily hurried) look at comparative cognition: the field that asks what humans have in common with other creatures in how they think, and what makes human cognition unique.
As she reviews highlights of this complex literature, Shettleworth shows us some of the key principles of comparative cognition. For example, different species might use very different cognitive strategies to solve the same problem: to orient in space, species might use dead reckoning, vectors, landmarks, route-learning or cognitive maps.
Another example: because animals have abilities different from ours, humans may be insensitive to how they experience a problem. Because the visual systems of some birds and honeybees extend into the ultraviolet range, a scientist looking at a brightly colored flower or plumage may mistake what a bird or bee is responding to.
Another key principle, one that has frustrated many an undergraduate, is Lloyd Morgan's Canon: boiled down, it means that one shouldn't interpret animal behavior as reflecting more sophisticated cognition if simpler cognition will do. It's natural to interpret an animal behavior as reflecting the cognitive processes humans would invoke in that situation, but the animal may be doing what humans do for very different reasons or by very different methods.
Most often, this "other mechanism" is simple association. Time and time again, Shettleworth points out that what looks like sophisticated communication, say, or empathy, is explainable by the operation of relatively simple associative models, and that more work is needed to persuade us that the claimed cognitive process is actually at work. Such reading leads to momentary frustration, but ultimately to admiration for the care of the scientists.
So how exactly are other species different from humans? First, I should repeat that species are all different from one another, and so the question that might interest us (as it interested Darwin) is whether humans are in any way unique. Shettleworth closes with a review of a few proposed answers (e.g., Mike Tomasello's suggestion that humans alone cooperatively share intentions) but ultimately casts her vote for none of them.
This is a wonderful book for a reader with a bit of background in psychology, but make no mistake, it’s not popular reading. Shettleworth sets out to review the field, not to offer choice bits to tempt a reader who was not otherwise interested.
Should educators read this book? Direct applications to educational practice are unlikely to spring to mind, but educators who, as part of their practice, are deeply immersed in understanding human cognition and development will likely find it of value.