One of the great intellectual pleasures is to hear an idea that not only seems right, but that strikes you as so terribly obvious (now that you've heard it) that you're in disbelief no one has made the point before. I tasted that pleasure this week, courtesy of a paper by Walter Boot and colleagues (2013). The paper concerned the adequacy of control groups in intervention studies--interventions like (but not limited to) "brain games" meant to improve cognition, and the playing of video games, thought to improve certain aspects of perception and attention.
To appreciate the point made in this paper, consider what a control group is supposed to be and do. It is supposed to be a group of subjects as similar to the experimental group as possible, except for the critical variable under study.
The performance of the control group is to be compared to the performance of the experimental group, which should allow an assessment of the impact of the critical variable on the outcome measure.
Now consider video gaming or brain training. Subjects in an experiment might very well guess the suspected relationship between the critical variable and the outcome. They have an expectation as to what is likely to happen. If they do, then there might be a placebo effect--people perform better on the outcome test simply because they expect that the training will help, just as some people feel less pain when given a placebo that they believe is an analgesic.
The standard way to deal with that problem is to use an "active control." That means that the control group doesn't do nothing--they do something, but it's something that the experimenter does not believe will affect the outcome variable. So in some experiments testing the impact of action video games on attention and perception, the active control plays slow-paced video games like Tetris or Sims.
The purpose of the active control is that it is supposed to make expectations equivalent in the two groups. Boot et al.'s simple and valid point is that it probably doesn't do that. People don't believe playing Sims will improve attention.
The experimenters gathered some data on this point. They had subjects watch a brief video demonstrating what an action video game was like or what the active control game was like. Then they showed them videos of the measures of attention and perception that are often used in these experiments. And they asked subjects "if you played the video game a lot, do you think it would influence how well you would do on those other tasks?"
And sure enough, people think that action video games will help on measures of attention and perception. Importantly, they don't think that they would have an impact on a measure like story recall. And subjects who saw the game Tetris were less likely to think it would help the perception measures, but were more likely to say it would help with mental rotation.
In other words, subjects see the underlying similarities between games and the outcome measures, and they figure that higher similarity between them means a greater likelihood of transfer.
As the authors note, this problem is not limited to the video gaming literature; the need for an active control that deals with subject expectations also applies to the brain training literature.
More broadly, it applies to studies of classroom interventions. Many of these studies don't use active controls at all. The control is business-as-usual.
In that case, I suspect you have double the problem. You not only have the placebo effect affecting students, you also have one set of teachers asked to do something new, and another set teaching as they typically do. It seems at least plausible that the former will be extra reflective on their practice--they would almost have to be--and that alone might lead to improved student performance.
It's hard to say how big these placebo effects might be, but this is something to watch for when you read research in the future.
Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problems with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8, 445-454.
If you've followed this series of blog postings throughout the week, great. If you haven't, let me catch you up. Well, actually that's not very realistic.
But in two sentences: I wrote a blog entry about why it's so difficult to apply neuroscientific data to educational practice, while claiming toward the end that doing so was possible. This week I sought to specify how it's possible by describing five techniques for bringing neuroscientific data to bear on education: Technique 1, Technique 2, Technique 3, Technique 4, Technique 5.
I realize that the posts on the specific techniques probably included more detail than would interest many readers of this blog, so let me highlight the takeaways: 1) It can be done. It's being done. There is a backlash cresting against neuropop--"check out these brain pictures taken during Rush Limbaugh's show!"--and this backlash is justified. (See Gary Marcus's nuanced take on this issue in the New Yorker.) But the fact that there is neuro-garbage in education does not mean we should dismiss brain science as an aid to the development of useful education practices. 2) These methods are indirect. (Technique 5, which concerns early identification of learning difficulties, is not, but neither does it hold the promise of intervention.)
Neuroscience will help education by informing behavioral theory, which we will then try to use to improve educational practice.
I draw two implications from the indirect path from neuroscience to education practice. First, when you hear someone say they’ve got a way to improve education that is “based on brain research” they are either (1) misinformed because it’s (at best) neuroscience that has informed a behavioral theory or (2) willfully trying to humbug you. I know certain people will find this implication insulting. Well, prove me wrong.
Second, the indirect path makes clear that this is complicated stuff. The pathway from neuroscientific data to educational practice will be crooked, moving back and forth between neural and behavioral theory. We are going to creep and lurch toward useful applications, not sprint. Brilliant work is being conducted by people like Stan Dehaene, Daniel Ansari, and others, but payoffs will require patience. As I've written about at length elsewhere, it's also important to bear in mind that not everything we care about in education is open to scientific investigation. I'm an enthusiastic booster of using scientific knowledge to improve educational practice. For that use to be effective, we need to bear in mind the limitations of science. The scientific method is very good for addressing some challenges in education and irrelevant to others. I think it's useful to be very explicit about which is which.
This is Day 5 of my week-long series of posts on the use of neuroscientific data in educational practice. There is another post today, summing things up. It's here. Links to previous posts: Challenges in applying neuroscientific data to education. Day 1: Basic architecture. Day 2: Single cell inspiration. Day 3: Reliable neuro-knowledge. Day 4: Confirm a construct.
Today's technique differs from the other four. Those concerned how you could use neuroscientific data to improve a behavioral theory, which you would then use to improve education outcomes. Neuroscientific data also shows promise in helping with the early identification of learning problems. The best-studied of these is dyslexia.
It would be very useful indeed to know with confidence which children will have difficulty learning to read. The earlier the intervention, the better. Traditionally, one would use behavioral measures like word attack, reading fluency, or phonological processing. Typically, a battery of tests would be used. (One intriguing new study suggests that a measure of visuo-spatial attention may be a good predictor of later reading difficulty: Franceschini et al., 2012.) But there is evidence that structural differences in the brains of children who will later have trouble learning to read are present before reading onset (Raschle, Chang & Gaab, 2011; Raschle, Zuk & Gaab, 2012). If dyslexia has a neural basis present before reading instruction begins, might you be able to identify children who will very likely have significant trouble with reading before instruction ever begins? A number of laboratories have been working on this problem, and progress is being made.
These researchers are not looking to toss out behavioral measures--they are looking to supplement them. The more successful of these efforts (e.g., Hoeft et al., 2007) show that behavioral measures predict reading problems, neuroscientific measures predict reading problems, and using both types of data provides better prediction than either measure alone. In other words, the neuroscientific data is capturing information not captured by the behavioral measures, and vice versa.
This is not an easy problem to solve, but progress seems likely.
Franceschini, S., Gori, S., Ruffino, M., Pedrolli, K., & Facoetti, A. (2012). A causal link between visual spatial attention and reading acquisition. Current Biology, 22, 814-819.
Hoeft, F., Ueno, T., Reiss, A. L., Meyler, A., Whitfield-Gabrieli, S., Glover, G. H., ... & Gabrieli, J. D. (2007). Prediction of children's reading skills using behavioral, functional, and structural neuroimaging measures. Behavioral Neuroscience.
Raschle, N. M., Chang, M., & Gaab, N. (2011). Structural brain alterations associated with dyslexia predate reading onset. NeuroImage.
Raschle, N. M., Zuk, J., & Gaab, N. (2012). Functional characteristics of developmental dyslexia in left-hemispheric posterior brain regions predate reading onset. Proceedings of the National Academy of Sciences.
For example, suppose I notice that if someone reads a phone number and then is distracted, he can remember the phone number for about 30 seconds or less. If he's distracted longer, the phone number is forgotten.
I suggest that there is a mental structure called a short-term memory system, which can store information for about 30 seconds. Short-term memory is an abstract construct; it's a proposed mechanism of the mind, which I think will help explain behavior.
Now it's clear that I've simply invented this idea of a short-term memory and that seems like a problem. "Oh people remember things for 30 seconds? That must mean you've got a remember-for-30-seconds mechanism in your mind!" I need something better to persuade people (and myself) that this entity actually helps explain how people think.
But now suppose I use functional brain imaging during that 30 seconds. I test 20 people and find that the same network of three brain areas is always active. Haven't I now seen short term memory in action? And doesn't this support my theory?
No and no.
To understand why not, suppose instead that I proposed a theory that there is a "cafeteria navigation" module in the brain whose sole purpose is to help you select items when you're in a cafeteria. And suppose I conduct an elaborate experiment where people wear virtual reality goggles and see a virtual cafeteria that they navigate while I image their brains. Lo and behold, there is a network of six brain areas that is active in every one of my subjects during this task! I've found the cafeteria navigation system! It must be real!
Here's the problem. Finding activation is not interesting because mental activity is going to cause brain activity somewhere. Some part of the brain is always going to "light up" during a task. It proves nothing.
A more reasonable interpretation of my cafeteria study is this: people have brain systems that support vision, decision making, movement, spatial processing, etc. When given a complex task (e.g., cafeteria navigation) they recruit the appropriate systems to get the job done. The "cafeteria navigation system" is a dumb theory because it applies to just one task.
How do we know what the real brain systems are then, if "finding" them via brain imaging doesn't work?
Well, if we think systems ought to support lots of different tasks, that's a clue. This is a general desideratum of science, not particular to psychology. It's okay to make up theoretical entities that can't be observed if they can account for a lot of data.
In the most famous example, Newton readily admitted that he didn't know what gravity was. And gravity was very peculiar: it was a force that purportedly had action between two objects instantaneously at great distances, with nothing intervening. Newton's reply was that, peculiar as the entity might be, it was a crucial part of a theory that accounted very well for an enormous amount of data.
Likewise, it's legitimate for me to propose something like "short term memory" if it's part of a theory that accounts for a lot of data. But the mere fact that some part of the brain is active during what I claim to be a task tapping short-term memory doesn't help my case. I need to show that "short term memory" helps to account for data.
So can brain imaging do anything to help verify that a theoretical construct is useful? Yes. It can serve as a dependent measure.
Here's a problem I face in persuading you that my proposed construct, short-term memory, is legitimate. I need to show that short-term memory participates in lots of tasks (so it's not like the cafeteria navigation task). But how do I know that short-term memory is at work during a task? Presumably there would be some sign in your behavior that it's at work. But in addition, if I've previously shown that three brain areas, A, B, and C, support short-term memory, then A, B, and C ought to be active during any task that requires short-term memory. Now I have a way of verifying that short-term memory contributes to a task, and that's useful to me, because one of my goals is to show that it's important in many different tasks.
Further, I can use this fact (A, B, and C will be active) to show that my theory of short-term memory is well developed. I can devise two tasks that look very, very similar, but that I (with my terrific theory in hand) can predict differ in the extent to which they tap short-term memory. So one task will make the three areas active and the other task won't, even though the tasks look very similar. Or I can devise two tasks that look wildly different but that my theory predicts both tap short-term memory, and so will show overlapping activation in areas A, B, and C.
Tomorrow: A highly practical application, and the big wrap-up.
When most people are asked certain questions (e.g., "What shape are Snoopy's ears?" or "In which hand does the Statue of Liberty hold her torch?") they report that they answer the question by creating an image in their mind's eye of Snoopy or the Statue, and then inspect the image.
During the 1970's there was a lively (often acrimonious) debate as to the mental representation that allows people to answer this sort of question. Some researchers (most visibly, Stephen Kosslyn) argued that we use mental representations that are inherently spatial, and that visual mental imagery overlaps considerably with visual perception. Other researchers (most visibly, Zenon Pylyshyn) argued that the feeling of inspecting an image may be real, but the inspection does not support your answering the question. The mental representation that allows you to say "the torch is in her right hand" is actually linguistic, and therefore has a lot in common with the language system.
Although these two theories seem radically different, it was actually quite difficult (technically it was impossible--Anderson, 1978) to distinguish between them via behavioral experiments only.
But even 30 years ago, we knew enough about the brain to gather neuroscientific data that helped to decide between these competing theories. We knew some of the key brain areas supporting visual perception (red circle below) and we knew some of those supporting language representations (green circle).
So if imagery is like perception, damage to the red-circle areas should lead not only to problems with vision, but problems with imagery. But if imagery is based on linguistic representations, we'd expect damage to the green-circle areas to compromise imagery. A 1988 review of existing neurological cases (Farah, 1988) supported the former theory, not the latter.
Functional brain imaging makes corresponding predictions about which parts of the brain will be active during imagery tasks, and again, the data supported the theory positing that imagery relies on representations similar to those supporting visual perception (e.g., Le Bihan et al, 1993).
Can this technique be applied to education? Yes.
For example, Shaywitz et al (1998) imaged the brains of typical readers and readers diagnosed with dyslexia during tasks of increasing phonological complexity. A number of differences in activation emerged, as shown below (click for larger image).
The numbers refer to a system of distinguishing cortical areas--they are called Brodmann's areas. Ant = anterior (front of the brain). Post = posterior (back of the brain).
The lighter areas correspond to less activation, the darker areas to greater activation. As phonological processing gets more difficult, typical readers show increased activation in a network of areas at the back of the brain--dyslexic readers show a smaller increase in those areas. They also show a greater increase toward the front of the brain.
What's especially interesting from our perspective today is that the researchers interpreted these data in light of other findings suggesting that they already knew what certain brain areas do.
Thus, the authors highlighted the abnormal activity in area 39 (the angular gyrus) which they suggested is crucial for translating visual input into sound; they pointed to other research showing that people who were typical readers suddenly had trouble reading if that brain area suffered damage.
Thus another way we can use neuroscientific data is to leverage what we already know about the brain to test behavioral theories.
We have to note that it's also pretty easy to get fooled with this method. Obviously, if we think we know what a brain area supports but we're wrong, we'll draw the wrong conclusion. Then too, we might be right about what the brain area supports, but it may support more than one behavior.
For example, there are good data showing that the amygdala is important in the processing of some emotions, especially fear. But it would be hasty (and wrong) to suppose that every time the amygdala is active, it's because the person is experiencing strong emotion. Amygdala activation is observed in many circumstances, and it very likely participates in a number of functions.
So using our knowledge of the brain to inform behavioral theories is great, but it's easy to screw up.
Tomorrow: the most common misconception about brain imaging and how to correct it.
Anderson, J. R. (1978). Arguments concerning representation for mental imagery. Psychological Review, 85, 249-277.
Farah, M. J. (1988). Is visual mental imagery really visual? Overlooked evidence from neuropsychology. Psychological Review, 95, 307-317.
Le Bihan, D., Turner, R., Zeffiro, T. A., Cuenod, C. A., Jezzard, P., & Bonnerot, V. (1993). Activation of human primary visual cortex during visual recall: A magnetic resonance imaging study. Proceedings of the National Academy of Sciences, 90, 11802-11805.
Shaywitz, S. E., et al. (1998). Functional disruption in the organization of the brain for reading in dyslexia. Proceedings of the National Academy of Sciences, 95, 2636-2641.
This is the second of my week-long series of posts about how neuroscientific data might be used in education. (First post here. Last week's complaints about neuro-garbage in education products here.)
Single cell recording allows an investigator to record the activity of an individual neuron. Different techniques are available, but most commonly a rat (or cat, or other non-human animal) will undergo surgery under anesthesia that allows an anchoring device to be affixed to the skull. The device serves as a guide for a microelectrode to be placed in the brain region of interest. The microelectrode measures changes in electrical potential just outside a neuron--changes associated with an action potential. In other words, it measures each time the neuron "fires." When the animal recovers from surgery, researchers can "eavesdrop" on the activity of individual neurons while the animal is awake and behaving.
The goal is to figure out what makes the neuron fire. The technique is to expose the animal to many different situations, and to note what makes the neuron you're recording from respond maximally.
For example, you might record from a cell in the primary visual cortex (left) and present a bunch of stimuli on a screen: a picture of a human face, a cat's face, a cat's face in profile, a triangle, a circle, a bicycle, a car, and so forth.
David Hubel and Torsten Wiesel performed exactly this experiment in the 1950s and reported that cells in the primary visual cortex of cats responded maximally to simple lines of particular orientations.
Cognitive psychology was just getting going at this time, and some researchers (e.g., Oliver Selfridge) drew inspiration from these findings. They thought "hmm, here we are, trying to figure out a basic unit of representation for vision. . . the "bits" out of which more complex visual experience will be built. Hubel & Wiesel have good evidence that the basic "bits" for the brain are lines. So maybe we should try to model vision using lines."
A cartoon of a visual processing model. Processing moves from left to right. The letter R is the thing in the environment, and the image demon would be something like the retinal image. That is broken up into features, analogous to Hubel & Wiesel's simple shapes.
The result was a series of models in the early 1960s that used lines as the starting point for complex visual processing. So another method of integrating neuroscientific data into behavioral theory is using data from single-cell recording studies to make an educated guess as to what the brain codes, and then using that guess as the foundational representation in a cognitive model. How does this relate to education? This technique is not often used, but one example might be John Stein's (2001) magnocellular theory of dyslexia, which puts cells in the magnocellular layer of the lateral geniculate nucleus of the thalamus (click here for image) in a central role in dyslexia. These cells are crucial for timing of rapid events, including (Stein argues) the stability of eye fixation when you move your eyes (including when you move your eyes as you read); hence, kids with dyslexia are more likely to have the image of letters slip out of the field of view, as well as other problems. In addition, they will have deficits in hearing that are also traceable to problems with precise timing, and these hearing problems also affect reading.
Stein's work has its roots in single cell recording work that first distinguished the role of cells in the magnocellular layer from those in the parvocellular layer (e.g., Derrington, 1984).
Yesterday we saw that neuroscientific data might provide a researcher with clues about the large-scale architecture supporting a cognitive process. Today we have moved to a much finer level of detail; instead of the overall plan, neuroscientific data provided hints about the nature of the building blocks. Tomorrow, Method 3. References:
Derrington, A. M. (1984). Spatial and temporal contrast sensitivities of neurons in lateral geniculate nucleus of macaque. Journal of Physiology, 219-240.
Selfridge, O. G., & Neisser, U. (1960). Pattern recognition by machine. Scientific American, 203, 60-68.
Stein, J. (2001). The magnocellular theory of developmental dyslexia. Dyslexia, 7, 12-36.
Neuroscience--especially human neuroscience, and more especially human functional brain imaging--has had quite a run in the last twenty years. In the first decade the advances were known mostly to scientists. In the last ten years there have been plenty of articles in the popular press featuring brain images. Many of these articles have been breathless and silly. Some backlash was inevitable, and one of the more potent examples was a recent op-ed in the New York Times.
Still, as Gary Marcus pointed out in a nice blog piece, we would be wise not to throw the baby out with the bath water.
In that vein, I am following up on a piece I wrote last week, in which I argued that much of the work on this topic in education is neuro-garbage. Most of the piece was devoted to explaining why it's difficult to apply neuroscience to education. (I left it to the reader to infer that it's correspondingly easy to be glib.)
Toward the end of that piece I suggested that neuroscience can and has been usefully applied to problems in education. This week I'll describe how. I'll tackle one method each day this week. I'll keep things as simple as possible, but fasten your seatbelt if you feel the need.
Neuroscience can give researchers clues about the basic architecture of a cognitive process. It can show that a cognitive process might be more complex than we would have otherwise guessed, or that it's simpler.
Consider the figure below from Dehaene et al. (2003) (click it for a larger version).
This figure summarizes a great deal of work indicating that there are three representations of number in the brain: a core quantity system (red), numbers in verbal form (green), and attentional orientation on the number line.
Suppose I am an educational psychologist, trying to figure out how children develop concepts of number, and how to coordinate the teaching of early mathematics with these concepts. I must have a theory of how number is represented in the mind. It's possible--actually, it's likely--that I would think of number as one thing, that children have one concept of the number five, for example. But this neuroscientific work indicates that the brain might use three representations of number. So it might be wise for me to use three representations in my cognitive theory of mathematics (which will support my educational theory).
In this example, there is greater diversity (three representations) where we might have guessed that we'd see simplicity (one representation). The opposite may also happen.
In one example, neuroscientific data were useful in interpreting variations in dyslexia across languages.
One of the peculiarities of dyslexia is that some key symptoms vary across different languages. For example, people with dyslexia usually show a large disparity between visual word recognition and IQ. But that disparity tends to be much larger in languages in which the spelling-sound correspondence is often inconsistent (e.g., English) than in languages where it's more consistent (e.g., Italian).
This pattern raises the question: is what we're calling "dyslexia" really the same thing in English and Italian? Maybe reading difficulties are so intertwined with the language you're learning to read that it doesn't make sense to call problems by the same label when they apply to English vs. Italian. Or maybe the problems kids develop in English-speaking vs. Italian-speaking countries are due to differences in the way reading tends to be taught in different countries.
Eraldo Paulesu and his colleagues (2001) used brain imaging data to argue that dyslexia is the same disorder in readers of different languages. They showed that the same brain region in left temporal cortex shows reduced activation during reading in French, Italian, and British readers who have been diagnosed with dyslexia.
Hence in this case neuroscientific data has shown us that there is simplicity (one reading problem) where we could have reasonably thought there was greater diversity (different reading problems across languages).
EDIT: It's worth adding that anatomic separability (or overlap) doesn't guarantee cognitive separability or identity. But it's an indicator.
Tomorrow: Method 2.
Dehaene, S., Piazza, M., Pinel, P., & Cohen, L. (2003). Three parietal circuits for number processing. Cognitive Neuropsychology, 20, 487-506.
Paulesu, E., Demonet, J.-F., Fazio, F., McCrory, E., Chanoine, V., Brunswick, N., Cappa, S. F., Cossu, G., Habib, M., Frith, C. D., & Frith, U. (2001). Dyslexia: Cultural diversity and biological unity. Science, 291, 2165-2167.
Neuroscience reporting: unimpressive.
A recent article in the New York Times reported on some backlash against inaccurate reporting on neuroscience. (It included name-checks for some terrific blogs, including Neurocritic, Mind Hacks, and Dorothy Bishop's Blog.) The headline ("Neuroscience: Under Attack") was inaccurate, but the issue raised is important; there is some sloppy reporting and writing on neuroscience.
How does education fare in this regard?
There is definitely a lot of neuro-garbage in the education market. Sometimes it's the use of accurate but ultimately pointless neuro-talk that's mere window dressing for something that teachers already know (e.g., explaining the neural consequences of exercise to persuade teachers that recess is a good idea for third-graders).
Other times the neuroscience is simply inaccurate (exaggerations regarding the differences between the left and right hemispheres, for example).
You may have thought I was going to mention learning styles.
Well, learning styles is not a neuroscientific claim; it's a claim about the mind. But it's often presented as a brain claim, and that error is perhaps the most instructive. You see, people who want to talk to teachers about neuroscience will often present behavioral findings (e.g., the spacing effect)--as though they are neuroscientific findings.
What's the difference, and who cares? Why does it matter whether the science that leads to a useful classroom application is neuroscience or behavioral?
It matters because it gets to the heart of how and when neuroscience can be applied to educational practice. And when a writer doesn't seem to understand these issues, I get anxious that he or she is simply blowing smoke.
Let's start with behavior. Applying findings from the laboratory is not straightforward. Why? Consider this question. Would a terrific math tutor who has never been in a classroom before be a good teacher? Well, maybe. But we recognize that tutoring one-on-one is not the same thing as teaching a class. Kids interact, and that leads to new issues, new problems. Similarly, a great classroom teacher won't necessarily be a great principal.
This problem--that collections don't behave the same way as individuals--is pervasive.
Similarly, knowing something about a cognitive process--memory, say--is useful, but it's not guaranteed to translate "upwards" the way you expect. Just as children interact, making the classroom more than a collection of kids, so too cognitive processes interact, making a child's mind more than a collection of cognitive processes. That's why we can't take lab findings and pop them right into the classroom. To use my favorite painfully obvious example, lab findings consistently show that repetition is good for memory. But you can't mindlessly implement that in schools--"keep repeating this til you've got it, kids." Repetition is good for memory, but terrible for motivation.
I've called this the vertical problem (Willingham, 2009). You can't assume that a finding at one level will work well at another level.

When we add neuroscience, there's a second problem. It's easiest to appreciate this way. Consider that in schools, the outcomes we care about are behavioral: reading, analyzing, calculating, remembering. These are the ways we know the child is getting something from schooling. At the end of the day, we don't really care what her hippocampus is doing, so long as these behavioral landmarks are in place. Likewise, most of the things that we can change are behavioral. We're not going to plant electrodes in the child's brain to get her to learn--we're going to change her environment and encourage certain behaviors. A notable exception is when we suspect that there is a pharmacological imbalance, and we try to use medication to restore it. But mostly, what we do is behavioral and what we hope to see is behavioral. Neuroscience is outside the loop.
For neuroscience to be useful in the classroom, we've got to translate from the behavioral side to the neural side and then back again. I've called this the horizontal problem. The translation needed to use neuroscience in education can be done--it has been done--but it isn't easy. (I wrote about four techniques for doing it in Willingham & Lloyd, 2007.)
Now, let's return to the question we started with: does it matter if laboratory findings about behavior are presented as brain claims? I'm arguing it matters because it shows a misunderstanding of the relationship of mind, brain, and educational applications. As we've seen, behavioral sciences and neuroscience face different problems in application. Both face the vertical problem. The horizontal problem is particular to neuroscience. When people don't seem to appreciate the difference, that indicates sloppy thinking. Sloppy thinking is a good indicator of bad advice to educators. Bad advice means that neurophilia will become another flash in the pan, another fad of the moment in education, and in ten years' time policymakers (and funders) will say "Oh yeah, we tried that."
Neuroscience deserves better. With patience, it can add to our collective wisdom on education. At the moment, however, neuro-garbage is ascendant in education.

Willingham, D. T. (2009). Three problems in the marriage of neuroscience and education. Cortex, 45, 544-545.

Willingham, D. T., & Lloyd, J. W. (2007). How educational theories can use neuroscientific data. Mind, Brain, and Education.
It's not often that an initiative prompts grave concern in some and ridicule in others. The Gates Foundation managed it. The Foundation has funded a couple of projects to investigate the feasibility of developing a passive measure of student engagement, using galvanic skin response (GSR). The ridicule comes from an assumption that it won't work.

GSR basically measures how sweaty you are. Two leads are placed on the skin. One emits a very mild charge. The other measures the charge. The more sweat on your skin, the better it conducts the charge, so the better the second lead will pick up the charge.

Who cares how sweaty your skin is? Sweat--as well as heart rate, respiration rate, and a host of other physiological signs controlled by the peripheral nervous system--varies with your emotional state. Can you tell whether a student is paying attention from these data? It's at least plausible that it could be made to work.
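The arithmetic behind a GSR reading is just Ohm's law: conductance is the measured current divided by the applied voltage, and sweatier skin means higher conductance. A minimal sketch (all numbers here are illustrative, not taken from any real device):

```python
# Sketch of how a GSR reading becomes a skin-conductance value.
# Conductance G = I / V (Ohm's law); GSR devices typically report
# microsiemens. All values below are invented for illustration.

def skin_conductance_microsiemens(applied_volts: float,
                                  measured_microamps: float) -> float:
    """Return conductance in microsiemens from applied voltage and measured current."""
    return measured_microamps / applied_volts

# A sweatier palm conducts more current at the same applied voltage,
# so the computed conductance is higher:
dry = skin_conductance_microsiemens(0.5, 1.0)     # 2.0 microsiemens
sweaty = skin_conductance_microsiemens(0.5, 5.0)  # 10.0 microsiemens
assert sweaty > dry
```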
There has long been controversy over how separable different emotional states are, based on these sorts of metrics. It strikes me as a tough problem, and we're clearly not there yet, but the idea is far from kooky, and indeed, the people who have been arguing it's possible have been making some progress--one lab group says they've successfully distinguished engagement, relaxation, and stress.
(Admittedly, they gathered a lot more data than just GSR and one measure they collected was EEG, a measure of the central, not peripheral, nervous system.) The grave concern springs from the possible use to which the device would be put.
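To make concrete what "distinguishing states" from physiological metrics might mean computationally, here is a toy nearest-centroid sketch. Every feature, label, and number below is invented for illustration; a real system would learn such parameters from labeled training data and many more signals.

```python
# Toy nearest-centroid classifier over hypothetical physiological
# features: (GSR in microsiemens, heart rate in beats per minute).
# The centroids are made up for illustration only.

CENTROIDS = {
    "engaged":  (8.0, 85.0),
    "relaxed":  (3.0, 65.0),
    "stressed": (12.0, 100.0),
}

def classify(gsr: float, heart_rate: float) -> str:
    """Label a reading with the state whose centroid is nearest (squared distance)."""
    def dist2(centroid):
        return (gsr - centroid[0]) ** 2 + (heart_rate - centroid[1]) ** 2
    return min(CENTROIDS, key=lambda label: dist2(CENTROIDS[label]))

print(classify(7.5, 88))  # nearest to the "engaged" centroid
```

The real research problem, of course, is whether the states form separable clusters at all, not the classification step itself.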
A Gates Foundation spokeswoman says the plan is that a teacher would be able to tell, in real time, whether students are paying attention in class. (Earlier the Foundation website indicated that the grant was part of a program meant to evaluate teachers, but that was apparently an error
.) Some have objected that such measurement would be insulting to teachers. After all, can't teachers tell when their students are engaged, or bored, or frustrated? I'm sure some can, but not all of them. And it's a good bet that beginning teachers can't make these judgements as accurately as their more experienced colleagues, and beginners are just the ones who need this feedback. Presumably the information provided by the system would be redundant for teachers who can read it from their students' faces and body language, and these teachers will simply ignore it. I would hope that classroom use would be optional--GSR bracelets would enter classrooms only if teachers requested them.

Of greater concern to me are the rights of the students. Passive reading of physiological data without consent feels like an invasion of privacy. Parental consent ought to be obligatory.
Then too, what about HIPAA? What is the procedure if a system that measures heartbeat detects an irregularity? These two concerns--the effect on teachers and the effect on students--strike me as serious, and people with more experience than I have in ethics and in the law will need to think them through with great care. But I still think the project is a terrific idea, for two reasons, neither of which has received much attention in all the uproar.

First, even if the devices were never used in classrooms, researchers could put them to good use.
I sat in at a meeting a few years ago of researchers considering a grant submission (not to the Gates Foundation) on this precise idea--using peripheral nervous system data as an on-line measure of engagement. (The science involved here is not really in my area of expertise, and I had no idea why I was asked to be at the meeting, but that seems to be true of about two-thirds of the meetings I attend.) Our thought was that the device would be used by researchers, not teachers and administrators.

Researchers would love a good measure of engagement because the proponents of new materials or methods so often claim "increased engagement" as a benefit. But how are researchers supposed to know whether or not the claim is true? Teacher or student judgements of engagement are subject to memory loss and to well-known biases.

In addition, I see potentially great value for parents and teachers of kids with disabilities. For example, have a look at these two pictures.
This is my daughter Esprit. She's 9 years old, and she has Edwards syndrome. As a consequence, she has a host of cognitive and physical challenges: she cannot speak, and she has limited motor control and low muscle tone (she can't sit up unaided). Esprit can never tell me that she's engaged, either with words or signs. But I'm comfortable concluding that she is engaged at moments like the one captured in the top photo--she's turning the book over in her hands and staring at it intently. In the photo at the bottom, even I, her dad, am unsure of what's on her mind. (She looks sleepy, but isn't--ptosis, or drooping upper eyelids, is part of the profile.) If Esprit wore this expression while gazing toward a video, for example, I wouldn't be sure whether she was engaged by the video or was spacing out.

Are there moments when I would slap a bracelet on her if I thought it could measure whether or not she was engaged? You bet your sweet bippy there are. I'm not the first to think of using physiologic data to measure engagement in people with disabilities that make it hard for them to make their interests known.
In this article, researchers sought to reduce the communication barriers that exclude children with disabilities from social activities; the kids might be present, but because of their difficulties describing or showing their thoughts, they cannot fully participate in the group. The researchers reported some success in distinguishing engaged from disengaged states of mind from measures of blood volume pulse, GSR, skin temperature, and respiration in nine young adults with muscular dystrophy or cerebral palsy.

I respect the concerns of those who see the potential for abuse in the passive measurement of physiological data. At the same time, I see the potential for real benefit in such a system, wisely deployed. When we see the potential for abuse, let's quash that possibility, but let's not let it blind us to the good that might be done.

And finally, because Esprit didn't look very cute in the pictures above, I end with this picture.
An article from Education Week suggests that teachers ought to learn neuroscience. That strikes me as a colossal waste of teachers' time.

The offered justification is that a high percentage of teachers hold false beliefs about the brain, and thus ought to be "armed" to evaluate claims that they encounter in professional development sessions, the media, etc. But it takes an awful lot of work for any individual to become knowledgeable enough about neuroscience to evaluate new ideas. And why would it stop at neuroscience? One could make the same case for cognitive psychology, developmental psychology, social psychology, sociology, cultural studies, and economics, among other fields.

Further, this suggestion seems like unnecessary duplication
of effort. What's really needed is for a few
trusted educators to evaluate new ideas, and to periodically bring their colleagues up to date.
In fact, that's how the system is set up. But it's not working.

First, the neuro-myths mentioned in the article ought to be defused during teacher training. Some programs do so, I'm sure, but most appear not to be doing a good enough job. It's certainly true that textbooks aimed at teachers don't do enough in this regard. Learning styles, for example, go unmentioned, or perhaps get a paragraph in which the theory is (accurately) said to be lacking evidence. Given the pervasiveness of these myths, schools of education ought to address the problem with more vigor.

Second, there is virtually always someone in the district central office who is meant to be the resource person for professional development: is this PD session likely to be legit, or is this person selling snake oil? If teachers are exposed to PD with sham science, the right response, it seems to me, is not to suggest that teachers learn some neuroscience. The right response is outrage directed at the person who brought the knucklehead in to do the PD session.

Third, it would make perfect sense if professional groups helped out in this regard. The Department of Education has tried with the What Works Clearinghouse and with its various practice guides. These have had limited success. It might be time for teachers to take a try at this themselves.

Teachers don't need to learn neuroscience--or, better put, teachers shouldn't need to learn neuroscience--not to be protected from charlatans. Teachers need to learn things that will directly help their practice. Charlatan protection ought to come from institutions: from schools of education, from district central offices, and (potentially) from institutions of teachers' own creation.