Daniel Willingham--Science & Education
Hypothesis non fingo

When educational neuroscience works! The case of reading disability.

1/27/2014

 
I’ve often written that it’s hard to bring neuroscientific data to bear on issues in education (example here). Hard, but not impossible.  Dorothy Bishop offered similar concerns on her blog Saturday. A study from Guinevere Eden’s lab provides a great example of how it can be done.

It concerns the magnocellular theory of dyslexia (Stein, 2001). According to this theory, many varieties of reading disability have, at their core, a problem in the functioning of the magnocellular layer of the lateral geniculate nucleus of the thalamus. This layer of cells is known to be important in the processing of rapid motion, and people with developmental dyslexia are impaired on certain visual tasks entailing motion, such as detecting coherent motion amongst a subset of randomly moving dots or discriminating the speeds of objects.

The most widely accepted theory of reading disability points to a problem in phonological awareness—hearing individual speech sounds. The magnocellular theory emphasizes that phonological processing does not explain all of the data. There are visual problems in dyslexia as well. Proponents point to problems like letter transpositions and word substitutions while reading, and to visuo-motor coordination problems (Stein & Walsh, 1997; see figure below), although the pervasiveness of these symptoms is contested.
[Figure: Parts of the posterior parietal cortex heavily influenced by magnocellular projections (A) and expected consequences of magnocellular impairment observed in children with dyslexia (B). From Stein & Walsh (1997).]
Consistent with this hypothesis are post-mortem findings of cell volume differences in the magnocellular layer of dyslexics (Livingstone et al., 1991), deficits in motion detection in individuals with dyslexia (Cornelissen et al., 1995), and brain imaging studies showing reduced activity in cortical motion detection areas that are closely linked to the magnocellular system (e.g., Demb et al., 1997).

It’s certainly an interesting hypothesis, but the data have been correlational. Maybe learning to read somehow steps up magnocellular function. That’s where Eden and her team come in.

They compared kids with dyslexia to kids with typical reading development and found, as others have, reduced activity in the cortical motion detection area V5. But then they compared kids with dyslexia to kids who were matched for reading achievement (and were therefore younger). Now there were no V5 differences between groups. These data are inconsistent with the idea that kids with dyslexia have an impaired magnocellular system. They are consistent with the idea that reading improves magnocellular function. (Why? A reasonable guess would be that reading requires rapid shifts of visual attention.)
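To make the logic of those two comparisons concrete, here's a minimal sketch in Python. The numbers are invented, not Eden's data; the point is only the pattern of contrasts you'd expect if V5 activity tracks reading experience rather than the disorder.

```python
# Invented numbers, not Eden's data; V5 values are arbitrary activation units.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dyslexic     = rng.normal(1.0, 0.3, 20)  # kids with dyslexia
age_matched  = rng.normal(1.4, 0.3, 20)  # same age, typical readers
read_matched = rng.normal(1.0, 0.3, 20)  # younger, same reading level

# Contrast 1: dyslexic vs. age-matched controls -> V5 difference expected
print(stats.ttest_ind(dyslexic, age_matched))
# Contrast 2: dyslexic vs. reading-matched controls -> no V5 difference
print(stats.ttest_ind(dyslexic, read_matched))
```

If the first contrast is significant and the second is not, the age-matched difference is plausibly a consequence of reading experience, not a cause of the disability.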

In a second experiment, the researchers trained kids with dyslexia with a standard treatment protocol that focused on phonological awareness. V5 activity—which, again, is a cortical area concerned with motion processing--increased after the training! This result too, is consistent with the interpretation that reading prompts changes in magnocellular function.

These are pretty compelling data indicating that reading disability is not caused by a congenital problem in magnocellular functioning. We see differences in motion detection between kids with and without dyslexia because reading improves the system’s functioning.

The finding is interesting enough on its own, but I also want to point out that it’s a great example of how neuroscientific data can inform problems of interest to educators. About a year ago I wrote a series of blogs about techniques to solve this difficult problem.

Eden’s group used a technique in which brain activation is basically used as a dependent measure. Based on prior findings, researchers confidently interpreted V5 activity as a proxy for motion processing. Indistinguishable V5 activity (compared to reading-matched controls) was interpreted as evidence of a normally operating motion detection system--and therefore not the cause of reading disability.

I’m going out of my way to point out this success because I’ve so often said in the past that neuroscience applied to education has mostly been empty speculation, or the coopting of behavioral science with neuro-window-dressing.

And I don’t want educators to start abbreviating “brain science” as BS.

References:
Cornelissen, P., Richardson, A., Mason, A., Fowler, S., & Stein, J. (1995). Contrast sensitivity and coherent motion detection measured at photopic luminance levels in dyslexics and controls. Vision Research, 35, 1483-1494.

Demb, J. B., Boynton, G. M., & Heeger, D. J. (1997). Brain activity in visual cortex predicts individual differences in reading performance. PNAS, 94, 13363-13366.

Livingstone, M. S., Rosen, G. D., Drislane, F. W., & Galaburda, A. M. (1991). Physiological and anatomical evidence for a magnocellular defect in developmental dyslexia. PNAS, 88, 7943-7947.

Stein, J. (2001). The magnocellular theory of developmental dyslexia. Dyslexia, 7, 12-36.

Stein, J. & Walsh, V. (1997). To see but not to read: The magnocellular theory of dyslexia. Trends in Neurosciences, 20, 147-152.

Out of Control: Fundamental Flaw in Claims about Brain-Training

7/15/2013

 
One of the great intellectual pleasures is to hear an idea that not only seems right, but that strikes you as so terribly obvious (now that you've heard it) you're in disbelief that no one has ever made the point before.

I tasted that pleasure this week, courtesy of a paper by Walter Boot and colleagues (2013).

The paper concerned the adequacy of control groups in intervention studies--interventions like (but not limited to) "brain games" meant to improve cognition, and the playing of video games, thought to improve certain aspects of perception and attention.
[Figure: Control group]
To appreciate the point made in this paper, consider what a control group is supposed to be and do. It is supposed to be a group of subjects as similar to the experimental group as possible, except for the critical variable under study.

The performance of the control group is to be compared to the performance of the experimental group, which should allow an assessment of the impact of the critical variable on the outcome measure.

Now consider video gaming or brain training. Subjects in an experiment might very well guess the suspected relationship between the critical variable and the outcome. They have an expectation as to what is likely to happen. If they do, then there might be a placebo effect--people perform better on the outcome test simply because they expect that the training will help, just as some people feel less pain when given a placebo that they believe is an analgesic.

[Figure: Active control group]
The standard way to deal with that problem is to use an "active control." That means that the control group doesn't do nothing--they do something, but it's something that the experimenter does not believe will affect the outcome variable. So in some experiments testing the impact of action video games on attention and perception, the active control plays slow-paced video games like Tetris or The Sims.

The purpose of the active control is to make expectations equivalent in the two groups. Boot et al.'s simple and valid point is that it probably doesn't do that. People don't believe playing The Sims will improve attention.
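A toy simulation makes the worry concrete. Everything below is invented; the point is that when expectations differ between groups, a "treatment effect" appears even when the treatment itself does nothing.

```python
# Toy simulation of Boot et al.'s point, with invented parameters.
import numpy as np

rng = np.random.default_rng(1)
n = 100
true_training_effect = 0.0   # assume the games do nothing at all
placebo_weight = 0.5         # how much expectation boosts scores

expect_game    = 0.8         # action-game players expect gains
expect_control = 0.2         # slow-game players mostly don't

game_group = (rng.normal(0, 1, n) + true_training_effect
              + placebo_weight * expect_game)
control_group = rng.normal(0, 1, n) + placebo_weight * expect_control

# ~0.3 group difference, entirely placebo: the "active" control
# failed to equate expectations.
print(game_group.mean() - control_group.mean())
```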

The experimenters gathered some data on this point. They had subjects watch a brief video demonstrating what an action video game was like or what the active control game was like. Then they showed them videos of the measures of attention and perception that are often used in these experiments. And they asked subjects "if you played the video game a lot, do you think it would influence how well you would do on those other tasks?"

[Figure: Out of control group]
And sure enough, people think that action video games will help on measures of attention and perception. Importantly, they don't think that they would have an impact on a measure like story recall. And subjects who saw the game Tetris were less likely to think it would help the perception measures, but were more likely to say it would help with mental rotation.

In other words, subjects see the underlying similarities between games and the outcome measures, and they figure that higher similarity between them means a greater likelihood of transfer.

As the authors note, this problem is not limited to the video gaming literature; the need for an active control that deals with subject expectations also applies to the brain training literature.

More broadly, it applies to studies of classroom interventions. Many of these studies don't use active controls at all. The control is business-as-usual.

In that case, I suspect you have double the problem. You not only have the placebo effect affecting students, you also have one set of teachers asked to do something new, and another set teaching as they typically do. It seems at least plausible that the former will be extra reflective on their practice--they would almost have to be--and that alone might lead to improved student performance.

It's hard to say how big these placebo effects might be, but this is something to watch for when you read research in the future.

Reference

Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problem with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8, 445-454.

Final Thoughts on Neuroscience and Education--Please Don't Abbreviate Brain Science as B.S. 

12/7/2012

 
If you've followed this series of blog postings throughout the week, great. If you haven't, let me catch you up. Well, actually that's not very realistic.
But in two sentences: I wrote a blog entry about why it's so difficult to apply neuroscientific data to educational practice, but claimed toward the end that doing so was possible. This week I sought to specify how it's possible by describing five techniques for bringing neuroscientific data to bear on education: Technique 1, Technique 2, Technique 3, Technique 4, Technique 5.

I realize that the posts on the specific techniques probably included more detail than would interest many readers of this blog, so let me highlight the takeaways:

1) It can be done. It's being done. There is a backlash cresting against neuropop--"check out these brain pictures taken during Rush Limbaugh's show!"--and this backlash is justified. (See Gary Marcus's nuanced take on this issue in the New Yorker.) But the fact that there is neuro-garbage in education does not mean we should dismiss brain science as an aid to the development of useful education practices.

2) These methods are indirect. (Technique 5, which concerns early identification of learning difficulties, is not, but neither does it hold the promise of intervention.) Neuroscience will help education by informing behavioral theory, which we will then try to use to improve educational practice.

I draw two implications from the indirect path from neuroscience to education practice. First, when you hear someone say they’ve got a way to improve education that is “based on brain research” they are either (1) misinformed because it’s (at best) neuroscience that has informed a behavioral theory or (2) willfully trying to humbug you. I know certain people will find this implication insulting. Well, prove me wrong.

Second, the indirect path makes clear that this is complicated stuff. The pathway from neuroscientific data to educational practice will be crooked, moving back and forth between neural and behavioral theory. We are going to creep and lurch toward useful applications, not sprint. Brilliant work is being conducted by people like Stan Dehaene, Daniel Ansari, and others, but payoffs will require patience.

As I've written about at length elsewhere, it's also important to bear in mind that not everything we care about in education is open to scientific investigation. I'm an enthusiastic booster of using scientific knowledge to improve educational practice. For that use to be effective, we need to bear in mind the limitations of science. The scientific method is very good for addressing some challenges in education and irrelevant to others. I think it's useful to be very explicit about each.

Neurosci & Educ.--5 Days, 5 Ways. Day 5: Predicting Trouble

12/7/2012

 
This is Day 5 of my week-long series of posts on the use of neuroscientific data in educational practice.

There is another post today, summing things up. It's here.

Links to previous posts:

Challenges in applying neuroscientific data to education.
Day 1: Basic architecture
Day 2: Single cell inspiration
Day 3: Reliable neuro-knowledge
Day 4: Confirm a Construct

Today's technique differs from the other four. Those concerned how you could use neuroscientific data to improve a behavioral theory, which you would then use to improve education outcomes.

Neuroscientific data also shows promise in helping with the early identification of learning problems. The best-studied of these is dyslexia.

It would be very useful indeed to know with confidence which children will have difficulty learning to read. The earlier the intervention, the better.

Traditionally, one would use behavioral measures like word attack, reading fluency, or phonological processing. Typically, a battery of tests would be used. (One intriguing new study suggests that a measure of visuo-spatial attention may be a good predictor of later reading difficulty: Franceschini et al., 2012.)

But there is evidence that structural differences in the brains of children who will later have trouble learning to read are present before reading onset (Raschle, Chang & Gaab, 2011; Raschle, Zuk & Gaab, 2012). If dyslexia has a neural basis present before reading instruction begins, might you be able to identify children who will very likely have significant trouble with reading before instruction ever begins?

A number of laboratories have been working on this problem, and progress is being made.  These researchers are not looking to toss out behavioral measures--they are looking to supplement them. The more successful of these efforts (e.g., Hoeft et al., 2007) show that behavioral measures predict reading problems, neuroscientific measures predict reading problems, and using both types of data provides better prediction than either measure alone. In other words, the neuroscientific data is capturing information not captured by the behavioral measures, and vice versa.
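To see what "better prediction than either measure alone" means in practice, here's a hedged sketch of the incremental-validity logic, with simulated data standing in for the real measures (nothing below comes from Hoeft et al.).

```python
# Sketch of incremental prediction with invented data, not Hoeft et al.'s.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 200
behavioral = rng.normal(size=(n, 2))   # e.g., phonological test scores
neural     = rng.normal(size=(n, 2))   # e.g., structural brain measures

# Risk depends on both kinds of measure, so each carries unique information.
risk = behavioral[:, 0] + neural[:, 0] + rng.normal(size=n)
label = (risk > 0).astype(int)         # 1 = later reading trouble

for name, X in [("behavioral only", behavioral),
                ("behavioral + neural", np.hstack([behavioral, neural]))]:
    acc = cross_val_score(LogisticRegression(), X, label, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```

The combined model should score higher, mirroring the finding that each type of data captures information the other misses.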

This is not an easy problem to solve, but progress seems likely.




References

Franceschini, S., Gori, S., Ruffino, M., Pedrolli, K., & Facoetti, A. (2012). A causal link between visual spatial attention and reading acquisition. Current Biology, 22, 814-819.

Hoeft, F., Ueno, T., Reiss, A. L., Meyler, A., Whitfield-Gabrieli, S., Glover, G. H., ... & Gabrieli, J. D. (2007). Prediction of children's reading skills using behavioral, functional, and structural neuroimaging measures. Behavioral Neuroscience, 121, 602-613.

Raschle, N. M., Chang, M., & Gaab, N. (2011). Structural brain alterations associated with dyslexia predate reading onset. Neuroimage, 57, 742-749.

Raschle, N. M., Zuk, J., & Gaab, N. (2012). Functional characteristics of developmental dyslexia in left-hemispheric posterior brain regions predate reading onset. Proceedings of the National Academy of Sciences, 109, 2156-2161.

Neurosci & Educ.--5 days, 5 ways. Day 4: Confirm a Construct

12/6/2012

 
This is Day 4 of my week-long series of posts on the use of neuroscientific data in educational practice. 

Links to previous posts:

Challenges in applying neuroscientific data to education.
Day 1: Basic architecture
Day 2: Single cell inspiration
Day 3: Reliable neuro-knowledge

Today I'll tackle what is probably the most common misinterpretation of human brain-imaging data.

It's almost irresistible to interpret brain imaging results as making visible, and thereby confirming, some abstract construct you use to account for behavioral data. By abstract construct I mean some entity that you've invented that's meant to account for data.
For example, suppose I notice that if someone reads a phone number and is then distracted, he can remember the phone number for about 30 seconds or less. If he's distracted longer, the phone number is forgotten.

I suggest that there is a mental structure called a short-term memory system, which can store information for about 30 seconds. Short-term memory is an abstract construct; it's a proposed mechanism of the mind, which I think will help explain behavior.

Now it's clear that I've simply invented this idea of a short-term memory and that seems like a problem. "Oh people remember things for 30 seconds? That must mean you've got a remember-for-30-seconds mechanism in your mind!" I need something better to persuade people (and myself) that this entity actually helps explain how people think.

But now suppose I use functional brain imaging during that 30 seconds. I test 20 people and find that the same network of three brain areas is always active. Haven't I now seen short term memory in action? And doesn't this support my theory?

No and no.

To understand why not, suppose instead that I proposed a theory that there is a "cafeteria navigation" module in the brain whose sole purpose is to help you select items when you're in a cafeteria. And suppose I conduct an elaborate experiment where people wear virtual reality goggles and see a virtual cafeteria that they navigate while I image their brains. Lo and behold, there is a network of six brain areas that is active in every one of my subjects during this task! I've found the cafeteria navigation system! It must be real!

Here's the problem. Finding activation is not interesting because mental activity is going to cause brain activity somewhere. Some part of the brain is always going to "light up" during a task. It proves nothing.

A more reasonable interpretation of my cafeteria study is this: people have brain systems that support vision, decision making, movement, spatial processing, etc. When given a complex task (e.g., cafeteria navigation) they recruit the appropriate systems to get the job done. The "cafeteria navigation system" is a dumb theory because it applies to just one task.

How do we know what the real brain systems are then, if "finding" them via brain imaging doesn't work?

Well, if we think systems ought to support lots of different tasks, that's a clue. This is a general desideratum of science, not particular to psychology. It's okay to make up theoretical entities that can't be observed if they can account for a lot of data.

In the most famous example, Newton readily admitted that he didn't know what gravity was. And gravity was very peculiar: it was a force that purportedly acted between two objects instantaneously at great distances, with nothing intervening. Newton's reply was that, peculiar as the entity might be, it was a crucial part of a theory that accounted very well for an enormous amount of data.

Likewise, it's legitimate for me to propose something like "short term memory" if it's part of a theory that accounts for a lot of data. But the mere fact that some part of the brain is active during what I claim to be a task tapping short-term memory doesn't help my case. I need to show that "short term memory" helps to account for data.

So can brain imaging do anything to help verify that a theoretical construct is useful? Yes. It can serve as a dependent measure.

Here's a problem I face in persuading you that my proposed construct, short-term memory, is legitimate. I need to show that short-term memory participates in lots of tasks (so it's not like the cafeteria navigation task). But how do I know that short-term memory is at work during a task? Presumably there would be some sign in your behavior that it's at work. But in addition, if I've previously shown that three brain areas, A, B, and C, support short-term memory, then A, B, and C ought to be active during any task that requires short-term memory. Now I have a way of verifying that short-term memory contributes to a task, and that's useful to me, because one of my goals is to show that it's important in many different tasks.

Further, I can use this fact (A, B, and C will be active) to show that my theory of short-term memory is well developed. I can devise two tasks that look very similar, but that I (with my terrific theory in hand) can predict differ in the extent to which they tap short-term memory. So one task will make the three areas active and the other task won't, even though the tasks look very similar. Or I can devise two tasks that look wildly different but that my theory predicts both tap short-term memory, and so will show overlapping activation in areas A, B, and C.
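Here's a bare-bones sketch of that logic, with hypothetical regions and made-up activation values: treat the A-B-C pattern as a signature, and ask how well a new task's activation profile matches it.

```python
# Hypothetical regions and numbers, purely to illustrate using
# activation as a dependent measure.
import numpy as np

regions = ["A", "B", "C", "V1", "M1"]
stm_signature = np.array([1.0, 1.0, 1.0, 0.0, 0.0])  # the claimed STM network

# Two lookalike tasks: my theory says only the first taps short-term memory.
task_taps_stm = np.array([0.9, 1.1, 0.8, 0.2, 0.1])
task_no_stm   = np.array([0.1, 0.2, 0.1, 0.9, 0.8])

def signature_match(task, signature):
    """Correlation between a task's activation profile and the signature."""
    return np.corrcoef(task, signature)[0, 1]

print(signature_match(task_taps_stm, stm_signature))  # high
print(signature_match(task_no_stm, stm_signature))    # low (negative)
```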

Tomorrow: A highly practical application, and the big wrap-up.

Neurosci & Educ--5 Days, 5 Ways. Day 3: Reliable Neuro-knowledge

12/5/2012

 
This is the third entry in my week-long series of posts about how neuroscientific data might be used in education. Links to previous entries:

Last week's description of the challenges in using neuro data in education.
Day 1: Basic Architecture
Day 2: Single Cell Inspiration

Most of the time, neuroscientific research is designed to tell us something about the brain, or about brain-behavior relationships. Occasionally, we are in a position to use neuroscientific data to inform purely behavioral theories.

A great example of this principle comes from the debates of the 1970's and 1980's regarding the basis of visual mental imagery.
[Figure: Oops]
When most people are asked certain questions (e.g., "What shape are Snoopy's ears?" or "In which hand does the Statue of Liberty hold her torch?") they report that they answer the question by creating an image in their mind's eye of Snoopy or the Statue, and then inspect the image.

During the 1970's there was a lively (often acrimonious) debate as to the mental representation that allows people to answer this sort of question. Some researchers (most visibly, Stephen Kosslyn) argued that we use mental representations that are inherently spatial, and that visual mental imagery overlaps considerably with visual perception.

Other researchers (most visibly, Zenon Pylyshyn) argued that the feeling of inspecting an image may be real, but the inspection does not support your answering the question. The mental representation that allows you to say "the torch is in her right hand" is actually linguistic, and therefore has a lot in common with the language system.

Although these two theories seem radically different, it was actually quite difficult (technically it was impossible--Anderson, 1978) to distinguish between them via behavioral experiments only.

But even 30 years ago, we knew enough about the brain to gather neuroscientific data that helped to decide between these competing theories. We knew some of the key brain areas supporting visual perception (red circle below) and we knew some of those supporting language representations (green circle).

[Figure: Brain areas supporting visual perception (red circle) and language (green circle)]
So if imagery is like perception, damage to the red-circle areas should lead not only to problems with vision, but problems with imagery. But if imagery is based on linguistic representations, we'd expect damage to the green-circle areas to compromise imagery. A 1988 review of existing neurological cases (Farah, 1988) supported the former theory, not the latter.

Functional brain imaging makes corresponding predictions about which parts of the brain will be active during imagery tasks, and again, the data supported the theory positing that imagery relies on representations similar to those supporting visual perception (e.g., Le Bihan et al, 1993).

Can this technique be applied to education? Yes.

For example, Shaywitz et al. (1998) imaged the brains of typical readers and readers diagnosed with dyslexia during tasks of increasing phonological complexity. A number of differences in activation emerged, as shown below.
[Figure: Activation differences between typical and dyslexic readers, from Shaywitz et al. (1998). The numbers refer to a system for distinguishing cortical areas--they are called Brodmann's areas. Ant = anterior (front of the brain). Post = posterior (back of the brain).]
The lighter areas correspond to less activation, the darker areas to greater activation. As phonological processing gets more difficult, typical readers show increased activation in a network of areas at the back of the brain--dyslexic readers show a smaller increase in those areas. They also show a greater increase toward the front of the brain.

What's especially interesting from our perspective today is that the researchers interpreted these data in light of other findings suggesting that we already know what certain brain areas do.

Thus, the authors highlighted the abnormal activity in area 39 (the angular gyrus) which they suggested is crucial for translating visual input into sound; they pointed to other research showing that people who were typical readers suddenly had trouble reading if that brain area suffered damage. 

Thus another way we can use neuroscientific data is to leverage what we already know about the brain to test behavioral theories.

We have to note that it's also pretty easy to get fooled with this method. Obviously, if we think we know what a brain area supports but we're wrong, we'll draw the wrong conclusion. Then too, we might be right about what the brain area supports, but it may support more than one behavior.

For example, there are good data showing that the amygdala is important in the processing of some emotions, especially fear. But it would be hasty (and wrong) to suppose that every time the amygdala is active, it's because the person is experiencing strong emotion. Amygdala activation is observed in many circumstances, and it very likely participates in a number of functions.

So using our knowledge of the brain to inform behavioral theories is great, but it's easy to screw up.

Tomorrow: the most common misconception about brain imaging and how to correct it.


References:

Anderson, J. R. (1978). Arguments concerning representation for mental imagery. Psychological Review, 85, 249-277.

Farah, M. J. (1988). Is visual mental imagery really visual? Overlooked evidence from neuropsychology. Psychological Review, 95, 307-317.

Le Bihan, D., Turner, R., Zeffiro, T. A., Cuenod, C. A., Jezzard, P., & Bonnerot, V. (1993). Activation of human primary visual cortex during visual recall: A magnetic resonance imaging study. Proceedings of the National Academy of Sciences, 90, 11802-11805.

Shaywitz, S. E., et al. (1998). Functional disruption in the organization of the brain for reading in dyslexia. Proceedings of the National Academy of Sciences, 95, 2636-2641.

Neurosci & Educ--5 days, 5 Ways. Day 2: Single Cell Inspiration

12/4/2012

 
This is the second of my week-long series of posts about how neuroscientific data might be used in education. (First post here. Last week's complaints about neuro-garbage in education products here.)

Method 2:

Single cell recording allows an investigator to record the activity of an individual neuron. Different techniques are available, but most commonly a rat (or cat, or other non-human animal) will undergo surgery under anesthesia that allows an anchoring device to be affixed to the skull. The device serves as a guide for a microelectrode to be placed in the brain region of interest. The microelectrode measures changes in electrical potential just outside a neuron--changes associated with an action potential. In other words, it measures each time the neuron "fires."

When the animal recovers from surgery, researchers can "eavesdrop" on the activity of individual neurons while the animal is awake and behaving.

The goal is to figure out what makes the neuron fire. The technique is to expose the animal to many different situations, and to note what makes the neuron you're recording from respond maximally.
[Figure: Primary visual cortex]
For example, you might record from a cell in the primary visual cortex (left) and present a bunch of stimuli on a screen: a picture of a human face, a cat's face, a cat's face in profile, a triangle, a circle, a bicycle, a car, and so forth.

David Hubel and Torsten Wiesel performed exactly this experiment in the 1950s and reported that cells in the primary visual cortex of cats responded maximally to simple lines of a particular orientation.
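To make "responds maximally to lines of a particular orientation" concrete, here's a toy tuning-curve model with made-up parameters--a standard way to caricature such cells, not Hubel and Wiesel's actual data.

```python
# A toy orientation tuning curve (invented parameters, not real data).
import numpy as np

def firing_rate(stim_deg, preferred_deg=45.0, width_deg=15.0, peak_hz=50.0):
    """Model firing rate of a V1 simple cell for a line at stim_deg."""
    # Circular distance between orientations (orientation repeats every 180 deg)
    d = (stim_deg - preferred_deg + 90) % 180 - 90
    return peak_hz * np.exp(-d**2 / (2 * width_deg**2))

for angle in [0, 30, 45, 60, 90]:
    print(f"{angle:3d} deg -> {firing_rate(angle):5.1f} spikes/s")
# The cell fires hardest at its preferred 45 degrees and falls off steeply.
```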

Cognitive psychology was just getting going at this time, and some researchers (e.g., Oliver Selfridge) drew inspiration from these findings. They thought, "Hmm, here we are, trying to figure out a basic unit of representation for vision... the 'bits' out of which more complex visual experience will be built. Hubel & Wiesel have good evidence that the basic 'bits' for the brain are lines. So maybe we should try to model vision using lines."
[Figure: A cartoon of a visual processing model. Processing moves from left to right. The letter R is the thing in the environment, and the image demon would be something like the retinal image. That is broken up into features, analogous to Hubel & Wiesel's simple shapes.]

The result was a series of models in the early 1960s that used lines as the starting point for complex visual processing.
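Here's a toy sketch in the spirit of those models (my illustration, not Selfridge's actual Pandemonium program): letters are scored by how many of the input's line features they match.

```python
# Toy feature-based letter recognition, in the spirit of Pandemonium.
# Feature names and letter definitions are invented for illustration.
LETTER_FEATURES = {
    "A": {"left_diagonal", "right_diagonal", "horizontal_bar"},
    "H": {"left_vertical", "right_vertical", "horizontal_bar"},
    "L": {"left_vertical", "bottom_horizontal"},
}

def recognize(features):
    """Pick the letter whose feature set best overlaps the input."""
    scores = {letter: len(features & feats)
              for letter, feats in LETTER_FEATURES.items()}
    return max(scores, key=scores.get), scores

print(recognize({"left_vertical", "right_vertical", "horizontal_bar"}))
# -> ('H', ...): the detector for H "shouts loudest"
```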

So another method of integrating neuroscientific data into behavioral theory is using data from single-cell recording studies to make an educated guess as to what the brain codes, and then using the guess as the foundational representation in a cognitive model.

How does this relate to education?

This technique is not often used, but one example might be John Stein's (2001) magnocellular theory of dyslexia, which puts cells in the magnocellular layer of the lateral geniculate nucleus of the thalamus in a central role in dyslexia. These cells are crucial for the timing of rapid events, including (Stein argues) the stability of eye fixation when you move your eyes, as you do when you read; hence, kids with dyslexia are more likely to have the image of letters slip out of the field of view, as well as other problems. In addition, they will have deficits in hearing that are also traceable to problems with precise timing, and these hearing problems also affect reading.

Stein's work has its roots in single cell recording work that first distinguished the role of cells in the magnocellular layer from those in the parvocellular layer (e.g., Derrington, 1984).



Yesterday we saw that neuroscientific data might provide a researcher with clues about the large-scale architecture supporting a cognitive process. Today we have moved to a much finer level of detail; instead of the overall plan, neuroscientific data provided hints about the nature of the building blocks.

Tomorrow, Method 3.

References:

Derrington, A. M. (1984). Spatial and temporal contrast sensitivities of neurons in lateral geniculate nucleus of macaque. Journal of Physiology, 357, 219-240.

Selfridge, O. G., & Neisser, U. (1960). Pattern recognition by machine. Scientific American, 203, 60-68.

Stein, J. (2001). The magnocellular theory of developmental dyslexia. Dyslexia, 7, 12-36.

Neurosci & Educ--5 Days, 5 Ways. Day 1: Basic Architecture

12/3/2012

 
Neuroscience--especially human neuroscience, and more especially human functional brain imaging--has had quite a run in the last twenty years. In the first decade the advances were known mostly to scientists. In the last ten years there have been plenty of articles in the popular press featuring brain images. Many of these articles have been breathless and silly. Some backlash was inevitable, and one of the more potent examples was a recent op-ed in the New York Times. Still, as Gary Marcus pointed out in a nice blog piece, we would be wise not to throw the baby out with the bath water.

In that vein, I am following up on a piece I wrote last week, in which I argued that much of the work on this topic in education is neuro-garbage. Most of the piece was devoted to explaining why it's difficult to apply neuroscience to education. (I left it to the reader to infer that it's correspondingly easy to be glib.)
Toward the end of that piece I suggested that neuroscience can and has been usefully applied to problems in education. This week I'll describe how. I'll tackle one method each day this week.

I'll keep things as simple as possible, but fasten your seatbelt if you feel the need.

Method 1:

Neuroscience can give researchers clues about the basic architecture of a cognitive process. It can show that a cognitive process might be more complex than we would otherwise have guessed, or simpler.

Consider the figure below from Dehaene et al. (2003).
[Figure: Three parietal circuits for number processing, from Dehaene et al. (2003)]
This figure summarizes a great deal of work indicating that there are three representations of number in the brain: a core quantity system (red), numbers in verbal form (green), and attentional orientation on the number line.

Suppose I am an educational psychologist, trying to figure out how children develop concepts of number, and how to coordinate the teaching of early mathematics with these concepts. I must have a theory of how number is represented in the mind. It's possible--actually, it's likely--that I would think of number as one thing, that children have one concept of the number five, for example. But this neuroscientific work indicates that the brain might use three representations of number. So it might be wise for me to use three representations in my cognitive theory of mathematics (which will support my educational theory).

In this example, there is greater diversity (three representations) where we might have guessed that we'd see simplicity (one representation). The opposite may also happen.

In one example, neuroscientific data were useful in interpreting variations in dyslexia across languages.

One of the peculiarities of dyslexia is that some key symptoms vary across different languages. For example, people with dyslexia usually show a large disparity between visual word recognition and IQ. But that disparity tends to be much larger in languages in which the spelling-sound correspondence is often inconsistent (e.g., English) than in languages where it's more consistent (e.g., Italian).
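If "spelling-sound consistency" seems abstract, here's a toy way to quantify it, using tiny invented lexicons: count how many pronunciations each spelling pattern gets.

```python
# Toy illustration of spelling-sound consistency; the mini-lexicons
# and pseudo-phonetic spellings are invented for illustration.
from collections import defaultdict

english = [("ough", "uff"), ("ough", "oh"), ("ough", "ow"),
           ("ea", "ee"), ("ea", "eh")]
italian = [("ca", "ka"), ("chi", "ki"), ("gli", "lyi"), ("ca", "ka")]

def consistency(pairs):
    sounds = defaultdict(set)
    for spelling, sound in pairs:
        sounds[spelling].add(sound)
    # Average pronunciations per spelling pattern (1.0 = fully consistent)
    return sum(len(s) for s in sounds.values()) / len(sounds)

print("English:", consistency(english))  # > 1: inconsistent
print("Italian:", consistency(italian))  # = 1: consistent
```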

This pattern raises the question: is what we're calling "dyslexia" really the same thing in English and Italian? Maybe reading difficulties are so intertwined with the language you're learning to read that it doesn't make sense to call problems by the same label when they apply to English vs. Italian. Or maybe the problems kids develop in English-speaking vs. Italian-speaking countries are due to differences in the way reading tends to be taught in different countries.

Eraldo Paulesu and his colleagues (2001) used brain imaging data to argue that dyslexia is the same disorder in readers of different languages. They showed that the same brain region in left temporal cortex shows reduced activation during reading in French, Italian, and British readers who have been diagnosed with dyslexia.
Hence in this case neuroscientific data have shown us that there is simplicity (one reading problem) where we could reasonably have thought there was greater diversity (different reading problems across languages).

EDIT: It's worth adding that anatomic separability (or overlap) doesn't guarantee cognitive separability or identity. But it's an indicator.

Tomorrow: Method 2.

References:
Dehaene, S., Piazza, M., Pinel, P., & Cohen, L. (2003). Three parietal circuits for number processing. Cognitive Neuropsychology, 20, 487-506.

Paulesu, E., Demonet, J.-F., Fazio, F., McCrory, E., Chanoine, V., Brunswick, N., Cappa, S. F., Cossu, G., Habib, M., Frith, C. D., & Frith, U. (2001). Dyslexia: Cultural diversity and biological unity. Science, 291, 2165-2167.

Neuroscience Applied to Education: Mostly Unimpressive

11/26/2012

 
[Figure: Neuroscience reporting: unimpressive.]
An op-ed in the New York Times reported on some backlash against inaccurate reporting on neuroscience. (It included name-checks for some terrific blogs, including Neurocritic, Neurobonkers, Neuroskeptic, Mind Hacks, Dorothy Bishop's Blog). The headline ("Neuroscience: Under Attack") was inaccurate, but the issue raised is important; there is some sloppy reporting and writing on neuroscience.

How does education fare in this regard?

There is definitely a lot of neuro-garbage in the education market.

Sometimes it's the use of accurate but ultimately pointless neuro-talk that's mere window dressing for something that teachers already know (e.g., explaining the neural consequences of exercise to persuade teachers that recess is a good idea for third-graders).

Other times the neuroscience is simply inaccurate (exaggerations regarding the differences between the left and right hemispheres, for example).
 
You may have thought I was going to mention learning styles.

Well, learning styles is not a neuroscientific claim; it's a claim about the mind. But it's often presented as a brain claim, and that error is perhaps the most instructive. You see, people who want to talk to teachers about neuroscience will often present behavioral findings (e.g., the spacing effect) as though they are neuroscientific findings.

What's the difference, and who cares? Why does it matter whether the science that leads to a useful classroom application is neuroscience or behavioral?

It matters because it gets to the heart of how and when neuroscience can be applied to educational practice. And when a writer doesn't seem to understand these issues, I get anxious that he or she is simply blowing smoke.

Let's start with behavior. Applying findings from the laboratory is not straightforward. Why? Consider this question. Would a terrific math tutor who has never been in a classroom before be a good teacher? Well, maybe. But we recognize that tutoring one-on-one is not the same thing as teaching a class. Kids interact, and that leads to new issues, new problems. Similarly, a great classroom teacher won't necessarily be a great principal.

This problem--that collections don't behave the same way as individuals--is pervasive.

Similarly, knowing something about a cognitive process--memory, say--is useful, but it's not guaranteed to translate "upwards" the way you expect. Just as children interact, making the classroom more than a collection of kids, so too cognitive processes interact, making a child's mind more than a collection of cognitive processes.

That's why we can't take lab findings and pop them right into the classroom. To use my favorite painfully obvious example, lab findings consistently show that repetition is good for memory. But you can't mindlessly implement that in schools--"keep repeating this til you've got it, kids." Repetition is good for memory, but terrible for motivation.

I've called this the vertical problem (Willingham, 2009). You can't assume that a finding at one level will work well at another level.

When we add neuroscience, there's a second problem. It's easiest to appreciate this way. Consider that in schools, the outcomes we care about are behavioral: reading, analyzing, calculating, remembering. These are the ways we know the child is getting something from schooling. At the end of the day, we don't really care what her hippocampus is doing, so long as these behavioral landmarks are in place.

Likewise, most of the things that we can change are behavioral. We're not going to plant electrodes in the child's brain to get her to learn--we're going to change her environment and encourage certain behaviors. A notable exception is when we suspect that there is a pharmacological imbalance, and we try to use medication to restore it. But mostly, what we do is behavioral and what we hope to see is behavioral. Neuroscience is outside the loop. 
For neuroscience to be useful in the classroom we've got to translate from the behavioral side to the neural side and then back again. I've called this the horizontal problem (Willingham, 2009).

The translation to use neuroscience in education can be done--it has been done--but it isn't easy. (I wrote about four techniques for doing it here, Willingham & Lloyd, 2007).

Now, let's return to the question we started with: does it matter if claims about laboratory findings about behavior are presented as brain claims?

I'm arguing it matters because it shows a misunderstanding of the relationship of mind, brain, and educational applications.

As we've seen, behavioral sciences and neuroscience face different problems in application. Both face the vertical problem. The horizontal problem is particular to neuroscience. 

When people don't seem to appreciate the difference, that indicates sloppy thinking. Sloppy thinking is a good indicator of bad advice to educators. Bad advice means that neurophilia will become another flash in the pan, another fad of the moment in education, and in ten years' time policymakers (and funders) will say "Oh yeah, we tried that."

Neuroscience deserves better. With patience, it can add to our collective wisdom on education. At the moment, however, neuro-garbage is ascendant in education.

EDIT:
I thought it was worth elaborating on the methods whereby neuroscientific data CAN be used to improve education:
Method 1
Method 2
Method 3
Method 4
Method 5
Conclusions

Willingham, D. T. (2009). Three problems in the marriage of neuroscience and education. Cortex, 45, 544-545.
Willingham, D. T., & Lloyd, J. W. (2007). How educational theories can use neuroscientific data. Mind, Brain, & Education, 1, 140-149.

The Gates Foundation's "engagement bracelets"

6/26/2012

 
It's not often that an initiative prompts grave concern in some and ridicule in others. The Gates Foundation managed it.

The Foundation has funded a couple of projects to investigate the feasibility of developing a passive measure of student engagement, using galvanic skin response (GSR).

The ridicule comes from an assumption that it won't work.

GSR basically measures how sweaty you are. Two leads are placed on the skin. One emits a very very mild charge. The other measures the charge. The more sweat on your skin, the better it conducts the charge, so the better the second lead will pick up the charge.
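The underlying arithmetic is just Ohm's law: apply a tiny known voltage, measure the current, and conductance is current divided by voltage. A sketch with illustrative values:

```python
# GSR arithmetic via Ohm's law (illustrative numbers only).
# Sweatier skin -> higher conductance.
def skin_conductance_uS(applied_volts, measured_amps):
    """Return skin conductance G = I / V, in microsiemens."""
    return (measured_amps / applied_volts) * 1e6

print(skin_conductance_uS(0.5, 2.5e-6))  # 5.0 uS, a plausible resting level
```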

Who cares how sweaty your skin is?

Sweat--as well as heart rate, respiration rate, and a host of other physiological signs controlled by the peripheral nervous system--varies with your emotional state.

Can you tell whether a student is paying attention from these data? 

It's at least plausible that it could be made to work. There has long been controversy over how separable different emotional states are, based on these sorts of metrics. It strikes me as a tough problem, and we're clearly not there yet, but the idea is far from kooky, and indeed, the people who have been arguing it's possible have been making some progress--this lab group says they've successfully distinguished engagement, relaxation, and stress. (Admittedly, they gathered a lot more data than just GSR, and one measure they collected was EEG, a measure of the central, not peripheral, nervous system.)

The grave concern springs from the possible use to which the device would be put.

A Gates Foundation spokeswoman says the plan is that a teacher would be able to tell, in real time, whether students are paying attention in class. (Earlier the Foundation website indicated that the grant was part of a program meant to evaluate teachers, but that was apparently an error.)

Some have objected that such measurement would be insulting to teachers. After all, can't teachers tell when their students are engaged, or bored, or frustrated, etc.?

I'm sure some can, but not all of them. And it's a good bet that beginning teachers can't make these judgements as accurately as their more experienced colleagues, and beginners are just the ones who need this feedback. Presumably the information provided by the system would be redundant for teachers who can read it in their students' faces and body language, and these teachers will simply ignore it.

I would hope that classroom use would be optional--GSR bracelets would enter classrooms only if teachers requested them.

Of greater concern to me are the rights of the students. Passive reading of physiological data without consent feels like an invasion of privacy. Parental consent ought to be obligatory. Then too, what about HIPAA? What is the procedure if a system that measures heartbeat detects an irregularity?

These two concerns--the effect on teachers and the effect on students--strike me as serious, and people with more experience than I have in ethics and in the law will need to think them through with great care.

But I still think the project is a terrific idea, for two reasons, neither of which has received much attention in all the uproar.

First, even if the devices were never used in classrooms, researchers could put them to good use.

I sat in at a meeting a few years ago of researchers considering a grant submission (not to the Gates Foundation) on this precise idea--using peripheral nervous system data as an on-line measure of engagement. (The science involved here is not really in my area of expertise, and I had no idea why I was asked to be at the meeting, but that seems to be true of about two-thirds of the meetings I attend.) Our thought was that the device would be used by researchers, not teachers and administrators.

Researchers would love a good measure of engagement because the proponents of new materials or methods so often claim "increased engagement" as a benefit. But how are researchers supposed to know whether or not the claim is true? Teacher or student judgements of engagement are subject to memory loss and to well-known biases.

In addition, I see potentially great value for parents and teachers of kids with disabilities. For example, have a look at these two pictures.
[Photos: Esprit with a book (top) and with an ambiguous expression (bottom)]
This is my daughter Esprit. She's 9 years old, and she has Edwards syndrome. As a consequence, she has a host of cognitive and physical challenges; e.g., she cannot speak, and she has limited motor control and poor muscle tone (she can't sit up unaided).

Esprit can never tell me that she's engaged either with words or signs. But I'm comfortable concluding that she is engaged at moments like that captured in the top photo--she's turning the book over in her hands and staring at it intently.

In the photo at the bottom, even I, her dad, am unsure of what's on her mind. (She looks sleepy, but isn't--ptosis, or drooping upper eyelids, is part of her profile.) If Esprit wore this expression while gazing toward a video, for example, I wouldn't be sure whether she was engaged by the video or was spacing out.

Are there moments that I would slap a bracelet on her if I thought it could measure whether or not she was engaged?

You bet your sweet bippy there are. 

I'm not the first to think of using physiologic data to measure engagement in people with disabilities that make it hard to make their interests known. In this article, researchers sought to reduce the communication barriers that exclude children with disabilities from social activities; the kids might be present, but because of their difficulties describing or showing their thoughts, they cannot fully participate in the group.  Researchers reported some success in distinguishing engaged from disengaged states of mind from measures of blood volume pulse, GSR, skin temperature, and respiration in nine young adults with muscular dystrophy or cerebral palsy.
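For the curious, here's the general shape of the classifier such studies train--invented data and feature values, standing in for the real physiological recordings.

```python
# Sketch of an engagement classifier on invented physiological data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 120
# Feature columns: GSR (uS), heart rate (bpm), respiration (breaths/min)
engaged    = rng.normal([8.0, 85, 18], [1.5, 8, 2], size=(n, 3))
disengaged = rng.normal([4.0, 70, 14], [1.5, 8, 2], size=(n, 3))

X = np.vstack([engaged, disengaged])
y = np.array([1] * n + [0] * n)   # 1 = engaged, 0 = disengaged

clf = LogisticRegression().fit(X, y)
print(clf.predict([[7.5, 82, 17]]))  # -> [1], classified as engaged
```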

I respect the concerns of those who see the potential for abuse in the passive measurement of physiological data. At the same time, I see the potential for real benefit in such a system, wisely deployed.

When we see the potential for abuse, let's quash that possibility, but let's not let it blind us to the possibility of the good that might be done.

And finally, because Esprit didn't look very cute in the pictures above, I end with this picture.

[Photo: Esprit]