Dear Governor McDonnell,

We understand that during a recent radio appearance you suggested that "there should at least be a discussion" of the idea that some school officials might carry guns so that they would be able to respond in the event of a school shooting.

We thank you for the invitation to discuss the matter. We think it's a terrible idea.
The likely outcomes, were this idea implemented, are predictable. We'll start to read news stories about a school official who mistakenly shoots a student he thought was threatening. About a school official who blows her top and shoots a teacher or student. About a student who blows his top, wrests the gun from the school official, and shoots him. About a depressed school official who commits suicide on school grounds. Such events are predictable because we know that a gun kept in a home for self-defense is more likely to be used for other violent purposes than for protection. But we're sure you have good scientific advisers who have told you all this. (If you don't, let Dan know. He can put you in touch with a few.)

There's another aspect of this proposal that perhaps you didn't consider. We can't speak for all teachers, but we do speak for ourselves: one of us is an instructor at a public university, the other an elementary school teacher.

Carrying a weapon, even for the purpose of self-defense, conflicts with the very essence of why we teach. We love to teach because we love to communicate to students the beauty of the world, and to help them see beauty they did not know was there.
We love to teach because teaching is about creation: the creation of new knowledge, the creation of better minds, and yes, the creation of a better commonwealth, nation, and world.

We love to teach because we want to build--to build competence
, self-confidence, and character in our students.
Can you see why carrying firearms, even for the purpose of self-defense, is not our first choice for a solution? It conflicts with beauty, with creation, with building. It is a "solution" by destruction, even if it is the destruction of a wretched, desperate soul.
To us, it is an admission of failure. It is a "solution" born of desperation. We will be saying to students "We do not know how to prevent these events, so we plan a response."
We are not ready to admit defeat. There are positive measures that have yet to be tried. Better mental health screening, better education of gun owners on firearms security, tighter laws regulating their sale. Let's not throw in the towel. Let's try some positive steps and see if we can improve things. Let's do everything we can to make schools a place of serenity, joy, and contemplation, not a place of
"security" paid for with the grim, anxious vigilance of teachers and students.

Respectfully,
Dan & Trisha Willingham
Keswick, Virginia
The PIRLS results are better than you may realize.
Last week, the results of the 2011 Progress in International Reading Literacy Study (PIRLS) were published. This test compared reading ability in 4th grade children.
U.S. fourth-graders ranked 6th among 45 participating countries. Even better, U.S. kids scored significantly higher than the last time the test was administered, in 2006.
There's a significant factor that is often forgotten in these discussions: differences in orthography across languages.
Lots of factors go into learning to read. The most obvious is learning to decode--learning the relationship between letters and (in most languages) sounds. Decode is an apt term. The correspondence of letters and sound is a code that must be cracked.
In some languages the correspondence is relatively straightforward, meaning that a given letter or combination of letters reliably corresponds to a given sound. Such languages are said to have a shallow orthography. Examples include Finnish, Italian, and Spanish.
In other languages, the correspondence is less consistent. English is one such language. Consider the letter sequence "ough." How should that be pronounced? It depends on whether it's part of the word "cough," "through," "although," or "plough." In these languages, there are more multi-letter sound units, more context-dependent rules, and more out-and-out quirks.
Another factor is syllabic structure. Syllables in languages with simple structures typically (or exclusively) have the form CV (i.e., a consonant, then a vowel, as in "ba") or VC (as in "ab"). Slightly more complex forms include CVC ("bat") and CCV ("pla"). As the number of permissible combinations of vowels and consonants that may form a single syllable increases, so does the complexity. In English, it's not uncommon to see forms like CCCVCC (e.g., "splint").
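For readers who like things concrete, the C/V notation can be sketched in a few lines of code. This is a toy of my own devising, not anything from the research literature: it crudely treats every non-vowel letter as a consonant and ignores complications like "y," silent letters, and digraphs.

```python
def cv_pattern(syllable, vowels="aeiou"):
    """Map a syllable to its consonant/vowel (C/V) skeleton.

    Toy rule: any letter in `vowels` is V, everything else is C.
    """
    return "".join("V" if ch.lower() in vowels else "C" for ch in syllable)

for s in ["ba", "ab", "bat", "pla", "splint"]:
    print(s, cv_pattern(s))
# "splint" comes out as CCCVCC, the complex form mentioned above
```

Even this crude version makes the point: English permits far more skeleton shapes than a strict CV language would.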
Here's a figure (Seymour et al., 2003) showing the relative orthographic depth of 13 languages, as well as the complexity of their syllabic structure.
From Seymour et al (2003)
Orthographic depth correlates with incidence of dyslexia (e.g., Wolf et al, 1994) and with word and nonword reading in typically developing children (Seymour et al. 2003). Syllabic complexity correlates with word decoding (Seymour et al, 2003).
This highlights two points, in my mind.
First, when people trumpet the fact that Finland doesn't begin reading instruction until age 7, we should bear in mind that the task confronting Finnish children is easier than that confronting English-speaking children. The late start might be just fine for Finnish children; it's not obvious it would work well for English speakers.
Of course, a shallow orthography doesn't guarantee excellent reading performance, at least as measured by the PIRLS. Children in Greece, Italy, and Spain had mediocre scores, on average. Good instruction is obviously still important.
But good instruction is more difficult in languages with deep orthography, and that's the second point. The conclusion from the PIRLS should not just be "Early elementary teachers in the US are doing a good job with reading." It should be "Early elementary teachers in the US are doing a good job with reading despite teaching reading in a language that is difficult to learn."
Seymour, P. H. K., Aro, M., & Erskine, J. M. (2003). Foundation literacy acquisition in European orthographies. British Journal of Psychology, 94, 143-174.
Wolf, M., Pfeil, C., Lotz, R., & Biddle, K. (1994). Toward a more universal understanding of the developmental dyslexias: The contribution of orthographic factors. In V. W. Berninger (Ed.), The varieties of orthographic knowledge, 1: Theoretical and developmental issues. Neuropsychology and cognition, Vol. 8 (pp. 137-171). New York, NY: Kluwer.
Something happens to the "inner clocks" of teens. They don't go to sleep until later in the evening but still must wake up for school. Hence, many are sleep-deprived.
These common observations are borne out in research, as I summarize in an article on sleep and cognition in the latest American Educator.
What are the cognitive consequences of sleep deprivation?
It seems to affect executive function tasks such as working memory. In addition, it has an impact on new learning--sleep is important for a process called consolidation, whereby newly formed memories are made more stable. Sleep deprivation compromises consolidation of new learning (though surprisingly, that effect seems to be smaller or absent in young children).
Parents and teachers consistently report that the mood of sleep-deprived students is affected: they are more irritable, hyperactive or inattentive. Although this sounds like ADHD, lab studies of attention show little impact of sleep deprivation on formal measures of attention. This may be because students are able, for brief periods, to rally resources and perform well on a lab test. They may be less able to sustain attention for long periods of time when at home or at school and may be less motivated to do so in any event.
Perhaps most convincingly, the few studies that have examined academic performance based on school start times show better grades associated with later school start times. (You might think that if kids know they can sleep later, they might just stay up later. They do, a bit, but they still get more sleep overall.)
Although these effects are reasonably well established, the cognitive cost of sleep deprivation is less widespread and statistically smaller than I would have guessed. That may be because these effects are difficult to test experimentally. You have two choices, both with drawbacks:
1) you can do correlational studies that ask students how much they sleep each night (or better, get them to wear devices that provide a more objective measure of sleep) and then look for associations between sleep and cognitive measures or school outcomes. But this has the usual problem that one cannot draw causal conclusions from correlational data.
2) you can do a proper experiment by having students sleep less than they usually would, and see if their cognitive performance goes down as a consequence. But it's unethical to deprive students of significant amounts of sleep (and what parent would allow their child to take part in such a study?). And anyway, a night or two of severe sleep deprivation is not really what we think is going on here--we think it's months or years of milder deprivation.
So even though scientific studies may not indicate that sleep deprivation is a huge problem, I'm concerned that the data might be underestimating the effect. With that concern in mind, can anything be done to get teens to sleep more?
Believe it or not, telling teens "go to sleep" might help. Students with parent-set bedtimes do get more sleep on school nights than students without them. (They get the same amount of sleep on weekends, which somewhat addresses the concern that kids with this sort of parent differ in many ways from kids who don't.)
Another strategy is to maximize the "sleepy cues" near bedtime. The internal clock of teens is not just set for a later bedtime; it also provides weaker internal cues that they ought to be sleepy. Thus, teens are arguably more reliant on external cues that it's bedtime. So the student who is gaming at midnight and tells you "I'm playing games because I'm not sleepy" could be mistaken. It could be that he's not sleepy because he's playing games. Good cues would be a bedtime ritual that doesn't include action video games or movies in the few hours before bed, and that ends in a dark, quiet room at the same time each night.
So yes, this seems to be a case where good ol' common sense jibes with data. The best strategy we know of for better sleep is consistency.

References: All the studies alluded to (and more) appear in the article.
If you've followed this series of blog postings throughout the week, great. If you haven't, let me catch you up. Well, actually that's not very realistic.
But in two sentences: I wrote a blog entry about why it's so difficult to apply neuroscientific data to educational practice, while claiming towards the end that doing so was possible. This week I sought to specify how it's possible by describing five techniques that bring neuroscientific data to bear on education: Technique 1, Technique 2, Technique 3, Technique 4, Technique 5.
I realize that the posts on the specific techniques probably included more detail than would interest many readers of this blog, so let me highlight the takeaways:

1) It can be done. It's being done. There is a backlash cresting against neuropop--"check out these brain pictures taken during Rush Limbaugh's show!"--and this backlash is justified. (See Gary Marcus's nuanced take on this issue in the New Yorker.) But the fact that there is neuro-garbage in education does not mean we should dismiss brain science as an aid to the development of useful education practices.

2) These methods are indirect. (Technique 5, which concerns early identification of learning difficulties, is not, but neither does it hold the promise of intervention.)
Neuroscience will help education by informing behavioral theory, which we will then try to use to improve educational practice.
I draw two implications from the indirect path from neuroscience to education practice. First, when you hear someone say they’ve got a way to improve education that is “based on brain research” they are either (1) misinformed because it’s (at best) neuroscience that has informed a behavioral theory or (2) willfully trying to humbug you. I know certain people will find this implication insulting. Well, prove me wrong.
Second, the indirect path makes clear that this is complicated stuff. The pathway from neuroscientific data to educational practice will be crooked, moving back and forth between neural and behavioral theory. We are going to creep and lurch toward useful applications, not sprint. Brilliant work is being conducted by people like Stan Dehaene, Daniel Ansari, and others, but payoffs will require patience. As I've written about at length elsewhere, it's also important to bear in mind that not everything we care about in education is open to scientific investigation. I'm an enthusiastic booster of using scientific knowledge to improve educational practice. For that use to be effective, we need to bear in mind the limitations of science. The scientific method is very good for addressing some challenges in education and irrelevant to others. I think it's useful to be very explicit about which is which.
This is Day 5 of my week-long series of posts on the use of neuroscientific data in educational practice. There is another post today, summing things up. It's here.

Links to previous posts: Challenges in applying neuroscientific data to education.
Day 1: Basic architecture
Day 2: Single cell inspiration
Day 3: Reliable neuro-knowledge
Day 4: Confirm a Construct
Today's technique differs from the other four. Those concerned how you could use neuroscientific data to improve a behavioral theory, which you would then use to improve education outcomes. Neuroscientific data also shows promise in helping with the early identification of learning problems. The best-studied of these is dyslexia.
It would be very useful indeed to know with confidence which children will have difficulty learning to read. The earlier the intervention, the better. Traditionally, one would use behavioral measures like word attack, reading fluency, or phonological processing. Typically, a battery of tests would be used. (One intriguing new study suggests that a measure of visuo-spatial attention may be a good predictor of later reading difficulty: Franceschini et al, 2012.)

But there is evidence that structural differences in the brains of children who will later have trouble learning to read are present before reading onset (Raschle, Chang & Gaab, 2011; Raschle, Zuk & Gaab, 2012). If dyslexia has a neural basis present before reading instruction begins, might you be able to identify children who will very likely have significant trouble with reading before instruction ever begins? A number of laboratories have been working on this problem, and progress is being made.
These researchers are not looking to toss out behavioral measures--they are looking to supplement them. The more successful of these efforts (e.g., Hoeft et al., 2007) show that behavioral measures predict reading problems, neuroscientific measures predict reading problems, and using both types of data provides better prediction than either measure alone. In other words, the neuroscientific data is capturing information not captured by the behavioral measures, and vice versa.
This is not an easy problem to solve, but progress seems likely.
Franceschini, S., Gori, S., Ruffino, M., Pedrolli, K., & Facoetti, A. (2012). A causal link between visual spatial attention and reading acquisition. Current Biology, 22, 814-819.
Hoeft, F., Ueno, T., Reiss, A. L., Meyler, A., Whitfield-Gabrieli, S., Glover, G. H., ... & Gabrieli, J. D. (2007). Prediction of children's reading skills using behavioral, functional, and structural neuroimaging measures. Behavioral Neuroscience
Raschle, N. M., Chang, M., & Gaab, N. (2011). Structural brain alterations associated with dyslexia predate reading onset. Neuroimage
Raschle, N. M., Zuk, J., & Gaab, N. (2012). Functional characteristics of developmental dyslexia in left-hemispheric posterior brain regions predate reading onset. Proceedings of the National Academy of Sciences
For example, suppose I notice that if someone reads a phone number and then is distracted, he can remember the phone number for about 30 seconds or less. If he's distracted longer, the phone number is forgotten.
I suggest that there is a mental structure called a short-term memory system, which can store information for about 30 seconds. Short-term memory is an abstract construct; it's a proposed mechanism of the mind, which I think will help explain behavior.
Now it's clear that I've simply invented this idea of a short-term memory and that seems like a problem. "Oh people remember things for 30 seconds? That must mean you've got a remember-for-30-seconds mechanism in your mind!" I need something better to persuade people (and myself) that this entity actually helps explain how people think.
But now suppose I use functional brain imaging during that 30 seconds. I test 20 people and find that the same network of three brain areas is always active. Haven't I now seen short term memory in action? And doesn't this support my theory?
No and no.
To understand why not, suppose instead that I proposed a theory that there is a "cafeteria navigation" module in the brain whose sole purpose is to help you select items when you're in a cafeteria. And suppose I conduct an elaborate experiment where people wear virtual reality goggles and see a virtual cafeteria that they navigate while I image their brains. Lo and behold, there is a network of six brain areas that is active in every one of my subjects during this task! I've found the cafeteria navigation system! It must be real!
Here's the problem. Finding activation is not interesting because mental activity is going to cause brain activity somewhere. Some part of the brain is always going to "light up" during a task. It proves nothing.
A more reasonable interpretation of my cafeteria study is this: people have brain systems that support vision, decision making, movement, spatial processing, etc. When given a complex task (e.g., cafeteria navigation) they recruit the appropriate systems to get the job done. The "cafeteria navigation system" is a dumb theory because it applies to just one task.
How do we know what the real brain systems are then, if "finding" them via brain imaging doesn't work?
Well, if we think systems ought to support lots of different tasks, that's a clue. This is a general desideratum of science, not particular to psychology. It's okay to make up theoretical entities that can't be observed if they can account for a lot of data.
In the most famous example, Newton readily admitted that he didn't know what gravity was. And gravity was very peculiar: it was a force that purportedly acted between two objects instantaneously, at great distances, with nothing intervening. Newton's reply was that, peculiar as the entity might be, it was a crucial part of a theory that accounted very well for an enormous amount of data.
Likewise, it's legitimate for me to propose something like "short term memory" if it's part of a theory that accounts for a lot of data. But the mere fact that some part of the brain is active during what I claim to be a task tapping short-term memory doesn't help my case. I need to show that "short term memory" helps to account for data.
So can brain imaging do anything to help verify that a theoretical construct is useful? Yes. It can serve as a dependent measure.
Here's a problem I face in persuading you that my proposed construct, short-term memory, is legitimate. I need to show that short-term memory participates in lots of tasks (so it's not like the cafeteria navigation task). But how do I know that short-term memory is at work during a task? Presumably there would be some sign in your behavior that it's at work. But in addition, if I've previously shown that three brain areas, A, B, and C, support short-term memory, then A, B, and C ought to be active during any task that requires short-term memory. Now I have a way of verifying that short-term memory contributes to a task, and that's useful to me, because one of my goals is to show that it's important in many different tasks.
Further, I can use this fact (A, B, and C will be active) to show that my theory of short-term memory is well developed. I can devise two tasks that look very very similar, but that I (with my terrific theory in hand) can predict differ in the extent to which they tap short-term memory. So one task will make the three areas active and the other task won't even though the tasks look very similar. Or I can devise two tasks that look wildly different but that my theory predicts both tap short-term memory and so will show overlapping activation in areas A, B, and C.
Tomorrow: A highly practical application, and the big wrap-up.
When most people are asked certain questions (e.g., "What shape are Snoopy's ears?" or "In which hand does the Statue of Liberty hold her torch?") they report that they answer the question by creating an image in their mind's eye of Snoopy or the Statue, and then inspect the image.
During the 1970's there was a lively (often acrimonious) debate as to the mental representation that allows people to answer this sort of question. Some researchers (most visibly, Stephen Kosslyn) argued that we use mental representations that are inherently spatial, and that visual mental imagery overlaps considerably with visual perception.

Other researchers (most visibly, Zenon Pylyshyn) argued that the feeling of inspecting an image may be real, but the inspection does not support your answering the question. The mental representation that allows you to say "the torch is in her right hand" is actually linguistic, and therefore has a lot in common with the language system.
Although these two theories seem radically different, it was actually quite difficult (technically it was impossible--Anderson, 1978) to distinguish between them via behavioral experiments only.
But even 30 years ago, we knew enough about the brain to gather neuroscientific data that helped to decide between these competing theories. We knew some of the key brain areas supporting visual perception (red circle below) and we knew some of those supporting language representations (green circle).
So if imagery is like perception, damage to the red-circle areas should lead not only to problems with vision, but problems with imagery. But if imagery is based on linguistic representations, we'd expect damage to the green-circle areas to compromise imagery. A 1988 review of existing neurological cases (Farah, 1988) supported the former theory, not the latter.
Functional brain imaging makes corresponding predictions about which parts of the brain will be active during imagery tasks, and again, the data supported the theory positing that imagery relies on representations similar to those supporting visual perception (e.g., Le Bihan et al, 1993).
Can this technique be applied to education? Yes.
For example, Shaywitz et al (1998) imaged the brains of typical readers and readers diagnosed with dyslexia during tasks of increasing phonological complexity. A number of differences in activation emerged, as shown below (click for larger image).
The numbers refer to a system of distinguishing cortical areas--they are called Brodmann's areas. Ant = anterior (front of the brain). Post = posterior (back of the brain).
The lighter areas correspond to less activation, the darker areas to greater activation. As phonological processing gets more difficult, typical readers show increased activation in a network of areas at the back of the brain--dyslexic readers show a smaller increase in those areas. They also show a greater increase toward the front of the brain.
What's especially interesting from our perspective today is that the researchers interpreted these data in light of other data leading them to suggest that they already knew what certain brain areas do.
Thus, the authors highlighted the abnormal activity in area 39 (the angular gyrus) which they suggested is crucial for translating visual input into sound; they pointed to other research showing that people who were typical readers suddenly had trouble reading if that brain area suffered damage.
Thus another way we can use neuroscientific data is to leverage what we already know about the brain to test behavioral theories.
We have to note that it's also pretty easy to get fooled with this method. Obviously, if we think we know what a brain area supports but we're wrong, we'll draw the wrong conclusion. Then too, we might be right about what the brain area supports, but it may support more than one behavior.
For example, there are good data showing that the amygdala is important in the processing of some emotions, especially fear. But it would be hasty (and wrong) to suppose that every time the amygdala is active, it's because the person is experiencing strong emotion. Amygdala activation is observed in many circumstances, and it very likely participates in a number of functions.
So using our knowledge of the brain to inform behavioral theories is great, but it's easy to screw up.
Tomorrow: the most common misconception about brain imaging and how to correct it.
Anderson, J. R. (1978). Arguments concerning representation for mental imagery. Psychological Review, 85, 249-277.
Farah, M. J. (1988). Is visual mental imagery really visual? Overlooked evidence from neuropsychology. Psychological Review, 95, 307-317.
Le Bihan, D., Turner, R., Zeffiro, T. A., Cuenod, C. A., Jezzard, P., & Bonnerot, V. (1993). Activation of human primary visual cortex during visual recall: A magnetic resonance imaging study. Proceedings of the National Academy of Sciences, 90, 11802-11805.
Shaywitz, S. E., et al. (1998). Functional disruption in the organization of the brain for reading in dyslexia. Proceedings of the National Academy of Sciences, 95, 2636-2641.
This is the second of my week-long series of posts about how neuroscientific data might be used in education. (First post here. Last week's complaints about neuro-garbage in education products here.)
Single cell recording allows an investigator to record the activity of an individual neuron. Different techniques are available, but most commonly a rat (or cat, or other non-human animal) will undergo surgery under anesthesia that allows an anchoring device to be affixed to the skull. The device serves as a guide for a microelectrode to be placed in the brain region of interest. The microelectrode measures changes in electrical potential just outside a neuron--changes associated with an action potential. In other words, it measures each time the neuron "fires." When the animal recovers from surgery, researchers can "eavesdrop" on the activity of individual neurons while the animal is awake and behaving.
The goal is to figure out what makes the neuron fire. The technique is to expose the animal to many different situations, and to note what makes the neuron you're recording from respond maximally.
For example, you might record from a cell in the primary visual cortex (left) and present a bunch of stimuli on a screen: a picture of a human face, a cat's face, a cat's face in profile, a triangle, a circle, a bicycle, a car, and so forth.
David Hubel and Torsten Wiesel performed exactly this experiment in the 1950s and reported that cells in the primary visual cortex of cats responded maximally to simple lines of particular orientations.
Cognitive psychology was just getting going at this time, and some researchers (e.g., Oliver Selfridge) drew inspiration from these findings. They thought "hmm, here we are, trying to figure out a basic unit of representation for vision. . . the "bits" out of which more complex visual experience will be built. Hubel & Wiesel have good evidence that the basic "bits" for the brain are lines. So maybe we should try to model vision using lines."
A cartoon of a visual processing model. Processing moves from left to right. The letter R is the thing in the environment, and the image demon would be something like the retinal image. That is broken up into features, analogous to Hubel & Wiesel's simple shapes.
The result was a series of models in the early 1960s that used lines as the starting point for complex visual processing. So another method of integrating neuroscientific data into behavioral theory is using data from single-cell recording studies to make an educated guess as to what the brain codes, and then using that guess as the foundational representation in a cognitive model.

How does this relate to education? This technique is not often used, but one example might be John Stein's (2001) magnocellular theory of dyslexia, which puts cells in the magnocellular layer of the lateral geniculate nucleus of the thalamus (click here for image) in a central role in dyslexia. These cells are crucial for the timing of rapid events, including (Stein argues) the stability of eye fixation when you move your eyes (including when you move your eyes as you read); hence, kids with dyslexia are more likely to have the image of letters slip out of the field of view, as well as other problems. In addition, they will have deficits in hearing that are also traceable to problems with precise timing, and these hearing problems also affect reading.
Stein's work has its roots in single cell recording work that first distinguished the role of cells in the magnocellular layer from those in the parvocellular layer (e.g., Derrington, 1984).
Yesterday we saw that neuroscientific data might provide a researcher with clues about the large-scale architecture supporting a cognitive process. Today we have moved to a much finer level of detail; instead of the overall plan, neuroscientific data provided hints about the nature of the building blocks.

Tomorrow, Method 3.

References:
Derrington, A. M. (1984). Spatial and temporal contrast sensitivities of neurons in lateral geniculate nucleus of macaque. Journal of Physiology, 219-240.
Selfridge, O. G., & Neisser, U. (1960). Pattern recognition by machine. Scientific American, 203, 60-68.
Stein, J. (2001). The magnocellular theory of developmental dyslexia. Dyslexia, 7, 12-36.
Neuroscience--especially human neuroscience, and more especially human functional brain imaging--has had quite a run in the last twenty years. In the first decade the advances were known mostly to scientists. In the last ten years there have been plenty of articles in the popular press featuring brain images. Many of these articles have been breathless and silly. Some backlash was inevitable, and one of the more potent examples was a recent op-ed in the New York Times.
Still, as Gary Marcus pointed out in a nice blog piece, we would be wise not to throw the baby out with the bath water.
In that vein, I am following up on a piece I wrote last week, in which I argued that much of the work on this topic in education is neuro-garbage. Most of the piece was devoted to explaining why it's difficult to apply neuroscience to education. (I left it to the reader to infer that it's correspondingly easy to be glib.)
Toward the end of that piece I suggested that neuroscience can and has been usefully applied to problems in education. This week I'll describe how. I'll tackle one method each day this week. I'll keep things as simple as possible, but fasten your seatbelt if you feel the need.
Neuroscience can give researchers clues about the basic architecture of a cognitive process. It can show that a cognitive process might be more complex than we would have otherwise guessed, or that it's more simple.
Consider the figure below from Dehaene et al (2003) (click it for a larger version)
This figure summarizes a great deal of work indicating that there are three representations of number in the brain: a core quantity system (red), numbers in verbal form (green), and attentional orientation on the number line.
Suppose I am an educational psychologist, trying to figure out how children develop concepts of number, and how to coordinate the teaching of early mathematics with these concepts. I must have a theory of how number is represented in the mind. It's possible--actually, it's likely--that I would think of number as one thing, that children have one concept of the number five, for example. But this neuroscientific work indicates that the brain might use three representations of number. So it might be wise for me to use three representations in my cognitive theory of mathematics (which will support my educational theory).
In this example, there is greater diversity (three representations) where we might have guessed that we'd see simplicity (one representation). The opposite may also happen.
In one example, neuroscientific data were useful in interpreting variations in dyslexia across languages.
One of the peculiarities of dyslexia is that some key symptoms vary across different languages. For example, people with dyslexia usually show a large disparity between visual word recognition and IQ. But that disparity tends to be much larger in languages in which the spelling-sound correspondence is often inconsistent (e.g., English) than in languages where it's more consistent (e.g., Italian).
This pattern raises the question: is what we're calling "dyslexia" really the same thing in English and Italian? Maybe reading difficulties are so intertwined with the language you're learning to read that it doesn't make sense to call problems by the same label when they apply to English vs. Italian. Or maybe the problems kids develop in English-speaking vs. Italian-speaking countries are due to differences in the way reading tends to be taught in different countries.
Eraldo Paulesu and his colleagues (2001) used brain imaging data to argue that dyslexia is the same disorder in readers of different languages. They showed that the same brain region in left temporal cortex shows reduced activation during reading in French, Italian, and British readers who have been diagnosed with dyslexia.
Hence in this case neuroscientific data has shown us that there is simplicity (one reading problem) where we could have reasonably thought there was greater diversity (different reading problems across languages).
EDIT: It's worth adding that anatomic separability (or overlap) doesn't guarantee cognitive separability or identity. But it's an indicator.
Tomorrow: Method 2.
Dehaene, S., Piazza, M., Pinel, P., & Cohen, L. (2003). Three parietal circuits for number processing. Cognitive Neuropsychology, 20, 487-506.
Paulesu, E., Demonet, J.-F., Fazio, F., McCrory, E., Chanoine, V., Brunswick, N., Cappa, S. F., Cossu, G., Habib, M., Frith, C. D., & Frith, U. (2001). Dyslexia: Cultural diversity and biological unity. Science, 291, 2165-2167.