A blog posting
over at Schools Matter @ The Chalk Face has gathered a lot of interest--78 comments, many of them outraged. The New York State Education Dept. has a website that is meant to help teachers prepare for the Common Core Standards.
Author Chris Cerrone posted a bit of a 1st grade curriculum module on early civilizations. Here it is:
Cerrone asked primary grade educators to weigh in: "what do you think of the vocabulary contained in this unit of study?"
The responses in the 78 comments were nearly uniformly negative. As you might expect from that volume of commentary, the criticisms were wide-ranging, many of them directed more generally at standardized testing and at the idea of the CCSS themselves.
But a lot of the commentary concerned cognitive development, and I want to focus there. This comment was typical.
Photo from milwaukee-montessori.org
There is an important idea at the heart of this criticism: developmental stages. This commenter specifically invokes Piaget, but you don't have to be a Piagetian to think that stages are a good way to think about children's thinking. Stage theories hold that children's thinking is relatively stable, but then undergoes a big shift in a relatively brief time (say, a few months) whereupon it stabilizes again.
So lessons would be developmentally inappropriate if they demanded a type of thinking that the child was simply incapable of, given his developmental stage.
I have argued in some detail
that stage theories have two major problems: first, data from the last twenty years or so make development look continuous, rather than occurring in discrete stages. Second, children's cognition is fairly variable day to day, even when the same child tries the same task. I have argued elsewhere that trying to take a psychological finding and using it to draw strong conclusions about instruction--including what children are, in principle, ready for--is fraught with problems.
That is all the more true when one uses a psychological theory rather than an experimental finding. So if Piaget will not be our guide as to what 1st graders are ready for, what should be? The experience of early elementary educators, of course, and some of the people commenting on the blog posting are or were first grade teachers.
And almost unanimously, they thought this material was inappropriate for first graders. (Some thought kids shouldn't be learning about other religions at this age. No argument there; that's a matter of one's values. I'm only talking about what kids can cognitively handle.) But if we adopt a proof-of-the-pudding-is-in-the-eating criterion, lessons on ancient civilizations are fine, because they are in use and children are learning. The material shown above is part of the Core Knowledge sequence, around for more than a decade and used by over a thousand schools. (NB: I'm on the Board of the Core Knowledge Foundation.) And Core Knowledge is not alone. Another curriculum has had first-graders learn about ancient civilizations not for a decade, but for about a century: Montessori.
(NB again: my children experienced these lessons at their school, and my wife teaches them--she's an early elementary Montessori teacher.) Montessori schools teach the same "Five Great Lessons"
at the beginning of first, second, and third grades. They are
- The history of the universe and earth
- The coming of life
- The origins of human beings
- The history of signs and writing
- The story of numbers and mathematics
Naturally, these lessons are presented in ways that make sense to young children, but they are far from devoid of content. Montessori educators see them as the foundation and the wellspring of interest for everything to come: biology, geology, mathematics, reading, writing, chemistry and so on.
If it seems impossible or highly unlikely to you that 6-year-olds could really get anything out of such lessons, I'll ask you to consider this: our understanding of any new concept is always incomplete.
For example, how do children learn that some people they hear about (Peter Pan) are made up and never lived, whereas others (the Pharaohs) were real? Not by an inevitable process of neurological maturation that makes their brain "ready" for this information, whereupon they master it quickly. They learn it bit by bit, in fits and starts, sometimes seeming to get it, other times not.
And you can't always wait until children are "ready." Think about mathematics. Children are born understanding numerosity, but they understand it on a logarithmic scale--the difference between five and ten is larger than the difference between 70 and 75. To understand elementary mathematics they must learn to think of numbers on a linear scale. In this case, teachers have to undo Nature. And if you wait until the child is "developmentally ready" to understand numbers this way, you'll never teach them mathematics. It will never happen.
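The log-versus-linear point can be made concrete with a little arithmetic. Here is a minimal Python sketch; the `log_distance` helper is my own illustration of the idea, not anything from the research:

```python
import math

def log_distance(a, b):
    """Perceived distance between two quantities on a logarithmic scale."""
    return abs(math.log(b) - math.log(a))

# Linearly, 5->10 and 70->75 are the same distance apart (5 units).
# Logarithmically, 5->10 is a doubling and feels far larger:
print(log_distance(5, 10))   # ~0.69
print(log_distance(70, 75))  # ~0.07
```

On this scale the gap between 5 and 10 is roughly ten times the gap between 70 and 75, even though both are "5 apart" linearly.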
In sum, I don't think developmental psychology is a good guide to what children should learn; it provides some help in thinking about how
children learn. The best guide to "what" is what children know now, and where you want their learning to head.
My own learning style is Gangnam
A teacher from the UK has just written to me asking for a bit of clarification. (EDIT: the email came from Sue Cowley, who is actually a teacher trainer.) She says that some people are taking my writing on the experiments that have tested predictions of learning styles theories (see here) as implying that teachers ought not to use these theories to inform their practice.
Her reading of what I've written on the subject differs: she thinks I'm suggesting that although the scientific backing for learning styles is absent, teachers may still find the idea useful in the classroom.
The larger issue--the relationship of basic science to practice--is complex enough that I thought it was worth writing a book
about it. But I'll describe one important aspect of the problem here.
There are two methods by which one might use learning styles theories to inspire one's practice. The way that scientific evidence bears on these two methods is radically different.
Method 1: Scientific evidence on children's learning is consistent with how I teach.
Teachers inevitably have a theory--implicit or explicit--of how children learn. This theory influences choices teachers make in their practice. If you believe that science provides a good way to develop and update your theory of how children learn, then the harmony between this theory and your practice is one way that you build your own confidence that you're teaching effectively. (It is not, of course, the only source of evidence teachers would consider.)
It would seem, then, that because learning styles theories have no scientific support, we would conclude that practice meant to be consistent with learning styles theories will inevitably be bad practice.
It's not that simple, however. "Inevitably" is too strong. Scientific theory and practice are just not that tightly linked.
It's possible to have effective practices motivated by a theory that lacks scientific support. For example, certain acupuncture treatments were initially motivated by theories entailing qi flowing through meridians--energy channels for which scientific evidence is lacking. Still, some treatments motivated by the theory are known to be effective in pain management.
But happy accidents like acupuncture are going to be much rarer than cases in which the wrong theory leads to practices that are either a waste of time or are actively bad. As long as we're using time-worn medical examples, let's not forget the theory of four humors.
Bottom line for Method 1: learning styles theories are not accurate representations of how children learn. Although they are certainly not guaranteed to lead to bad practice, using them as a guide is more likely to degrade practice than to improve it.
Method 2: Learning styles as inspiration for practice, not evidence to justify practice.
In talking with teachers, I think this second method is probably more common. Teachers treat learning styles theories not as sacred truth about how children learn, but as a way to prime the creativity pump, to think about new angles on lesson plans.
Scientific theory is not the only source of inspiration for classroom practice. Any theory (or more generally, anything) can be a source of inspiration.
What's crucial is that the inspirational source bears no evidential status for the practice.
In the case of learning styles, a teacher using this method does not say to himself, "And I'll do this because then I'm appealing to the learning styles of all my students," even if the "this" was an idea generated by learning styles. The evidence that "this" is a good idea comes from professional judgment, or because a respected colleague reported that she found it effective, or whatever.
Analogously, I may frequently think about Disneyland when planning lessons simply because I think Disneyland is cool and I believe I often get engaging, useful ideas of classroom activities when I think about Disneyland. Disneyland is useful to me, but it doesn't represent how kids learn.
Bottom line for Method 2: Learning styles theories might serve as an inspiration for practice, but they hold no special status as such; anything can inspire practice.
The danger, of course, lies in confusing these two methods. It would never occur to me that a Disneyland-inspired lesson is a good idea because Disneyland represents how kids think. But that slip-of-the-mind might happen with learning styles theories and indeed, it seems to with some regularity.
Ben Goldacre is a British physician and academic, and the author of Bad Science, an exposé of bad medical practice based on wrong-headed science. For the last decade he has written a terrific column by the same name for the Guardian. Goldacre has recently turned his critical scientific eye to educational practices in Britain. He was asked by the British Department for Education to comment on the use of scientific data in education and on the current state of affairs in Britain. You can download the report here.
So what does Goldacre say? He offers an analogy of education to medicine; the former can benefit from the application of scientific methods, just as the latter has. Goldacre touts the potential of randomised controlled trials (RCTs). You take a group of students and administer an intervention (a new instructional method for long division, say) to one group and not to another. Then you see how each group of students did. Goldacre also speculates on what institutions would need to do to make the British education system as a whole more research-minded. He names two significant changes:
- There would need to be an institution that communicates the findings of scientific research (similar to the American "What Works Clearinghouse").
- British teachers would need a better appreciation of scientific research, so that they would understand why a particular practice was touted as superior and could themselves evaluate the evidence for the claim.
I'm a booster of science in education
As someone who has written shorter treatments of the role that scientific research might play in education, I'm very excited that Goldacre has made this thoughtful and spirited contribution. I offer no criticisms of what Goldacre suggests, but would like to add three points.
First,
I agree with Goldacre that randomized trials allow the strongest conclusions. But I don't think that we should emphasize RCTs to the exclusion of all other sources of data. After all, if we continue with Goldacre's analogy to medicine, I think he would agree that epidemiology has proven useful. As a matter of tactics, note that the What Works Clearinghouse emphasized RCTs to the near exclusion of all other types of evidence, and that came to be seen as a problem.
If you exclude other types of studies, the available data will likely be thin. RCTs are simply hard to pull off: they are expensive, and they require permission from lots of people.
Hence, the What Works Clearinghouse ended up being agnostic about many interventions--"no randomized controlled trials yet." Its impact has been minimal. Other sources of data can be useful: smaller-scale studies and, especially, basic scientific work that bears on the underpinnings of an intervention. We must also remember that each RCT--strictly interpreted--offers pretty narrow information: method A is better than method B (for these kids, as implemented by these teachers, etc.). Allowing other sources of data into the picture potentially offers a richer interpretation. As a simple example, shouldn't laboratory studies showing the importance of phonemic awareness influence our interpretation of RCTs of preschool interventions that teach phonemic awareness skills?
Second, basic scientific knowledge gleaned from cognitive and developmental psychology (and other fields) can not only help us interpret the results of randomized trials; that knowledge can be useful to teachers on its own. Just as a physician uses her knowledge of human physiology to diagnose a case, a teacher can use her knowledge of cognition to "diagnose" how best to teach a particular concept to a particular child.
I don't know about Britain, but this information is not taught in most American schools of education. I wrote a book about cognitive principles that might apply to education. The most common remark I hear from teachers is surprise (and often anger) that they were not taught these principles when they trained. Elsewhere I've suggested that we need not just a "what works" clearinghouse to evaluate interventions, but a "what's known" clearinghouse for basic scientific knowledge that might apply to education.
Third,
I'm uneasy about the medicine analogy. It too easily leads to the perception that science aims to prescribe what teachers must do--that science will identify one set of "best practices" which all must follow. Goldacre makes clear on the very first page of the report that's NOT what he's suggesting, but non-doctors tend to see medicine this way: I go to my doctor, she diagnoses what's wrong, and there is a standard way (established by scientific method) to treat the disease.
That perception may be in error, but I think it's common.
I've suggested a different analogy: architecture. When building a house, an architect must respect certain basic facts set out by science. Physics and materials science will loom large for the architect; for educators it might be psychology, sociology, and so on. The rules represent limiting conditions, but so long as you stay within those boundaries there are many ways to get it right. Just as physics doesn't tell the architect what the house must look like, so too cognitive psychology doesn't tell teachers how they must teach.
RCTs play a different role. They provide proof that a standard solution to a common problem is useful. For example, architects routinely face the problem of ensuring that a wall doesn't collapse when a large window is placed in it, and there are standard solutions to this problem. Likewise, educators face common problems, and RCTs hold the promise of providing proven solutions. Just as the architect doesn't have to use any of the standard methods, the teacher needn't use a method proven by an RCT. But the architect needs to be sure that the wall stays up, and the teacher needs to be sure that the child learns.
I made one of my garage-band-quality videos
on this topic.
There's more to this topic--what it will mean to train teachers to evaluate scientific evidence, the role of schools of education. Indeed, there's more in Goldacre's report and I urge you to read it. Longer term, I urge you to consider why we wouldn't
want better use of science in educational practice.
The importance of a good relationship between teacher and student is no surprise. More surprising is that the "human touch" is so powerful it can improve computer-based learning. In a series of ingenious yet simple experiments, Rich Mayer and Scott DaPra showed that students learn better from an onscreen slide show when it is accompanied by an onscreen avatar that uses social cues.
Eighty-eight college students watched a 4-minute Powerpoint slide show that explained how a solar cell converts sunlight to electricity. It consisted of 11 slides and a voice-over explanation.
Some subjects saw an avatar that used a full complement of social cues (gesturing, changing posture, facial expressions, changes in eye gaze, and lip movements synchronized to speech) meant to direct student attention to relevant features of the slide show.
Other subjects saw an avatar that maintained the same posture, maintained eye gaze straight ahead, and did not move (except for lip movements synchronized to speech).
A third group saw no avatar at all, but just saw the slides and listened to the narration.
All subjects were later tested with fact-based recall questions and transfer questions (e.g., "How could you increase the electrical output of a solar cell?") meant to test subjects' ability to apply their knowledge to new situations.
There was no difference among the three groups on the retention test, but there was a sizable advantage (d = .90) for the high embodiment subjects on the transfer test. (The low-embodiment and no-avatar groups did not differ.)
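For readers unfamiliar with the d = .90 statistic: Cohen's d is the difference between group means divided by the pooled standard deviation, so d = .90 means the high-embodiment group scored nearly a full standard deviation higher. A small Python sketch of the computation, using invented scores rather than the study's data:

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference: (mean_a - mean_b) / pooled SD."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Invented transfer-test scores, for illustration only:
high_embodiment = [8, 9, 7, 9, 8, 10, 9, 8]
low_embodiment = [6, 7, 5, 7, 6, 8, 7, 6]
print(round(cohens_d(high_embodiment, low_embodiment), 2))
```

By the usual rough benchmarks, d around 0.2 is a small effect, 0.5 medium, and 0.8 or above large, which is why a d of .90 counts as a sizable advantage.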
A second experiment showed that the effect was only obtained when a human voice was used; the avatar did not boost learning when synchronized to a machine voice.
The experimenters emphasized the social aspect of the learning situation: students process the slide show differently because the avatar is "human enough" that they treat the interaction as they would one with a real person. This interpretation seems especially plausible in light of the second experiment: all of the more cognitive cues (e.g., the shifts in the avatar's eye gaze prompting shifts in learners' attention) were still present in the machine-voice condition, yet there was no advantage to learners.
There is something special about learning from another person. Surprisingly, that other person can be an avatar.
Mayer, R. E. & DaPra, C. S. (2012). An embodiment effect in computer-based learning with animated pedagogical agents. Journal of Experimental Psychology: Applied, 18, 239-252.
In primary school, a student's relationship with his or her teacher has a significant impact on the student's academic progress. Students with positive relationships are more engaged and learn more (e.g., Hughes et al, 2008). In addition, teachers are more likely to have negative relationships with boys than with girls (e.g., Hamre & Pianta, 2001).
Previous research has not, however, accounted for the gender of the teacher. Perhaps conflict is more likely when teacher and student are of different sexes, and because there are more female than male teachers, we end up concluding that boys tend not to get along with their teachers.
A new study (Spilt, Koomen & Jak, in press) indicates that's not the case.
This appears to be the first large-scale study that examined teacher-student relationships in primary school while accounting for the sex of teachers.
Teachers completed questionnaires about their relationships with their students. The questionnaires measured three constructs:
- Closeness: Warmth and open communication. Sample item: "If upset, this child will seek comfort from me."
- Conflict: Negative interactions, need for the teacher to correct student behavior. Sample item: "This child remains angry or resentful after being disciplined."
- Dependency: Clinginess on the part of the student. Sample item: "This child asks for my help when he or she really does not need help."
All in all, the data did not support the idea that boys connect emotionally with male teachers.
For Closeness, female teachers generally felt closer to their students than male teachers did. Male teachers felt equally close to boys and girls, but female teachers felt closer to girls than to boys.
For Conflict, female teachers reported less conflict than male teachers did. Both male and female teachers reported less conflict with girls than with boys.
For Dependency, female teachers reported less dependency than male teachers did. There were no differences between boys and girls on this measure.
This research has been difficult to conduct, simply because most samples don't include enough male elementary teachers to support a meaningful analysis. This is just one study, but the results indicate that all teachers--male and female--have a tougher time with boys. More conflictual relationships are reported with boys than with girls, and female teachers report less close relationships with boys.
Hamre, B. K., & Pianta, R. C. (2001). Early teacher–child relationships and the trajectory of children's school outcomes through eighth grade. Child Development, 72, 625–638.
Hughes, J. N., Luo, W., Kwok, O. M., & Loyd, L. K. (2008). Teacher–student support, effortful engagement, and achievement: A 3-year longitudinal study. Journal of Educational Psychology, 100, 1–14.
Spilt, J. L., Koomen, H. M. Y., & Jak, S. (in press). Are boys better off with male and girls with female teachers? A multilevel investigation of measurement invariance and gender match in teacher-student relationship quality. Journal of School Psychology.