The results of this experiment probably won't surprise you. What surprised me was the fact that we didn't already have data like this in hand.
The researchers (Sadler et al., 2013) tested 181 seventh- and eighth-grade science teachers on their knowledge of physical science in the fall, at mid-year, and at year's end. They also tested their students (about 9,500) with the exact same instrument.
Each was a twenty-item multiple-choice test. For 12 of the items, the wrong answers tapped a common misconception that previous research showed middle-schoolers often hold. For example, one common misconception is that burning produces no invisible gases; one of the questions tapped that idea.
But the researchers didn't just ask the teachers to pick the right answer. They also asked teachers to pick the answer that they thought their students would pick.
What makes this study interesting is that it tests teacher subject-matter knowledge directly (instead of using a proxy like courses taken, or degrees) and that it directly measures one aspect of pedagogical content knowledge, namely, student misconceptions. The dependent measure of interest is student gain scores in content knowledge over the course of the year.
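A gain score is simply each student's post-test score minus his or her pre-test score. As a quick illustration (with invented numbers, not the study's data), in Python:

```python
def gain_scores(pre, post):
    """Per-student gain: post-test score minus pre-test score."""
    return [b - a for a, b in zip(pre, post)]

# Hypothetical scores (out of 20) for five students
pre = [8, 11, 9, 14, 10]
post = [12, 13, 15, 16, 11]
gains = gain_scores(pre, post)        # [4, 2, 6, 2, 1]
mean_gain = sum(gains) / len(gains)   # 3.0
```

Averaging gains across students (or across items, as these researchers did) gives a simple measure of how much was learned over the year.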
Teachers' content knowledge was good, but not perfect: they got about 84% of the questions right.
Their knowledge of student misconceptions was not as good. Teachers correctly identified just 43% of those. (And their students had, as in previous studies, selected those incorrect items in high numbers.)
And what type of teacher knowledge matters to student learning? It turns out to interact with past student achievement, as measured by standard math and reading tests.
The graph shows gains in student knowledge, separated by items for which teachers have (or lack) various types of knowledge. Filled circles are for students who scored well on a math and reading test (high achievers), and open circles are for students who scored poorly (low achievers).
Look first at learning for concepts without a common misconception. If teachers have subject matter knowledge (SMK in the graph) students learn the concept better. In fact, low-achieving students learned nothing about a concept if teachers didn't know the concept themselves. High-achieving students did. The researchers speculate they may have learned the content from a textbook or other source.
For the strong misconception items, the low-achieving students learned very little, whatever the teacher knowledge. For high-achieving students, knowledge mattered, and they were most likely to learn when their teacher had both subject-matter knowledge and knew the misconceptions their students likely held (KoSM in the graph).
So the overall message is not that surprising. Students learn more when their teachers know the content, and when they can anticipate student misconceptions.
Somewhat more surprising (and saddening), low-achieving students are especially vulnerable when teachers lack knowledge. High-achieving students are more resilient.
There are limitations to this study, the most notable being that the sample is far from random (teachers were volunteers), and that the test was zero-stakes for all.
The strength was the direct measure of both types of knowledge, and that the researchers could examine the relationship of knowledge to performance at the level of individual items. One hopes we'll see more studies using this type of design.
Sadler, P. M., Sonnert, G., Coyle, H. P., Cook-Smith, N., & Miller, J. L. (2013). The influence of teachers' knowledge on student learning in middle school physical science classrooms. American Educational Research Journal, 50, 1020-1049.
In Why Don't Students Like School? I pointed out that cognitive challenge is engaging if it's at the right level of difficulty, but boring if it's too easy or too hard. It sounds, then, like it would make sense to organize students into different classes based on their prior achievement. It might make sense cognitively, but the literature shows that
such a practice leads to bad outcomes for the kids in lower tracks. Those classes tend to have less demanding curricula and lower expectations for achievement (e.g., Brunello & Checchi, 2007). Further, assignment to tracks is often biased by race or social class (e.g., Maaz et al., 2008).
What tracking does to students' self-perceptions has been less clear. A new international study (Chmielewski et al., 2013) used the 2003 PISA data set to examine the association between different types of tracking and students' mathematics self-concept. The authors compared:
- Between-school streaming, in which students with different levels of achievement are sent to different schools.
- Within-school streaming, in which students with different levels of achievement are put in different sequences of courses for all subjects.
- Course-by-course tracking, in which students are assigned to more or less advanced courses within a school, depending on their achievement in a particular subject.
Controlling for individual achievement and the average achievement of the track or stream, the researchers found that course-by-course tracking is associated with worse self-perceptions among low-achieving students, but streaming is associated with better self-perceptions. This figure shows the difference between the self-perceptions of higher- and lower-achieving students in individual countries, sorted by the type of tracking system.
The data suggest that when students are tracked for some but not all of their courses, they compare their achievement to other, more advanced students, perhaps because they see these students more often. Students who are streamed within or between schools, in contrast, compare their abilities to their fellow stream-mates.
But why is their self-concept higher than that of higher-achieving students? This effect may be comparable to a more general phenomenon: people are poorer judges of their competence at tasks they perform poorly. If you're not very good, you're not good enough to realize what you lack.
The authors do not suggest that between-school streaming is the way to go (since it's associated with higher confidence). They note that the association is just the reverse of that seen in achievement: kids who are streamed between schools seem to take the biggest hit to achievement.
Brunello, G., & Checchi, D. (2007). Does school tracking affect equality of opportunity? New international evidence. Economic Policy, 22, 781–861.
Chmielewski, A. K., Dumont, H., & Trautwein, U. (2013). Tracking effects depend on tracking type: An international comparison of students' mathematics self-concept. American Educational Research Journal, 50, 925-957.
Maaz, K., Trautwein, U., Lüdtke, O., & Baumert, J. (2008). Educational transitions and differential learning environments: How explicit between-school tracking contributes to social inequality in educational outcomes. Child Development Perspectives, 2, 99–106.
The cover story of the latest New Republic wonders whether American educators have fallen in blind love with self-control. Author Elizabeth Weil thinks we have. Titled “American Schools Are Failing Nonconformist Kids: In Defense of the Wild Child,” the article suggests that educators harping on self-regulation are really trying to turn kids into submissive little robots. And they do so because little robots are easier to control in the classroom.
But lazy teachers are not the only cause. Education policy makers are also to blame, according to Weil. She writes that “valorizing self-regulation shifts the focus away from an impersonal, overtaxed, and underfunded school system and places the burden for overcoming those shortcomings on its students.”
And the consequence of educators’ selfishness? Weil tells stories that amount to Self-Regulation Gone Wild. A boy has trouble sitting cross-legged in class—the teacher opines he should be tested because something must be wrong with him. During story time the author’s daughter doesn’t like to sit still and to raise her hand when she wants to speak. The teacher suggests occupational therapy.
I can see why Weil and her husband were angry when their daughter’s teacher suggested occupational therapy simply because the child’s behavior was an inconvenience to him. But I don’t take that to mean that there is necessarily a widespread problem in the psyche of American teachers. I take that to mean that their daughter’s teacher was acting like a selfish bastard.
The problem with stories, of course, is that there are stories to support nearly anything. For every story a parent could tell about a teacher diagnosing typical behavior as a problem, a teacher could tell a story about a child who really could
do with some therapeutic help, and whose parents were oblivious to that fact.
What about evidence beyond stories?
Weil cites a study by Duncan et al. (2007) that analyzed six large data sets and found that social-emotional skills were poor predictors of later success.
She also points out that creativity among American school kids dropped between 1984 and 2008 (as measured by the Torrance Test of Creative Thinking) and she notes “Not coincidentally, that decrease happened as schools were becoming obsessed with self-regulation.”
There is a problem here. Weil uses different terms interchangeably: self-regulation, grit, social-emotional skills. They are not the same thing. Self-regulation (most simply put) is the ability to hold back an impulse when you think that the impulse will not serve other interests. (The marshmallow study would fit here.) Grit refers to dedication to a long-term goal, one that might take years to achieve, like winning a spelling bee or learning to play the piano proficiently. Hence, you can have lots of self-regulation but not be very gritty. Social-emotional skills might have self-regulation as a component, but the term refers to a broader complex of skills in interacting with others.
These are not niggling academic distinctions. Weil is right that some research indicates a link between social-emotional skills and desirable outcomes, and some doesn’t. But there is quite a lot of research showing associations between self-control and positive outcomes for kids, including academic outcomes, getting along with peers, parents, and teachers, and the avoidance of bad teen outcomes (early unwanted pregnancy, problems with drugs and alcohol, and so on). I reviewed those studies here. There is another literature showing associations of grit with positive outcomes (e.g., Duckworth et al., 2007).
Of course, those positive outcomes may carry a cost. We may be getting better test scores (and fewer drug and alcohol problems) but losing kids’ personalities. Weil calls on the reader’s schema of a “wild child,” that is, an irrepressible imp who may sometimes be exasperating, but whose very lack of self-regulation is the source of her creativity and personality.
But irrepressibility and exuberance are not perfectly inversely correlated with self-regulation. The purpose of self-regulation is not to lose your exuberance. It’s to recognize that sometimes it’s not in your own best interests to be exuberant. It’s adorable when your six-year-old is at a family picnic and impulsively practices her pas de chat because she cannot resist the Call of the Dance. It’s less adorable when it happens in class when everyone else is trying to listen to a story.
So there’s a case to be made that American society is going too far in emphasizing self-regulation. But the way to make it is not to suggest that the natural consequence of this emphasis is the crushing of children’s spirits, as though self-regulation were the same thing as no exuberance. The way to make the case is to show us that we’re overdoing self-regulation: that kids feel burdened, anxious, worried about their behavior.
Weil doesn’t have data that would bear on this point. I don’t either. But my perspective definitely differs from hers. When I visit classrooms or wander the aisles of Target, I do not feel that American kids are over-burdened by self-regulation.
As for the decline in creativity between 1984 and 2008 being linked to an increased focus on self-regulation…I have to disagree with Weil’s suggestion that it’s not a coincidence (setting aside the adequacy of the creativity measure). I think it might very well be a coincidence. Note that scores on the mathematics portion of the long-term NAEP increased during the same period. Why not suggest that kids’ improvement in a rigid, formulaic understanding of math inhibited their creativity?
Can we talk about important education issues without hyperbole?
Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92(6), 1087-1101.
Duncan, G. J., Dowsett, C. J., Claessens, A., Magnuson, K., Huston, A. C., Klebanov, P., ... & Japel, C. (2007). School readiness and later achievement. Developmental Psychology, 43(6), 1428-1446.
Part of the fun and ongoing fascination of science is "the effect that ought not to work, yet does." The impact of values affirmation on academic performance is such an effect.
Values affirmation "undoes" the effect of stereotype threat (also called identity threat). Stereotype threat occurs when a person is concerned about confirming a negative stereotype about his or her group. In other words, a boy is so consumed with thinking "Everyone expects me to do poorly on this test because I'm African-American" that his performance actually is compromised (see Walton & Spencer, 2009, for a review).

One way to combat stereotype threat is to give the student better resources to deal with the threat--make the student feel more confident, more able to control the things that matter in his or her life. That's where values affirmation comes in. In this procedure, students are provided a list of values (e.g., relationships with family members, being good at art) and are asked to pick the three that are most important to them and to write about why they are so important. In the control condition, students pick three values they imagine might be important to someone else. Randomized controlled trials show that this brief intervention boosts school grades (e.g., Cohen et al., 2006).

Why? One theory is that values affirmation gives students a greater sense of belonging, of being more connected to other people. (The importance of social connection is an emerging theme in other research areas. For example, you may have heard about the studies showing that people are less anxious when anticipating a painful electric shock if they are holding the hand of a friend or loved one.)

A new study (Shnabel et al., 2013) directly tested the idea that writing about social belonging might be a vital element in making values affirmation work. In Experiment 1 they tested 169 Black and 186 White seventh graders in a correlational study. The students did the values-affirmation writing exercise, as described above. The dependent measure was change in GPA (pre-intervention vs. post-intervention). The experimenters found that writing about social belonging in the writing assignment was associated with a greater increase in GPA for Black students (but not for White students, indicating that the effect is due to a reduction in stereotype threat). In Experiment 2, they used an experimental design, testing 62 male and 55 female college undergraduates on a standardized math test. Some were specifically told to write about social belonging and others were given standard affirmation writing instructions. Female students in the former group outscored those in the latter group. (And there was no effect for male students.)
The brevity of the intervention relative to the apparent duration of the effect still surprises me. But this new study gives some insight into why it works in the first place.

References:
Cohen, G. L., Garcia, J., Apfel, N., & Master, A. (2006). Reducing the racial achievement gap: A social-psychological intervention. Science, 313, 1307-1310.
Shnabel, N., Purdie-Vaughns, V., Cook, J. E., Garcia, J., & Cohen, G. L. (2013). Demystifying values-affirmation interventions: Writing about social belonging is a key to buffering against identity threat. Personality and Social Psychology Bulletin.
Walton, G. M., & Spencer, S. J. (2009). Latent ability: Grades and test
scores systematically underestimate the intellectual ability of negatively stereotyped students. Psychological Science, 20, 1132-1139.
A great deal has been written about the impact of retrieval practice on memory. That's because the effect is sizable, it has been replicated many times (Agarwal, Bain & Chamberlain, 2012) and it seems to lead not just to better memory but deeper
memory that supports transfer (e.g., McDaniel et al, 2013; Rohrer et al, 2010).
("Retrieval practice" is less catchy than the initial name--testing effect. It was renamed both to emphasize that it doesn't matter whether you try to remember for the sake of a test or some other reason and because "testing effect" led some observers to throw up their hands and say "do we really need more tests?")Now researchers (Szpunar, Khan, & Schacter, 2013) have reported testing as a potentially powerful ally in online learning. College students frequently report difficulty in maintaining attention during lectures, and that problem seems to be exacerbated when the lecture occurs on video.In this experiment subjects were asked to learn from a 21 minute video lecture on statistics. They were also told that the lecture would be divided in 4 parts, separated by a break. During the break they would perform math problems for a minute, and then would either do more math problems for two more minutes ("untested group"), they would be quizzed for two minutes on the material they had just learned ("tested group"), or they would review by seeing questions with the answers provided ("restudy group.")Subjects were told that whether or not they were quizzed would be randomly determined
for each segment; in fact, the same thing happened for an individual subject after each segment except
that each was tested after the fourth segment.So note that all subjects had reason to think that they might be tested at any time. There were a few interesting findings.
First, tested students took more notes than other students, and reported that their minds wandered less during the lecture.
The reduction in mind-wandering and/or increase in note-taking paid off--the tested subjects outperformed the restudy and the untested subjects when they were quizzed on the fourth, final segment.
The researchers added another clever measure. There was a final test on all the material, and they asked subjects how anxious they felt about it. Perhaps the frequent testing made learning rather nerve-wracking. In fact, the opposite result was observed: tested students were less anxious about the final test. (And they in fact performed better: tested = 90%, restudy = 76%, untested = 68%.)
We shouldn't get out in front of this result. This was just a 21-minute lecture, and it's possible that the benefit of testing to attention will wash out under conditions that more closely resemble an online course (i.e., longer lectures delivered a few times each week). Still, it's a promising start of an answer to a difficult problem.
Agarwal, P. K., Bain, P. M., & Chamberlain, R. W. (2012). The value of applied research: Retrieval practice improves classroom learning and recommendations from a teacher, a principal, and a scientist. Educational Psychology Review, 24, 437-448.
McDaniel, M. A., Thomas, R. C., Agarwal, P. K., McDermott, K. B., & Roediger, H. L. (2013). Quizzing in middle-school science: Successful transfer performance on classroom exams. Applied Cognitive Psychology. Published online Feb. 25
Rohrer, D., Taylor, K., & Sholar, B. (2010). Tests enhance the transfer of learning. Journal of Experimental Psychology. Learning, Memory, and Cognition, 36, 233-239.
Szpunar, K. K., Khan, N., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences. Published online April 1, 2013. doi:10.1073/pnas.122176411
What people learn (or don't) from games is such a vibrant research area that we can expect fairly frequent literature reviews. It's been about a year since the last one, so I guess we're due. The last time I blogged on this topic, Cedar Riener remarked that it's sort of silly to frame the question as "does gaming work?" It depends on the game. The category is so broad it can include a huge variety of experiences for students.
If there were NO games from which kids seemed to learn anything, you probably still ought not to conclude "kids can't learn from games." To do so would be to conclude that the distributions of learning for all possible games and all possible teaching look something like this.
But this pattern of data seems highly unlikely. It seems much more probable that the distributions overlap more, and that whether kids learn more from gaming or traditional teaching is a function of the qualities of each.
So what's the point of a meta-analysis that poses the question "do kids learn more from gaming or traditional teaching?"
I think of these reviews not as letting us know whether kids can learn from games, but as an overview of where we are--just how effective are the serious games offered to students?
The latest meta-analysis (Wouters et al., 2013) includes data from 56 studies and examined learning outcomes (77 effect sizes), retention (17 effect sizes), and motivation (31 effect sizes).

The headline result featured in the abstract is "games work!" Games are reported to be superior to conventional instruction in terms of learning (d = 0.29) and retention (d = 0.36), but, somewhat surprisingly, not motivation.
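Those d values are Cohen's d, the standardized mean difference between two groups. As a rough illustration (with invented scores, not data from the meta-analysis), here is how d is computed:

```python
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups,
    using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances (n - 1 in the denominator)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical test scores: a gaming group vs. a conventional-instruction group
gaming = [72, 75, 80, 68, 77, 74]
conventional = [70, 71, 76, 65, 73, 69]
d = cohens_d(gaming, conventional)
```

A d of 0.29 means the average student in the gaming condition scored about three-tenths of a standard deviation above the average student in the comparison condition, which is a modest but real difference.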
The authors examined a large set of moderator variables and this is where things get interesting. Here are a few of these findings:
- Students learn more when playing games in groups than playing alone.
- Peer-reviewed studies showed larger effects than others. (This analysis is meant to address the bias not to publish null results. . . but the interpretation in this case was clouded by small N's.)
- Age of student had no impact.
But two of the most interesting moderators significantly modify the big conclusions.
First, gaming showed no advantage over conventional instruction when the experiment used random assignment. When non-random assignment was used, gaming showed a robust advantage. So it's possible (or even likely) that games in these studies were more effective only when they interacted with some factor in the gamer that is self-selected (or selected by the experimenter or teacher). And we don't know yet what that factor is.
Second, the researchers say that gaming showed an advantage over "conventional instruction," but follow-up analyses show that gaming showed no advantage over what they called "passive instruction"--that is, teacher talk or reading a textbook. All of the advantage accrued when games were compared to "active instruction," described as "methods that explicitly prompt learners to learning activities (e.g., exercises, hypertext training)." So gaming (in this data set) is not really better than conventional instruction; it's better than one type of instruction (which in the US is probably the less often encountered type).
So yeah, I think the question in this review is ill-posed. What we really want to know is how to structure better games. That requires much more fine-grained experiments on the gaming experience, not blunt variables. This will be painstaking work.
Still, you've got to start somewhere, and this article offers a useful snapshot of where we are.

EDIT 5:00 a.m. EST, 2/11/13: In the original post I failed to make explicit another important conclusion--there may be caveats on when and how the games examined are superior to conventional instruction, but they were almost never worse. This is not an unreasonable bar, and as a group the games tested pass it.
Wouters, P., van Nimwegen, C., van Oostendorp, H., & van der Spek, E. G. (2013). A meta-analysis of the cognitive and motivational effects of serious games. Journal of Educational Psychology. Advance online publication. doi:10.1037/a0031311
Michael Gove, Secretary of Education in Great Britain, certainly has a flair for oratory. In his most recent speech, he accused his political opponents of favoring "Downton Abbey-style" education (meaning one that perpetuates class differences), he evoked a 13-year-old servant girl reading Keats, and he cited as inspirations the late British reality TV star Jade Goody (best known for being ignorant) and the Marxist writer and political theorist Antonio Gramsci.

Predictably, press coverage in Britain has focused on these details. (So, of course, have the tweets.) The Financial Times and the Telegraph pointed to Gove's political challenge to Labour. The Guardian led with the Goody and Gramsci angle. But these points of color distract from the real aim. The fulcrum of the speech is the argument that a knowledge-based curriculum is essential to bring greater educational opportunity to disadvantaged children. (The BBC got half the story right.)

The logic is simple: 1) Knowledge is crucial to support cognitive processes (e.g., Carnine & Carnine, 2004; Hasselbring, 1988; Willingham, 2006). 2) Children who grow up in disadvantaged circumstances have fewer opportunities to learn important background knowledge at home (Walker et al., 1994), and they come to school with less knowledge, which has an impact on their ability to learn new information at school (Grissmer et al., 2010) and likely leads to a negative feedback cycle whereby they fall farther and farther behind (Stanovich, 1986).

Gove is right. And he's right to argue for a knowledge-based curriculum.
The curriculum is the factor most likely to ameliorate achievement gaps between advantaged and disadvantaged students, because a good fraction of that difference is fueled by differences in cultural capital in the home--differences that schools must try to make up. (Indeed, a knowledge-based curriculum is a critical component of KIPP and other "no excuses" schools in the US.)
I'm not writing to defend all education policies undertaken by the current British government--I'm not knowledgeable enough about those policies to defend or attack them. But I find the response from Stephen Twigg (Labour's shadow education secretary) disquieting, because he seems to have missed Gove's point: "Instead of lecturing others, he should listen to business leaders, entrepreneurs, headteachers and parents who think his plans are backward looking and narrow. We need to get young people ready for a challenging and competitive world of work, not just dwell on the past." (As quoted in the Financial Times.)

It's easy to scoff at a knowledge-based curriculum as backward-looking. Memorization of math facts when we have calculators? Knowledge in the age of Google?
But if you mistake advocacy for a knowledge-based curriculum as wistful nostalgia for a better time, or as "old-fashioned," you just don't get it. Surprising though it may seem, you can't just Google everything. You actually need to have knowledge in your head to think well. So a knowledge-based curriculum is the best way to get young people "ready for the world of work."
Mr. Gove is rare, if not unique, among high-level education policy makers in understanding the scientific point he made in yesterday's speech. You may agree or disagree with the policies Mr. Gove sees as the logical consequence of that scientific point, but education policies that clearly contradict it are unlikely to help close the achievement gap between wealthy and poor.

References
Carnine, L., & Carnine, D. (2004). The interaction of reading skills and science content knowledge when teaching struggling secondary students. Reading & Writing Quarterly.
Grissmer, D., Grimm, K. J., Aiyer, S. M., Murrah, W. M., & Steele, J. S. (2010). Fine motor skills and early comprehension of the world: Two new school readiness indicators. Developmental Psychology.
Hasselbring, T. S. (1988). Developing math automaticity in learning handicapped children: The role of computerized drill and practice. Focus on Exceptional Children.
Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly.
Walker, D., Greenwood, C., Hart, B., & Carta, J. (1994). Prediction of school outcomes based on early language production and socioeconomic factors. Child Development.
Willingham, D. T. (2006). How knowledge helps. American Educator.
Something happens to the "inner clocks" of teens. They don't go to sleep until later in the evening but still must wake up for school. Hence, many are sleep-deprived.
These common observations are borne out in research, as I summarize in an article on sleep and cognition in the latest American Educator.
What are the cognitive consequences of sleep deprivation?
It seems to affect executive function tasks such as working memory. In addition, it has an impact on new learning: sleep is important for a process called consolidation, whereby newly formed memories are made more stable. Sleep deprivation compromises consolidation of new learning (though, surprisingly, that effect seems to be smaller or absent in young children).
Parents and teachers consistently report that the mood of sleep-deprived students is affected: they are more irritable, hyperactive, or inattentive. Although this sounds like ADHD, lab studies show little impact of sleep deprivation on formal measures of attention. This may be because students are able, for brief periods, to rally resources and perform well on a lab test. They may be less able to sustain attention for long periods at home or at school, and may be less motivated to do so in any event.
Perhaps most convincingly, the few studies that have examined academic performance based on school start times show better grades associated with later school start times. (You might think that if kids know they can sleep later, they might just stay up later. They do, a bit, but they still get more sleep overall.)
Although these effects are reasonably well established, the cognitive costs of sleep deprivation are less widespread and statistically smaller than I would have guessed. That may be because they are difficult to test experimentally. You have two choices, both with drawbacks:
1) you can do correlational studies that ask students how much they sleep each night (or better, get them to wear devices that provide a more objective measure of sleep) and then look for associations between sleep and cognitive measures or school outcomes. But this has the usual problem that one cannot draw causal conclusions from correlational data.
2) you can do a proper experiment by having students sleep less than they usually would, and see if their cognitive performance goes down as a consequence. But it's unethical to deprive students of significant amounts of sleep (and what parent would allow their child to take part in such a study?). And anyway, a night or two of severe sleep deprivation is not really what we think is going on here--we think it's months or years of milder deprivation.
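To make the first, correlational option concrete, here is a minimal sketch of the kind of association such a study computes. The numbers are invented for illustration; a real study would use many more students and objective sleep measures:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: average nightly sleep (hours) and GPA for eight students
sleep_hours = [6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 7.0]
gpa = [2.8, 3.0, 3.1, 3.3, 3.5, 3.4, 3.7, 3.2]
r = pearson_r(sleep_hours, gpa)
```

Even a strong positive r here would only show that sleep and grades go together; it could not tell us whether more sleep causes better grades, which is exactly the limitation noted above.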
So even though scientific studies may not indicate that sleep deprivation is a huge problem, I'm concerned that the data might be underestimating the effect. To allay that concern, can anything be done to get teens to sleep more?
Believe it or not, telling teens "go to sleep" might help. Students with parent-set bedtimes do get more sleep on school nights than students without them. (They get the same amount of sleep on weekends, which somewhat addresses the concern that kids with this sort of parent differ in many ways from kids who don't.)
Another strategy is to maximize the "sleepy cues" near bedtime. The internal clock of teens is not just set for a later bedtime; it also provides weaker internal cues that they ought to be sleepy. Thus, teens are arguably more reliant on external cues that it's bedtime. So the student who is gaming at midnight and tells you "I'm playing games because I'm not sleepy" could be mistaken. It could be that he's not sleepy because he's playing games. Good cues would be a bedtime ritual that doesn't include action video games or movies in the few hours before bed, and that ends in a dark, quiet room at the same time each night.
So yes, this seems to be a case where good ol' common sense jibes with data. The best strategy we know of for better sleep is consistency.

References: All the studies alluded to (and more) appear in the article.
The British Columbia education system would seem to be doing an excellent job. Although very recent data are not available, performance by BC 15-year-olds on the 2006 PISA showed them lagging just one country in science (Finland), two countries in reading (Finland and Korea), and five in math (Taipei, Finland, Hong Kong, Korea, and fellow Canadian province Quebec). Meanwhile, in 2007, no one scored better than BC fourth graders on the PIRLS reading assessment. (Eight countries or provinces scored about the same; 36 scored lower. Test data summarized here.)

Despite this record of success, BC is not satisfied, and is gearing up to change the curriculum. There's one sense in which this plan is clearly needed: there are too many objectives. The document describing learning objectives for the fourth grade runs 21 pages, and includes scores of items. No one can cover all that in a year, so the document ought to be tightened.

Another stated objective in the document describing the proposed change is to offer teachers more flexibility so that they can better tune education to individual students. Whether that's a good idea is, in my view, a judgment call. The BC Ministry of Education contends that the current curriculum is too prescriptive. It may be, but it's being taught (and learned) at very high levels of proficiency, at least as measured by international comparison tests that most observers think are pretty reasonable. Change the curriculum, and that level of performance will likely drop. But other benefits may accrue, such as better performance, by students with strong interest in them, in academic areas the tests don't measure, and greater student satisfaction.

My real concern is that the plan doesn't make very clear what the expected benefit is, nor how we'll know it when we see it. At least in the overview document, the benefit is described as "increased opportunities to gain the essential learning and life skills necessary to live and work successfully in a complex, interconnected, and rapidly changing world. Students will focus on acquiring skills to help them use knowledge critically and creatively, to solve problems ethically and collaboratively, and to make the decisions necessary to succeed in our increasingly globalized world."

Oddly enough, I thought that excellent preparation in reading, math, and science was just the ticket to help you use knowledge critically and creatively. And then I saw this statement: "In today's technology-enabled world, students have virtually instant access to a limitless amount of information. The greater value of education for every student is not in learning the information but in learning the skills they need to successfully find, consume, think about and apply it in their lives."
A lot of data from the last couple of decades shows a strong association between executive functions (the ability to inhibit impulses, to direct attention, and to use working memory) and positive outcomes in school and out of school (see review here
). Kids with stronger executive functions get better grades, are more likely to thrive in their careers, are less likely to get in trouble with the law, and so forth. Although the relationship is correlational and not known to be causal, researchers have understandably wanted to know whether there is a way to boost executive function in kids.

Tools of the Mind (Bodrova & Leong, 2007) looked promising.
It's a full preschool curriculum consisting of some 60 activities, inspired by the work of psychologist Lev Vygotsky. Many of the activities call for the exercise of executive functions through play. For example, when engaged in dramatic pretend play, children must use working memory to keep in mind the roles of other characters and suppress impulses in order to maintain their own character identity. (See Diamond & Lee, 2011, for thoughts on how and why such activities might help students.)

A few studies of relatively modest scale (but not trivial: 100-200 kids) indicated that Tools of the Mind has the intended effect (Barnett et al., 2008; Diamond et al., 2007). But now some much larger-scale followup studies (800-2,000 kids) have yielded discouraging results.

These studies were reported at a symposium this spring at a meeting of the Society for Research on Educational Effectiveness. (You can download a pdf summary here.) Sarah Sparks covered this story for Ed Week when it happened in March, but it otherwise seemed to attract little notice. Researchers at the symposium reported the results of three studies. Tools of the Mind
did not have an impact in any of the three.

What should we make of these discouraging results? It's too early to conclude that Tools of the Mind simply doesn't work as intended. It could be that there are as-yet unidentified differences among kids such that it's effective for some but not others. It may also be that the curriculum is more difficult to implement correctly than would first appear to be the case. Perhaps the teachers in the initial studies had more thorough training. Whatever the explanation, the results are not cheering. It looked like we might have been on to a big-impact intervention that everyone could get behind.
Now we are left with the dispiriting conclusion "More study is needed."
Barnett, W., Jung, K., Yarosz, D., Thomas, J., Hornbeck, A., Stechuk, R., & Burns, S. (2008). Educational effects of the Tools of the Mind curriculum: A randomized trial. Early Childhood Research Quarterly, 23, 299-313.

Bodrova, E., & Leong, D. (2007). Tools of the Mind: The Vygotskian approach to early childhood education (2nd ed.). New York: Merrill.

Diamond, A., & Lee, K. (2011). Interventions shown to aid executive function development in children 4-12 years old. Science, 333, 959-964.
Diamond, A., Barnett, W. S., Thomas, J., & Munro, S. (2007). Preschool program improves cognitive control. Science, 318