My Facebook feed today has lots of links to this article
. The upshot: a new Pew study showing that Americans think that US 15-year-olds rank "near the bottom" on international science tests, whereas the truth is that they "rank in the middle among developed countries." I guess "the middle" covers a lot of terrain, but the way I look at the data, this assertion doesn't hold.
The international comparison in question is the 2009 PISA. Here are the rankings. (Click for larger image)
Most everyone would agree that it's not appropriate to compare scores of US kids to those of poorer countries with little infrastructure and funding to support education.
That's why the article specifies the ranking of the US among "developed countries." By the author's reckoning, kids from 12 developed countries scored better, and kids from 9 developed countries scored worse. That would put US kids at the 41st percentile. But the US is ranked 30th on the list, and just eyeballing it, it's hard to see how 17 of the countries scoring better could be considered "not developed." One measure of "developed" status would be the International Monetary Fund's definition of "advanced economies," which includes: Australia, Austria, Belgium, Canada, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hong Kong, Iceland, Ireland, Israel, Italy, Japan, Luxembourg, Malta, Netherlands, New Zealand, Norway, Portugal, San Marino, Singapore, Slovakia, Slovenia, South Korea, Spain, Sweden, Switzerland, Taiwan, United Kingdom, United States. (Click for larger image.)
By this definition of "advanced," US kids are 23rd out of 32 countries, or the 28th percentile.
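Just to show the arithmetic behind these percentile figures, here's a minimal Python sketch (the function name is my own, for illustration): the percentile is simply the share of countries scoring below a given rank.

```python
def percentile_from_rank(rank, n_countries):
    """Percentile = share of countries scoring below the given rank."""
    n_below = n_countries - rank
    return 100 * n_below / n_countries

# The article's reckoning: 12 developed countries above the US and 9 below,
# so the US is 13th of 22 -> 41st percentile.
print(round(percentile_from_rank(13, 22)))  # 41

# The IMF "advanced economies" list: US 23rd of 32 -> 28th percentile.
print(round(percentile_from_rank(23, 32)))  # 28
```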
It's true that "near the bottom" is too grim an assessment. But I can't see a way to put the 2009 PISA data together such that American kids are scoring about average.
A new bill
just passed the Education committee in the Oklahoma House of representatives, as reported in the Oklahoman
. Titled "The Scientific Education and Academic Freedom Act," the bill purports to protect the rights of students, teachers, and administrators to fully explore scientific controversies.
The bill supposes that some people currently feel inhibited in their pursuit of truth regarding "biological evolution, the chemical origins of life, global warming, and human cloning," and so the bill forbids school administrators and boards of education from disallowing such "exploration." According to opinion pieces in the Daily Beast
, The Week
, and Mother Jones
, the bill is a fairly transparent attempt to allow intelligent design into science classrooms, one that is being pursued in other states as well.

Yeah, that's what it sounds like to me too. But even if we take the purported motive of the bill at face value, it's still a terrible idea.

Why shouldn't science teachers "teach the controversy"? Isn't it the job of teachers to sharpen students' critical thinking skills? Isn't it part of the scientific method to evaluate evidence? If evolution proponents are so sure their theory is right, why are they afraid of students scrutinizing the ideas?

Imagine this logic applied in other subjects. Why shouldn't students study and evaluate the version of US history offered by white supremacists? Rather than just reading Shakespeare and assuming he's a great playwright, why not ask students to read Shakespeare and the screenplay to Battlefield Earth, and let students decide? And hey, why is such deference offered to Euclid? My uncle Leon has an alternative version of plane geometry, and it shows Euclid was all wrong. I think that theory deserves a hearing.

You get the point. Not every theory merits the limited time students have in school. There is a minimum bar of quality that has to be met in order to compete. I'm not allowed to show up at the Olympics, hoping to jump in the pool and swim the 100 m butterfly against Michael Phelps. Indeed, the very inclusion of a theory in a school discussion signals to students that it must have some validity--why else would the teacher discuss it?

The obvious retort from supporters of the bill is that intelligent design is actually a good theory, much better than the comparisons I've drawn. That belief may be sincere, but it's due, I think, to a lack of understanding of scientific theory. So here are a few of the important features of how scientists think about theories, and how they bear on this debate.

1) It's not telling that legitimate scientists point out unanswered questions, problems, or lacunae in the theory of evolution. Every theory, even the best theories, has problems. People who make this point may be thinking about the status of scientific laws as scientists did until the early part of the 20th century--as immutable laws. Scientists today think of all theories as provisional, and open to emendation and improvement.

2) A vital aspect of a good scientific theory is that it be open to falsification. It's not obvious what sort of data would falsify intelligent design theories, especially young-earth theories, which make predictions that are already disconfirmed by geology, astrophysics, etc., and yet are maintained by their adherents. Evolution, in contrast, has survived tests and challenges for 100 years--indeed, the theory has changed and improved in response to those challenges.

3) In the case of old-earth intelligent design theories, the focus is much more on the putative beginnings of the universe or of life on Earth, and these don't have the feel of a scientific theory at all. They seem much more like philosophical queries because they focus on large-scale questions and how those questions ought to be formulated--they never get to detailed questions that might be answerable by experiment, the meat-and-potatoes of science.

4) Good scientific theories are not static. They not only change in the face of new evidence, they continue to spawn new and interesting hypotheses. Evolution has been remarkably successful on this score for over 100 years. Intelligent design has been static and unfruitful.

These are some of the reasons that scientists think that intelligent design does not qualify as a good scientific theory, and therefore does not merit close attention in K-12 science classes, any more than my uncle's theory of geometry does. If you're going to write bills about what happens in science class, it's useful to know a little science.

EDIT: 2/22/13 1:20 p.m. EST: typos
There is a lot of talk these days about STEM--science, technology, engineering, and math--and the teachers of STEM subjects. It would seem self-evident that these teachers, given their skill set, would be in demand in business and industry, and thus would be harder to keep in the classroom.

A new study
(Ingersoll & May, 2012
) offers some surprising data on this issue.
Using the national Schools and Staffing Survey and the Teacher Follow-Up Survey, they found that science and math teachers have NOT left the field at rates higher than those of other teachers. In this data set (1988-2005), math and science teachers left teaching at about the same rate as teachers in other subjects: about 6% each year.
Furthermore, when these teachers do leave a school, they are no more likely to take a non-education job than other teachers: about 8% of "leavers" took another job outside of education. Much more common reasons to leave the classroom were retirement (about 15%) or an education job other than teaching (about 17%).
The authors argue that teacher turnover, not teachers leaving the field, is the engine behind staffing problems for math and science classes.
So what prompts teacher turnover?
The authors argue that on this dimension math and science teachers differ. Both groups are, unsurprisingly, motivated by better working conditions and higher salaries, but the former matters more to math teachers, and science teachers care more about the latter.
But in both cases, the result is that math and science teachers tend to leave schools with large percentages of low-income kids in order to move to schools with wealthier kids.
Ingersoll, R. M., & May, H. (2012). The magnitude, destinations, and determinants of mathematics and science teacher turnover. Educational Evaluation and Policy Analysis, 34, 435-464.
In an op-ed piece
in August 19th's New York Times, Bronwen Hruska tells of her experiences with her son, Will, between the 3rd and 5th grade. Will was misdiagnosed with ADHD.
Hruska and her husband were initially approached by Will's teacher, who thought his behavior indicated ADHD. Though they were doubtful, they took him to a psychiatrist who said that Will did indeed have ADHD and prescribed stimulant medication. Will took the medication for two years but stopped when he concluded that Adderall is dangerous. Now that Will is a happy high school sophomore, there is not much reason to think that the medication was ever necessary.
How did this happen?
The title of the piece--"Raising the Ritalin Generation"--provides a clue to the author's conclusion. Hruska suggests that our society is sick. Teachers are too quick to suggest medication for kids. Schools "want no part" of average kids; they expect kids to be exceptional, extraordinary. And we, as a society, are teaching kids that average is not good enough, and that if you're only average you should take a pill.
But there's an important piece missing from this picture--parents.
From what's written, it sure does sound like Will was misdiagnosed. But I can't help but wonder why his parents didn't know it at the time.
ADHD diagnosis requires that symptoms be present in at least two settings. So it's not enough that Will shows troubling symptoms in school: he would also need to show them at home, in social settings, or in some other context for him to be diagnosed. There's no indication of a problem outside of school.
It's also notable that the mere presence of symptoms is not enough: the symptoms must be clinically significant; in other words, they must obstruct the child's ability to function well in that setting. Yet Hruska maintains that Will seems like a typical kid to her.
This is where Hruska loses me. Why would she accept the diagnosis if symptoms were observed in just one context, and if she believed there was limited evidence that the symptoms were clinically significant in that context? Why wouldn't she challenge the physician who diagnosed him?
I'm led to wonder if she knew the diagnostic criteria. They aren't hard to find. Google "adhd diagnosis." The first link
is the CDC site that offers a reader-friendly version of the DSM-IV criteria.
Are our kids pill-happy? Are we raising a Ritalin generation? If so, the solution is not to lay all of the blame on schools and society or even on physicians who make mistakes, and to portray parents as powerless victims. The solution is for parents to make better use of the wealth of scientific information available to us, and to ask questions when a doctor or other authority makes claims that fly in the face of our experience.
Making a change to education that seems like a clear improvement is never easy. Or almost never.
Judith Harackiewicz and her colleagues have recently reported an intervention that is inexpensive, simple, and leads high school students to take more STEM courses.
The intervention had three parts, administered over 15 months when students were in the 10th and 11th grades. In October of 10th grade, researchers mailed a brochure to each household titled Making Connections: Helping Your Teen Find Value in School. It described the connections between math, science, and daily life, and included ideas about how to discuss this topic with students.
In January of 11th grade a second brochure was sent. It covered similar ideas, but with different examples. Parents also received a letter that included the address of a password-protected website devised by researchers, which provided more information about STEM and daily life, as well as STEM careers.
In Spring of 11th grade, parents were asked to complete an online questionnaire about the website.
There were a total of 188 students in the study: half received this intervention, and the control group did not.
Students in the intervention group took more STEM courses during their last two years of high school (8.31 semesters) than control students (7.50 semesters).
This difference turned out to be entirely due to differences in elective, advanced courses, as shown in the figure below.
An important caveat about this study: all of the subjects are participating in the Wisconsin Study of Families and Work. That study began in 1990, when the participating women were in their fifth month of pregnancy.
The first brochure that researchers sent to subjects included a letter thanking them for their ongoing participation in the longer study. Hence, subjects could reasonably conclude that the present study was part of the longer study.
That's worth bearing in mind because ordinary parents might not be so ready to read brochures mailed to them by strangers, nor to visit suggested websites.
But that's not a fatal flaw of the research. It just means that we can't necessarily count on random parents reading the materials with the same care.
To me, the effect is still remarkable. To put it in perspective, researchers also measured the effect of parental education on taking STEM courses. As many other researchers have found, the kids of better-educated parents took more STEM courses. But the effect of the intervention was nearly as large as the effect of parental education!
Clearly, further work is necessary but this is an awfully promising start.
Harackiewicz, J. M., Rozek, C. S., Hulleman, C. S., & Hyde, J. S. (in press). Helping parents to motivate adolescents in mathematics and science: An experimental test of a utility-value intervention. Psychological Science.
Steven Levitt, of Freakonomics fame, has unwittingly provided an example of how science applied to education can go wrong.

On his blog, Levitt cites a study
he and three colleagues published (as an NBER working paper
). The researchers rewarded kids for trying hard on an exam. As Levitt notes, the goal of previous research has been to get kids to learn more. That wasn't the goal here. It was simply to get kids to try harder on the exam itself, to really show everything that they knew.

Among the findings: (1) it worked. Offering kids a payoff for good performance prompted better test scores; (2) it was more effective if, instead of offering a payoff for good performance, researchers gave them the payoff straight away and threatened to take it away if the student didn't get a good score (an instance of a well-known and robust effect called loss aversion); (3) children prefer different rewards at different ages. As Levitt puts it, "With young kids, it is a lot cheaper to bribe them with trinkets like trophies and whoopee cushions, but cash is the only thing that works for the older students."

There are a lot of issues one could take up here, but I want to focus on Levitt's surprise that people don't like this plan. He writes, "It is remarkable how offended people get when you pay students for doing well – so many negative emails and comments."

Levitt's surprise gets at a central issue in the application of science to education. Scientists are in the business of describing (and thereby enabling predictions of) the Natural world. One such set of phenomena concerns when students put forth effort and when they don't. Education is not a scientific enterprise. The purpose is not to describe
the world, but to change it, to make it more similar to some ideal that we envision. (I wrote about this distinction at some length in my new book. I also discussed it in this brief video.)

Thus science is ideally value-neutral. Yes, scientists seldom live up to that ideal; they have a point of view that shapes how they interpret data, generate theories, etc., but neutrality is an agreed-upon goal, and lack of neutrality is a valid criticism of how someone does science. Education, in contrast, must entail values, because it entails selecting goals. We want to change the world--we want kids to learn things--facts, skills, values. Well, which ones? There's no better or worse answer to this question from a scientific point of view.

A scientist may know something useful to educators and policymakers once the educational goal is defined; i.e., the scientist offers information about the Natural world that can make it easier to move towards the stated goal. (For example, if the goal is that kids be able to count to 100 and to understand numbers by the end of preschool, the scientist may offer insights into how children come to understand cardinality.) What scientists cannot do is use science to evaluate the wisdom of stated goals.

And now we come to people's hostility to Levitt's idea of rewards
for academic work.
I'm guessing most people don't like the idea of rewards for the same reason I don't. I want my kids to see learning as a process that brings its own reward. I want my kids to see
effort as a reflection of their character, to believe that they should give their all to any task that is their responsibility, even if the task doesn't interest them.

There is, of course, a large, well-known research literature on the effect of extrinsic rewards on motivation. Readers of this blog are probably already familiar with it--if so, skip the next paragraph.

The problem is one of attribution. When we observe other people act, we speculate on their motives. If I see two people gardening--one paid and the other unpaid--I'm likely to assume that one gardens because he's paid and the other because he enjoys gardening. It turns out that we make these attributions about our own behavior as well. If my child tries her hardest on a test, she's likely to think "I'm the kind of kid who always does her best, even on tasks she doesn't care for." If you pay her for her performance, she'll think "I'm the kind of kid who tries hard when she's paid." This research began in the 1970s and has held up very well. Kids work harder for rewards. . . until the rewards stop. Then they engage in the task even less than they did before the rewards started. I summarized some of this work here.
In the technical paper, Levitt cites some of the reviews of this research but downplays the threat, pointing out that when motivation is low to start with, there's not much danger of rewards lowering it further. That's true, and I've made a similar argument: cash rewards might be used as a last-ditch effort for a child who has largely given up on school. But that would dictate using rewards only with kids who were not motivated to start, not in a blanket fashion as was done in Levitt's study. And I can't see concluding that elementary school kids were so unmotivated that they were otherwise impossible to reach.

In addressing the threat to student motivation with research, Levitt is approaching the issue in the right way (even if I think he's incorrect in how he does so). But on the blog (in contrast to the technical paper), Levitt addresses the threat in the wrong way. He skips the scientific argument and simply belittles the idea that parents might object to someone paying their child for academic work. He writes:

"Perhaps the critics are right and the reason I’m so messed up is that my parents paid me $25 for every A that I got in junior high and high school. One thing is certain: since my only sources of income were those grade-related bribes and the money I could win off my friends playing poker, I tried a lot harder in high school than I would have without the cash incentives. Many middle-class families pay kids for grades, so why is it so controversial for other people to pay them?"

I think Levitt is getting "so many negative emails and comments" because he's got scientific data to serve one type of goal (get kids to try hard on exams), the application of which conflicts with another goal (encourage kids to see academic work as its own reward). So he scoffs at the latter.

I see this blog entry as an object lesson for scientists. We offer something valuable--information about the Natural world--but we hold no special status in deciding what to do with that information (i.e., setting goals). In my opinion, Levitt's blog entry shows he has a tin ear for the possibility that others do not share his goals for education. If scientists are oblivious to or dismissive of those goals, they can expect not just angry emails; they can expect to be ignored.
There is a great deal of attention paid to, and controversy about, the promise of training working memory to improve academic skills, a topic I wrote about here
. But working memory is not the only cognitive process that might be a candidate for training. Spatial skills
are a good predictor of success in science, mathematics, and engineering. Now, on the basis of a new meta-analysis (Uttal, Meadow, Tipton, Hand, Alden, Warren, & Newcombe, in press), researchers claim that spatial skills are eminently trainable. In fact, they claim a quite respectable average effect size of 0.47 (Hedges' g) after training (that's across 217 studies).
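For readers unfamiliar with Hedges' g: it's a standardized mean difference, like Cohen's d, multiplied by a small-sample correction factor. A minimal Python sketch, with hypothetical group statistics (the numbers below are mine for illustration, not data from the meta-analysis):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g: standardized mean difference with small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp               # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # bias-correction factor
    return d * j

# Hypothetical trained vs. control group on a spatial test:
# a 7-point gain against an SD of 15 gives g of about 0.46,
# roughly the size of the meta-analytic average reported above.
g = hedges_g(m1=105, s1=15, n1=30, m2=98, s2=15, n2=30)
```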
Training tasks across these many studies included things like visualizing 2D and 3D objects in a CAD program, acrobatic sports training, and learning to use a laparoscope (an angled device used by surgeons). Outcome measures were equally varied, and included standard psychometric measures (like a paper-folding test
), tests that demanded imagining oneself in a landscape, and tests that required mentally rotating objects.
Even more impressive:
1) researchers found robust transfer to new tasks
2) researchers found little, if any, effect of delay between training and test--the skills don't seem to fade with time, at least for several weeks. (Only four studies included delays of greater than one month.)
This is a long, complex analysis and I won't try to do it justice in a brief blog post. But the marquee finding is big news. What we'd love to see is an intervention that is relatively brief, not terribly difficult to implement, reliably leads to improvement, and transfers to new academic tasks.
That's a tall order, but spatial skills may fill all the requirements.
The figure below (from the paper) is a conjecture: if spatial training were widely implemented, and if once scaled up we got the average improvement seen in these studies, how many more people could be trained as engineers?
The paper is not publicly available, but there is a nice summary here
from the collaborative laboratory responsible for the work. I also recommend this excellent article from American Educator
on the relationship of spatial thinking to math and science, with suggestions for parents and teachers.
Uttal, D. H., Meadow, N. G., Tipton, E., Hand, L. L., Alden, A. R., Warren, C., & Newcombe, N. S. (2012, June 4). The malleability of spatial skills: A meta-analysis of training studies. Psychological Bulletin. Advance online publication. doi: 10.1037/a0028446

Newcombe, N. S. (2010). Picture this: Increasing math and science learning by improving spatial thinking. American Educator, Summer.
I made another of my garage-band quality videos, this one on the relationship of science and education, titled "Is Education an Art or a Science?"
I want to highlight two incredibly valuable papers, although they are increasingly dated. One paper reports on an enormous project in which observers went into a large sample of US first grade classrooms (827 of them in 295 districts) and simply recorded what was happening.
The other paper reported on a comparable project for third grade classrooms (780 classrooms in 250 districts). Both papers are a treasure trove of information, but I want to highlight one striking datum: the percentage of time spent on science.
In first grade classrooms it was 4%. In third grade classrooms it was 5%.

There are a few oddities that might make you wonder about these figures. In the 1st grade paper, the observations typically took place in the morning, so perhaps teachers tend to focus on ELA in the morning and save science for the afternoon. But the third grade project sampled throughout the day. And although there's always some chance that there's something odd about the method, the estimates accord with estimates using other measures, such as teachers' estimates. (See data from an NSF project here.)

And before you blame NCLB for crowding science out of the classroom, note that the data for these studies were collected before NCLB (1st grade, mostly '97-'98; 3rd grade, mostly '00-'01). I don't think there's much reason to suspect that the time spent on science instruction has increased, and smaller scale studies indicate it hasn't.

The fact that so little time is spent on science is, to me, shocking. It's even more surprising when paired with the observation that US kids fare pretty well in international comparisons of science achievement. In 2003, when more or less the same cohort of kids took the TIMSS, US kids ranked 6th in science. (They ranked 5th in 2008.)

How are US kids doing fairly well in science in the absence of science instruction? Possibly US schools are terribly efficient in science instruction and get a lot done in minimum time. Possibly other countries are doing even less. Possibly US culture offers good support for informal opportunities to learn science. It remains a puzzle.
There is a lot of talk about STEM instruction these days. In most districts, science doesn't get serious until middle school. US schools could be doing a whole lot more with more time devoted to science instruction.
I'll have more to say about time in elementary classrooms next week.

NICHD Early Child Care Research Network (2002). The relation of global first-grade classroom environment to structural classroom features and teacher and student behaviors. The Elementary School Journal, 102.

NICHD Early Child Care Research Network (2005). A day in third grade: A large-scale study of classroom quality and teacher and student behavior. The Elementary School Journal, 105.