<![CDATA[Daniel Willingham: Science and Education Blog]]>Tue, 15 Dec 2015 07:40:12 -0800<![CDATA[New report on self-regulation and social competence]]>Tue, 08 Dec 2015 16:50:21 GMThttp://www.danielwillingham.com/daniel-willingham-science-and-education-blog/new-report-on-self-regulation-and-social-competenceI (like everyone else) am always eager for documents that clearly summarize a large, complex literature. One such literature of urgent interest is the role of self-regulation in academic success. A new working paper from Transforming Education (full disclosure: I’m on their advisory board) does a great job of highlighting the important findings regarding non-cognitive skills, a not-very-precise term originating in economics that refers mostly to self-control and social competence.

The report is targeted at policymakers, but should be of interest to teachers and administrators as well.

The paper is organized around nine “headlines”: conclusions that the authors suggest are justified by the research literature. These headlines concern the relationship of non-cognitive skills to academics, careers, and general well-being.

1. Non-cognitive skills predict high school and college completion. 
2. Students with strong non-cognitive skills have greater academic achievement within K-12 schooling and college.
3. Fostering non-cognitive skills as early as preschool has both immediate and long-term impact.
4. Employers value non-cognitive skills and seek employees who have them.
5. Higher non-cognitive skills predict a greater likelihood of being employed.
6. Stronger non-cognitive skills in childhood predict higher adult earnings and greater financial stability.
7. Adults with stronger non-cognitive skills are less likely to commit a crime or be incarcerated.
8. Strong non-cognitive skills decrease the likelihood of single parenthood and unplanned teenage pregnancy.
9. The positive health effects associated with stronger non-cognitive skills include reduced mortality and lower rates of obesity, smoking, substance abuse, and mental health disorders.

Not only do you get a brief, readable elaboration of each point, you also get the backing citations.
 
My only quibble is that, were I the author of this report, I would have been a bit more cautious in drawing a causal conclusion about the evidence of success in fostering non-cognitive skills in preschool (conclusion #3 above). It is of course possible that self-control is largely heritable and changed little by the environment, so it’s important to establish that the positive outcomes associated with non-cognitive skills really can be promoted by practices in schools. The authors cite a 2014 report by Clancy Blair and Cybele Raver showing success, which is encouraging, but it is, according to Blair & Raver, the first experimental demonstration of its kind.
 
That said, I encourage you to download it, read it, and refer to it. It neatly sums up a complex and vital research literature.
]]>
<![CDATA[A Brief Reply to Tim Shanahan]]>Tue, 21 Jul 2015 19:42:23 GMThttp://www.danielwillingham.com/daniel-willingham-science-and-education-blog/a-brief-reply-to-tim-shanahanTim Shanahan recently posted a blog entry in which he evaluated a point I’ve made (see here), namely, that there is no evidence that students benefit from extended practice of reading comprehension strategies; there’s a large initial boost to comprehension, but no dosage effect. (He also folded in several points about strategy instruction that I didn’t address, e.g., that some strategies are better than others. I assume this was for the benefit of his readers.)

Tim and I disagree on why dosage effects are not observed. He suggests that the data just don’t exist to draw strong conclusions on this point. He might mean that there are very few studies that directly address the length of the intervention (i.e., that make it an experimental factor in the study). That’s why I cited meta-analyses. But I think he’s more focused on the fact that there have been few studies examining interventions that run many sessions.

I’d counter that the length of the intervention ought to have a huge effect. A dosage effect should be easy to observe, even with only a few studies, because we’re in the early part of the learning curve. Learning curves are negatively accelerated—the initial gains with practice are large, and then gains taper off.
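
To make that concrete, here is a toy power-law learning curve in Python (the parameters are my own illustration, not fitted to any of the studies under discussion). Early in practice, one extra session buys a lot; later, it buys almost nothing:

# Power-law learning curve: performance(n) = a * n^(-b).
# Illustrative parameters only, not fitted to real data.
a, b = 100.0, 0.5

def gain(n):
    """Improvement from practice session n to session n + 1."""
    return a * n ** (-b) - a * (n + 1) ** (-b)

print(round(gain(1), 1))   # session 1 -> 2: about 29.3
print(round(gain(10), 1))  # session 10 -> 11: about 1.5

If comprehension strategies behaved like a skill, interventions of different lengths would land on very different points of this steep early stretch, and a dosage effect would show up.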

Instead, as Tim notes, we see pretty sizable effects even after brief interventions. That, coupled with the lack of a dosage effect, is why I suggested that the mechanism of reading comprehension strategy instruction is not the improvement of the skill of comprehension. Skills don’t show a big improvement after brief instruction and then no further gains with continued practice.

Instead, strategy instruction is more like a meta-cognitive technique; it’s a way of organizing and controlling cognition during reading. (I also suggested that models of reading comprehension are more consistent with this interpretation, but Tim didn’t get into that.)

This is why I suggested that spending a lot of time on reading comprehension strategy instruction would be wasteful. I suggested two weeks would be enough. Tim suggests six, noting that my lowball estimate seems rash, given that we both agree that strategy instruction helps. I think that’s a fair point, and I’m happy to defer to Tim on the classroom particulars, about which he’s much more knowledgeable.

Tim also notes that, according to a study he’s conducting now of practices in nearly 1,000 classrooms, strategy instruction is not much observed. Large-scale studies of classroom reading practices are very scarce, so I'm delighted to hear this! I’m doubly delighted to learn that he’s not seeing an overemphasis on strategy instruction. My concerns (opportunity cost and a hit to reading motivation) have been based solely on conversations with teachers and administrators; it will be great to have reliable data regarding frequency.

]]>
<![CDATA[A neglected factor in picking a college]]>Fri, 24 Apr 2015 14:21:11 GMThttp://www.danielwillingham.com/daniel-willingham-science-and-education-blog/a-neglected-factor-in-picking-a-collegeWe are one week away from May 1, the National Candidates Reply Date. That’s the date by which college admission offers must be accepted or turned down. If your child is fortunate enough to have more than one offer of admission, what should go into this decision?

Often, the choice is easy. One school stands out as a favorite, or one is more economical, or one has a program of study that’s a good fit for the student’s interests, or perhaps a visit to one campus prompted a strong emotional attachment. But if a student has made it until April 24 without deciding, it’s possible he or she is torn, and may have overlooked one aspect of college life that could be worth factoring into the decision.

We are all of us more influenced by our social world than we like to think. We believe our actions are a product of our life experiences and personality. But the history of social psychology shows that the situation can make us more cruel, more helpful, more conformist, or more honest. When I taught at Williams College, I was struck by the fitness obsession among students; many students jogged, and those who didn’t jog engaged regularly in some other sort of exercise. I commented on this to a student once, and he said “Yeah, I never jogged before I got here.” I asked “What made you start?” He shrugged and said “Everybody jogs.”

How can you put that principle to work in a college decision?

It’s natural for a student to think that she should pick a college that’s a good fit for her personality and interests. People who are into fitness, the thinking goes, should attend Williams; those who aren’t will feel out of step.

I suggest the opposite is true. Think of the social environment as a support for your personality and interests. The student who doesn’t exercise should think of Williams as a place where it’s easy to start exercising. Similarly, if a student thinks of herself as social—no problem making friends, enjoys going to parties, etc.—then she doesn’t need much support from the college environment to ensure that that aspect of her life will thrive. But going to a school with a reputation for nerdiness might help ensure social support for a focus on academics. Likewise, the socially awkward, perpetual studier (Dan hesitantly raises his hand) might look closely at a school known to have a strong social life.

More generally, encourage the student to consider this: which aspects of your life are you satisfied with? As you contemplate entering college, what do you know you’ll make time for, and what do you feel confident about? Then, in contrast, which aspects of your life need work? Which ones give you trouble and leave you uncertain about how to improve? Select a college that you think provides a supportive environment for the aspects of your life you most want to improve.

This factor will not be decisive, but if the decision is coming down to the wire, you might include it in the mix.

]]>
<![CDATA[Computational Competence Doesn't Guarantee Conceptual Understanding in Math]]>Wed, 04 Mar 2015 16:37:17 GMThttp://www.danielwillingham.com/daniel-willingham-science-and-education-blog/computational-competence-doesnt-guarantee-conceptual-understanding-in-mathCommenters on the teaching of mathematics sometimes express impatience with the idea that attention ought to be paid to conceptual understanding in math education. I get it: it sounds fuzzy and potentially wrong-headed, as though we’re ready to overlook inaccurate calculation so long as the student seems to understand the concepts—and “understanding” sounds likely to be ascertained by little more than guesswork.

Impatience with the idea that conceptual aspects of math ought to be explicitly taught is often coupled with an assurance that, if you teach students to calculate accurately, the conceptual understanding will come. A new experiment provides evidence that this belief is not justified. People can be adept with calculation, yet have poor conceptual understanding.

Bob Siegler and Hugues Lortie-Forgues asked preservice teachers (Experiment 1) and middle school students (Experiment 2) to make quick judgments (true or false) of inequalities in this form:

N1/M1 + N2/M2 > N1/M1

N and M were two-digit numbers, making it hard to calculate the answer quickly in one’s head. Instead, you needed to evaluate what must be true. In this case, subjects easily recognized that the sum on the left side of the inequality had to be larger than the value on the right side. Likewise, they made few mistakes with inequalities of this form:

N1/M1 - N2/M2 > N1/M1

But when multiplication or division was called for, subjects made errors. Specifically, when N2/M2 amounted to less than one and was multiplied by N1/M1, subjects incorrectly thought the result would be larger than N1/M1. And when division by N2/M2 was required, subjects incorrectly thought the result would be smaller than N1/M1. In fact, they answered correctly less often than chance.
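
To see concretely why whole-number intuition fails here, here is a minimal spot-check using Python's fractions module (a sketch of my own with arbitrary values, not the study's actual stimuli):

from fractions import Fraction
import random

random.seed(1)

# Spot-check the four relationships with random two-digit fractions.
# b is forced below one, matching the critical multiplication/division items.
for _ in range(5):
    a = Fraction(random.randint(10, 99), random.randint(10, 99))
    b = Fraction(random.randint(10, 98), 99)  # guaranteed 0 < b < 1
    assert a + b > a   # adding a positive fraction enlarges
    assert a - b < a   # subtracting a positive fraction shrinks
    assert a * b < a   # multiplying by a fraction below one shrinks
    assert a / b > a   # dividing by a fraction below one enlarges

print("All four relationships hold.")

The intuition that multiplying always enlarges and dividing always shrinks gets the last two lines exactly backwards, which is the misconception the study documents.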

Yet these same subjects were quite accurate when asked to calculate answers to problems that entailed multiplication or division of fractions; middle-school students got about 80% correct. And they showed quite good understanding of the magnitude of fractions between 0 and 1 (as shown by placing marks on a number line to represent fraction quantities).

This is a small sample, and the absolute level of performance should not be taken as representative of preservice teachers or of middle-school students. But Siegler and Lortie-Forgues suggest that the disconnect between computation and understanding is typical. That conclusion is in line with the evaluation of the National Math Panel.


So what's to be done? Teach concepts. Among other ideas, Siegler and Lortie-Forgues suggest that, once students have some competence in calculation, they might compare the results of the following three products (worked out just below):

8/7 * 1/2

7/7 * 1/2

6/7 * 1/2
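
Working these out makes the pattern plain (the arithmetic here is my own elaboration of the exercise): 8/7 * 1/2 = 4/7, which is more than 1/2; 7/7 * 1/2 is exactly 1/2; and 6/7 * 1/2 = 3/7, which is less than 1/2. Multiplying by a fraction greater than one enlarges; multiplying by a fraction less than one shrinks.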

There are sure to be many methods of helping with conceptual understanding, some best introduced before calculation, some concurrent with it, and some after. This latest finding points to the necessity of greater attention to understanding in instruction. 

Siegler, R. S., & Lortie-Forgues, H. (in press). Conceptual knowledge of fraction arithmetic. Journal of Educational Psychology. http://dx.doi.org/10.1037/edu0000025

Edit, 4:11 p.m. 3/4/15: Corrected the spelling of the second author's last name. (Terribly embarrassed.) ]]>
<![CDATA[Five mini book reviews: ]]>Mon, 19 Jan 2015 12:33:56 GMThttp://www.danielwillingham.com/daniel-willingham-science-and-education-blog/five-mini-book-reviewsI’ve taken a break from blogging to work on other projects, but I have managed to do some reading. Here are some very brief notes on recent books.

It’s Complicated by danah boyd. Didn’t get it. Just didn’t get why so many people raved about this book. I felt like I had heard many of the book’s core ideas in water-cooler talk. For example, there’s a whole chapter devoted to the idea that online social networking sites lose their appeal to teens once adults populate them. The academic prose doesn’t make the insight any deeper. And a book like this, which uses mostly narrative and a selective, not systematic, look at statistics, needs deep insights to be interesting. You need to feel challenged by new ideas; otherwise the book feels like an apologia for a particular point of view. You can’t help but think “Well, Sherry Turkle and Nick Carr have a different take on that.”

How We Learn by Benedict Carey. I read this book in manuscript and provided a blurb for the back cover. I enthused, and in fact enjoyed it so much that I read it again about six weeks ago. Learning and memory was my specialization as an empirical researcher, and I keep up with the literature. Thus, almost none of the findings was a surprise to me, but Carey does such a terrific job of pulling it all together in a way that’s practical for everyday use that I wanted to reacquaint myself with his take, for my own sake and for when I advise students. It’s a great book.

Building a Better Teacher by Elizabeth Green. I really appreciate the central message of this book: “teaching is a skill that can be communicated and taught.” Sure, some people have abilities that seem to come naturally that others lack: they understand kids’ motivations, for example, or they seem to have a sixth sense about which ideas will be hard or easy for a given student to understand. But that doesn’t mean that you can’t teach people how to improve, and in fact, how to teach. Green’s book chronicles some of the success stories in this vein, including efforts by Doug Lemov and Deborah Loewenberg Ball. It makes for compelling reading. What’s missing for me is a more probing analysis of why these ideas are not more widely adopted. It’s not as though they are unknown; why don’t people use them? That question must be answered before these ideas (or whichever ones you think ought to be more widely adopted) will make their way into more teacher education programs.

The Opposite of Spoiled by Ron Lieber. How often does a book articulate a problem that you perceived but couldn’t say much about, and then solve the problem for you? That’s what this book did for me. I’m the parent Lieber targets in this book. I want my kids to have the values my wife and I share when it comes to money, but I don’t know how to impart them. Worse, when my seven-year-old asks questions about money (“How much do you make?”) I often don’t know what to say. Lieber offers specific advice that strikes you as perfect common sense…once he’s told you what to do or say. For example, he suggests that most money-related questions from younger kids—including “how much do you make?”—be answered the same way: with the question “why do you want to know?” Lieber’s point is that kids often ask questions for reasons other than the one adults assume they have in mind. “How much do you make?” may be an effort to figure out whether your family is comparable to others, or to get a ballpark idea of what a grown-up salary is, or any of a hundred other reasons. You won’t agree with every bit of advice the author doles out, but as he says at the outset, the point is to start a conversation. I loved this book.

From the Ivory Tower to the Schoolhouse by Jack Schneider takes on the question “why do some ideas from academia gain influence among educators whereas others do not?” As someone who seeks to identify and translate ideas from research to practice, this question strikes me as enormously significant, and if Schneider doesn’t provide an iron-clad case—it’s not obvious that’s even possible—he at least makes a good start at addressing the problem. Schneider names four key factors. For ideas to be influential, they must be compatible with teachers’ general philosophical orientation regarding childhood, they must seem of potential importance, there must be some hope of realistically acting on them in the classroom, and they must be transportable across contexts. Schneider offers case studies of four influential ideas (multiple intelligences, Bloom’s taxonomy, and two others) and compares them to ideas that seem very similar but that, he argues, lack one of the key features. This is an interesting read on a difficult problem.

]]>
<![CDATA[Why Americans Stink at Math]]>Thu, 25 Sep 2014 12:54:53 GMThttp://www.danielwillingham.com/daniel-willingham-science-and-education-blog/why-americans-stink-at-mathThis column originally appeared at RealClearEducation.com on July 29, 2014

Over the weekend the New York Times Magazine ran an article titled “Why Do Americans Stink at Math?” by Elizabeth Green. The article is as much an explanation of why it’s so hard not to stink as an explication of our problems. But in warning about the rough road of math improvement, I think the author may not have gone far enough.

The nub of her argument is this: Americans stink at math because the methods used to teach it are rote, don’t lead to transfer to the real world, and lead to shallow understanding. There are pedagogical methods that lead to much deeper understanding. U.S. researchers pioneered these methods, and Japanese student achievement took off when the Japanese educational system adopted them.

Green points to a particular pedagogical method as being vital to deeper understanding. Traditional classrooms are characterized by the phrase “I, we, you.” The teacher models a new mathematical procedure, the whole class practices it, and then individual students try it on their own. That’s the method that leads to rote, shallow knowledge. More desirable is “You, Y’all, We.” The teacher presents a problem which students try to solve on their own. Then they meet in small groups to compare and discuss the solutions they’ve devised. Finally, the groups share their ideas as a whole class.

Why don’t U.S. teachers use this method? In the U.S., initiatives to promote such methods are adopted every thirty years or so—New Math in the ’60s, the National Council of Teachers of Mathematics Standards in the late ’80s—but they never gain traction. (Green treats the Common Core as another effort to bring a different pedagogy to classrooms. It may be interpreted that way by some, but it’s a set of standards, not a pedagogical method or curriculum.)

Green says there are two main problems: lack of support for teachers, and the fact that teachers must understand math better to use these methods. I think both reasons are right, but there’s more to it than that.

For a teacher who has not used the “You, Y’all, We” method, it’s bound to be a radical departure from her experience. A few days of professional development is not remotely enough training, but that’s typical of what American school systems provide. As Green notes, Japanese teachers have significant time built into their week to observe one another teach and to confer.



Green’s also right when she points out that teaching mathematics in a way that leads to deep understanding in children requires that teachers themselves understand math deeply. As products of the American system, most don’t.

Green’s take is that if you hand down a mandate from on high (“teach this way”) with little training, and hand it to people with a shaky grasp of the foundations of math, the result is predictable: you get fuzzy crap in classrooms that’s probably worse than the mindless memorization that characterizes the worst of the “I, we, you” method.

But I think there are other factors that make improving math even tougher than Green says.

First, the “You, Y’all, We” method is much harder, and not just because you need to understand math more deeply. It’s more difficult because you must make more decisions during class, in the moment. When a group comes up with a solution that is on the wrong track, what do you do? Do you try to get the class to see where it went wrong right away, or do you let them continue and play out the consequences of their solution? Once you’ve decided that, what exactly will you say to nudge them in that direction?

As a college instructor I’ve always thought that it’s a hell of a lot easier to lecture than to lead a discussion. I can only imagine that leading a classroom of younger students is that much harder.

There are also significant cultural obstacles to American adoption of this method. Green notes that Japanese teachers engage in “lesson study” together, in which one teacher presents a lesson, and the others discuss it in detail. This is a key solution to the problem I mentioned: teachers learn how students commonly react during a particular lesson and discuss the best way to respond. That way, they are not deciding in the moment; they already know what to do.

The assumption is that teachers are finding, if not the one best way to get an idea across, then a damn good one. As Green notes, that often gets down to details such as which two-digit numbers to use for a particular example. An expectation goes with this method: that everyone will change their classroom practice according to the outcome of lesson study. This is a significant hit to teacher autonomy, and not one that American teachers are used to. It’s also noteworthy that there is no concept here of honoring or even considering differences among students. It’s assumed they will all do the same work at the same time.

The big picture Green offers is, I think, accurate (even if I might quibble with some details). Most students do not understand math well enough, and the Japanese have offered an example of one way to get there. As much as Green warns of the challenges in Americans broadly emulating this method, I think she may underestimate how hard it would be. It may be more productive to try to find some other technique to give students the math competence we aspire to.

]]>
<![CDATA[Can traditional public schools replicate charters?]]>Wed, 17 Sep 2014 15:32:58 GMThttp://www.danielwillingham.com/daniel-willingham-science-and-education-blog/can-traditional-public-schools-replicate-chartersThis piece was originally published at realcleareducation.com on July 24, 2014

Although the politics concerning charter schools remain contentious, most education observers agree that some charters have had real success in helping children from impoverished homes learn more. If you believe that’s true, a natural next step is to ask what those charters are doing and whether it could be replicated in other schools. A recent study tried to do that, and the results looked disappointing. But I think the authors passed over a telling result in the data.

The researcher is Roland Fryer, and the first study was published in 2011 with Will Dobbie. They analyzed successful charter schools on a number of dimensions, and concluded that some factors one might expect to be associated with student success were not: class size, per-pupil expenditures, and teacher qualifications, for example. They identified five factors that did seem to matter: frequent feedback to teachers, the use of data to drive instruction, high-dosage tutoring of students, increased instructional time, and high expectations.

Fryer (2014) sought to inject those five factors into some public schools in high-needs districts, starting with twenty schools in Texas. They increased the number of occasions for teacher feedback from 3 times each year to 30. Staff learned instructional techniques developed by Doug Lemov and Robert Marzano. They had parents sign contracts and students wear uniforms, along with other marks of a high-expectations school culture. Outcome measures of interest were school averages on state-wide tests.

So what happened? In math, it helped a little. The effect size was around 0.15. In English Language Arts, there was no effect at all.
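
(For scale, and this gloss is mine rather than Fryer's: an effect size of 0.15 means the average treatment school scored about 0.15 standard deviations above the control mean, placing it at roughly the 56th percentile of the control distribution.)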

Fryer tried the same thing in Denver (7 schools) and got identical results. In Chicago (29 schools) there was no effect in either math or reading.

Two questions arise. Why is the effect so small? And why the difference between math and reading? 

Fryer does not really take on the first question, I guess because there is an effect on math achievement. In the conclusion he claims “These results provide evidence suggesting that charter school best practices can be used systematically in previously low-performing traditional public schools to significantly increase student achievement in ways similar to the most achievement-increasing charter schools.” Whether or not the cost (about $1,800 per student) was worth the benefit is a judgment call, of course, but the benefit strikes me as modest.

Fryer does address the different impact of the intervention for reading and math. He speculates that it might be harder to move reading scores because many low-income kids hear and speak non-standard English at home. There’s some grounded speculation that hearing different dialects of English at home and at school may impact learning to read—see Seidenberg, 2013. I doubt non-standard English is decisive in fourth grade and up, and those were the students tested in this study.

My guess is that another factor is relevant to both the size of the math effect and the lack of effect in reading. Much of Fryer’s intervention is directed towards a seriousness about content. But actually getting serious about academic work was the factor that Fryer was least able to address. The paper says “In an ideal world, we would have lengthened the school day by two hours and used the additional time to provide tutoring in math and reading in every grade level.” But due to budget constraints they could tutor in only one grade and one subject per school. They chose 4th, 6th, and 9th grades, and they chose math. Non-tutored grades got a double dose of whatever subject students were most behind in, and teachers tried to ensure the double dose did not cut into other academic time. Thus, it may be that the researchers saw puny effects because they had to skimp on the most important factor: sustained engagement with challenging academic content.

This explanation is also relevant to the math/reading difference. In math, if you put a little extra time in, it’s at least obvious where that time should go. If kids are behind in mathematics, it’s not difficult to know what they need to work on.

Once kids reach upper elementary school, reading comprehension is driven primarily by background knowledge; knowing a bit about the topic of the text you’re reading confers a big advantage in comprehension. Kids from impoverished homes suffer primarily from a knowledge deficit (Hirsch, 2007).

So a bit of extra time, while better than nothing, is just a start at an attempt to build the knowledge needed for these students to make significant strides in reading comprehension. And in this particular intervention, no attempt was made to assess what knowledge was needed and to build it systematically.

This problem is not unique to Fryer’s intervention. As he notes, it’s always tougher to move the needle on reading than on math. That’s because experiences outside the classroom make such an enormous contribution to reading ability.

Thus, I find Fryer’s study perhaps more interesting than Fryer does. On the face of it, his intervention was a modest success: no improvement in reading, but at least a small bump in math. To me, this study was another in a long series showing the primacy of curriculum to achievement.

References

Dobbie, W., & Fryer Jr, R. G. (2011). Getting beneath the veil of effective schools: Evidence from New York City (No. w17632). National Bureau of Economic Research.

Fryer, R. G. (2014). Injecting Charter School Best Practices into Traditional Public Schools: Evidence from Field Experiments. The Quarterly Journal of Economics, doi: 10.1093/qje/qju011

Hirsch, E. D. (2007). The knowledge deficit: Closing the shocking education gap for American children. Houghton Mifflin Harcourt.

Seidenberg, M. S. (2013). The science of reading and its educational implications. Language Learning and Development, 9(4), 331-360.

]]>
<![CDATA[Tenure lessons from higher ed]]>Wed, 10 Sep 2014 09:24:35 GMThttp://www.danielwillingham.com/daniel-willingham-science-and-education-blog/tenure-lessons-from-higher-ed This article was originally published at RealClearEducation.com on July 15, 2014

Teacher tenure laws were adopted by most states during the first half of the 20th century. To advocates, tenure provides a guarantee of due process should a teacher be dismissed, and thus offers protection from capricious firings and personal vendettas. To critics, tenure is granted too readily to teachers of marginal skill, and the “due process” is so arduous, time-consuming, and expensive that it constitutes a de facto job guarantee. Thus critics see tenure as a primary reason that poor teachers stay in the profession. Which interpretation is closer to the truth? It’s been very hard to say. Tenure laws have been in place for so long that we haven’t had a counterfactual; guesses about the impact on the teacher labor force of a change in tenure procedures have been just that—guesses. Now, we’re starting to have some data on the matter (Loeb, Miller & Wyckoff, 2014).

New York City’s Department of Education changed its procedure for granting tenure in the 2009-2010 school year. Some features of the old system were retained. As before, teachers were evaluated at the end of their second year, based on the results of classroom observation, evaluations of teacher work (e.g., lesson plans), and an annual rating sheet completed by principals.

Starting in the 2009-2010 year, new information was available about student progress (including value-added measures calculated from state tests). Another new wrinkle was that principals were required to write a justification for their decision if the superintendent was likely to draw a different conclusion about a teacher’s case.

In the following years, some small changes were made, the most interesting of which was the addition of data about teacher effectiveness based on surveys of students and parents, and feedback from colleagues.

So did these changes affect tenure decisions?

There was a sizable impact: not in teacher dismissals at tenure decision time, but in extending the time for the tenure decision. In the two years prior to the new system, about 94% of teachers got tenure, with 2 or 3% being terminated. For about the same number, principals elected to delay the decision for a year, so as to have more time and data with which to evaluate the teacher.

As shown on the graph, under the new system, the number of teachers denied tenure remains very small, but there has been a huge increase in the number of teachers for whom the decision was delayed.



Most interesting was the response of teachers whose decision was delayed: more of them transferred to a new school or exited the profession altogether.



The probability of a teacher transferring schools is 9 percentage points higher if the decision was extended than if tenure was approved. The probability of exiting the profession is 4 percentage points higher.

So on the one hand, the changes to tenure review, which were meant to make the process more rigorous, are not prompting principals to deny tenure any more frequently. On the other hand, principals are making much greater use of the option to delay the decision for a year. That, in turn, is having some impact on the workforce. A straightforward interpretation is that teachers rightly interpret the delayed decision as a sign that things are not going as well as they might, and some teachers figure that’s because the school they are in is a bad fit (and so transfer) or that the profession is just not for them (and so they exit).

What are we to make of the fact that the more rigorous criteria did not lead to more recommendations that teachers be fired? Although it’s possible that’s a sign of principals being reluctant to fire teachers, I doubt it. I think it’s more likely that the large number of delayed decisions reflects the belief that two years is just too early to tell. Certainly we know that teachers are still on the steep part of their learning curve at that point. They are improving, and how much more they will improve is tough to know.

It’s always tempting to think that one’s own training was optimal, but I do think higher education has a more sensible approach to tenure, simply because we take longer to make the decision. At most universities, the tenure decision for professors is made in the sixth year (based on the first five years’ worth of performance data). There is a less rigorous review at the end of the third year. That review provides useful information, letting the candidate know where he or she stands and what needs to be improved in the next few years. It also gives the university a chance to fire someone if things are going really poorly.

Whether tenure makes sense at all today, and the possible consequences of eliminating it, are viable questions, but ones I’m not tackling here. But if you’re going to continue offering tenure, it’s a decision that ought to be made on more data than can be gleaned from the first two years.

Reference:

Loeb, S., Miller, L. C., & Wyckoff, J. (2014, May). Performance screens for school improvement: The case of teacher tenure reform in New York City. Retrieved July 13, 2014, from http://cepa.stanford.edu/sites/default/files/NYCTenure%20brief%20FINAL.pdf

]]>
<![CDATA["No screen time" study doesn't allow firm conclusion.]]>Mon, 01 Sep 2014 15:23:17 GMThttp://www.danielwillingham.com/daniel-willingham-science-and-education-blog/no-screen-time-study-doesnt-allow-firm-conclusionNPR, the Daily Mail, and other outlets are trumpeting the results of a study published  in Computers and Human Behavior: The spin is that digital devices leave kids emotionally stunted. But that conclusion is not supported by the study which is, in fact, pretty poorly designed.

Researchers examined kids' ability to assess non-verbal emotion cues from still photos and from video scenes from which dialog had been removed. These assessments were made pre- and post-intervention.

The intervention is where things get weird. The press has it that the main intervention was the removal of electronic devices from children's lives for five days. In fact, the experimental group went to a sort of educational nature camp called the Pali Institute. While control subjects went to their regular school, experimental subjects participated in a full slate of camp activities.

This study could almost serve as a test question in an undergraduate research methods course. In the results section, the authors conclude "We found that children who were away from screens for five days with many opportunities for in-person interaction improved significantly in reading facial emotion." As should be obvious from the camp's slate of activities, there were a host of differences between what the experimental kids and the control kids experienced.

In the discussion the authors do allow "We recognize that the design of this study makes it challenging to tease out the separate effects of the group experience, the nature experience, and the withdrawal of screen-time," but then go on to say "it is likely that the augmentation of in-person communication necessitated by the absence of digital communication significantly contributed to the observed experimental effect." That's a mere wish. We in fact cannot draw any conclusions about the source of the effect.

It's a shame that news outlets are not more discriminating in how they report this sort of work. 
]]>
<![CDATA[Draft bill of research rights for educators]]>Wed, 20 Aug 2014 14:37:35 GMThttp://www.danielwillingham.com/daniel-willingham-science-and-education-blog/draft-bill-of-research-rights-for-educatorsThis column originally appeared on RealClearEducation.com on July 10, 2014.

When I talk to educators about research, their most common complaint (by a long shot) is that they are asked to implement new interventions (a curriculum, a pedagogical technique, a software product, whatever), and are offered no reason to do so other than a breezy “all the research supports it.” The phrase is used as a blunt instrument to silence questions. As a scientist I find this infuriating because it abuses what ought to be a serious claim—research backs this—and in so doing devalues research. It’s an ongoing problem (see Jessica & Tim Lahey’s treatment here) that’s long concerned me.

In fact, the phrase “research supports it” invites questions. It implies that we can, in a small way, predict the future. It claims “if we do X, Y will happen.” If I take this medication, my ear infection will go away. If we adopt this new curriculum, kids will be more successful in learning math. Saying “research supports it” implies that you know not only what the intervention is, but you have at least a rough idea of what outcome you expect, the likelihood that it will happen, and when it will happen.

I offer the following list of rights for educators who are asked to change what they are doing in the name of research, whether it’s a mandate handed down from administrator to teacher or from lawmaker to administrator.

1. The right to know what is supposed to improve. What problem is being solved? For example, when I’ve been to schools or districts implementing a one-to-one tablet/laptop policy, I’ve always asked what it’s meant to do. The modal response is a blank look followed by the phrase “we don’t want our kids left behind.” Behind in what? In what way are kids elsewhere with devices zooming ahead?

2. The right to know the means by which improvement will be measured. How will we know things are getting better? If you’re trying to improve students’ understanding of math, for example, are you confident that you have a metric that captures that construct? Are you sure scores on that metric will be comparable in the future to those you’re looking at now? How big an increase will be deemed a success?

3. The right to know the approximate time by which this improvement is expected. A commitment to an intervention shouldn’t be open-ended. At some point we must evaluate how it’s going.

4. The right to know what will be done if the goal is or is not met. Naturally, conditions may change, but let’s have a plan. If we don’t meet our target, will we quit? Keep trying for a while? Tweak it?

5. The right to know what evidence exists that the intervention will work as expected. Is the evidence from actual classrooms or is it laboratory science (plus some guesswork)? If classrooms, were they like ours? In how many classrooms was it tried?

6. The right to have your experience and expertise acknowledged. If the intervention sounds to you and your colleagues like it cannot work, this issue should be addressed in detail, not waved away with the phrase “all the research supports it.” The fact that it sounds fishy to experienced people doesn’t mean it can’t work, but whoever is pitching it should have a deep enough understanding of the mechanisms behind the intervention to be able to say why it sounds fishy, and why that’s not a problem.

This list is not meant to dictate criteria that must be met before an intervention should be tried, but rather what information ought to be on the table. In other words, the information provided in each category need not unequivocally support the intervention for it to be legitimate. For example, I can imagine an administrator admitting that the research support for an intervention is absent, yet mounting a case for why it should be tried anyway.

This list should also be considered a work in progress. I invite your additions or emendations.

]]>