Daniel Willingham--Science & Education
Hypothesis non fingo

Why you shouldn't hire like Google

2/23/2014

 
In today's New York Times Thomas Friedman reports on purported hiring practices at Google, as represented by Laszlo Bock, a senior vice president there.

Bock is an amateur psychometrician. He maintains that "GPA's are worthless as a criteria for hiring, and test scores are worthless."  Rather, they are looking for "general cognitive ability, and it's not IQ. It's learning ability."

Bock is similarly unimpressed by "expertise." According to Bock, someone with high cognitive ability will come up with the same answer as the person with expertise anyway.


They also value "emergent leadership," which means what it sounds like, and "humility and ownership," which sounds like being a responsible employee: shouldering blame when blame is yours, and trying to learn from your failures.

It's unlikely that everything Bock says is true, and even if it were, it would probably not work well in organizations other than Google.

  • Decades of research show that job performance in many careers is pretty well predicted by standard IQ tests.
  • "Learning to learn" is nebulous because it's domain-specific, and it's domain-specific because the ability to learn new things depends on what you already know.
  • "Emergent leadership" and "humility and ownership" are qualities many organizations prize and would dearly love to reliably predict at hiring time. Maybe Bock has something to teach them about this. I kinda doubt it, but you never know.
  • The idea that smart people can pretty well figure anything out without expertise? Even though IQ (not "learning to learn") predicts job performance, experience still matters.

Friedman adds the critical caveat in the last paragraph:
Google attracts so much talent it can afford to look beyond traditional metrics, like G.P.A.

Yes. It reminds me of a conversation I had with a Harvard admissions officer who told me "Look, we could fill the freshman class with students who got 800,800 on the SAT. Literally. Every single freshman, 800,800. We're just not interested in doing that."

That doesn't mean that the SAT was irrelevant; you didn't meet many Harvard students with crummy SAT scores. It means that once you're in the 750 range, Harvard figured you're damn smart, and whatever "edge" might be represented in the difference between 750 and 800 didn't matter much. They started to look at other qualities. Harvard admissions officers (at least as represented by my friend) were also quite serious about how they tried to assess those other qualities, and quite humble about their ability to do so.

Likewise, Google is, I'm willing to guess, selecting from tremendously capable people--capable as defined in standard ways--so it makes perfect sense that further selection is based on other qualities. It doesn't mean that standard metrics are rendered irrelevant.

Friedman is right when (in the last paragraph) he offers this advice:
For most young people, though, going to college and doing well is still the best way to master the tools needed for many careers. I would add that I doubt Google's offbeat hiring practices would work in most places.

Kristof on marginalized professors--he's partly right

2/16/2014

 
Today in the New York Times Nick Kristof writes that university professors have “marginalized themselves.” We have done so, he suggests, by concerning ourselves with very specialized topics that are removed from practical realities. (Old joke: the secret to academic success is to dig an intellectual trench so narrow and so deep that there is only room for one.)

The second part of the problem, Kristof suggests, is that academics write inaccessible prose, isolating their knowledge from others. He approvingly quotes Harvard historian Jill Lepore, who says academics have created “a great, heaping mountain of exquisite knowledge surrounded by a vast moat of dreadful prose.”

Kristof’s solution is that academics do more writing for the public, on topics of practical concern. To make this change possible, universities would need to change the systems by which they evaluate faculty for promotion.

I think he’s partly right.

Kristof did not distinguish between faculty in Arts & Sciences and those in professional schools such as law, medicine, education, and engineering. The latter have practical application embedded in their missions, and I think they are therefore more vulnerable to his charges.

I started writing about the application of cognitive science for teachers exactly because I thought that too many teachers were not learning this information in their training at schools of education.

But for typical Arts & Sciences faculty, application is not part of the mission. I think two factors render impractical Kristof’s suggestions that it be part of the mission. I’m a scientist, so I’ll write from that perspective, and won’t claim that the following applies to the humanities.

Universities and the professors they employ are best seen as part of a larger system that includes government and private industry. The seminal document envisioning that system was written by Vannevar Bush in 1945. Bush was the director of the Office of Scientific Research and Development during World War II, through which virtually all of the scientific research for the war effort was funneled.

It was plain to all that science had played a lead role in the war. The Federal government had funded scientific research at an unprecedented scale, but what was the government role to be in the coming peace? President Roosevelt asked Bush to write a report on the matter.

Bush argued that research can be either basic (“pure science,” which boils down to describing the world as it is) or applied (research in service of some practical goal). He argued for two points: First, basic research lies behind the success of much practical research; the Manhattan Project, for example, was a grand practical application made possible by advances in basic physics. Second, applied research would inevitably crowd out basic research for funding because it offers short-term gains.

Bush concluded that the Federal Government should continue its funding of science in peacetime, and that it should focus on basic research. Industry could fund research and development for application, and it was reasonable to expect that industry would do so. The federal investment was justifiable via the pay-off in economic productivity. That’s how the National Science Foundation was born.

Basic research has been housed primarily in the university system. That’s our role in the system. We’re not really here to work on applied problems. If a drug company wants to know the latest findings from molecular biology, they should hire a molecular biologist who will do the translation.

This arrangement actually makes a lot more sense than having academics try to do the translation themselves.

Translation is more than explaining technical matters in everyday terms. It requires knowing how to exploit the technical findings in a way that serves the practical goal. For example, in education you can’t just take findings from cognitive science and pop them into the classroom, expecting kids will learn better. You need to know something about classrooms to understand how the application might work.

It makes more sense for the translators to be close to the site of the application because application can take so many forms. Cognitive science has applications throughout industry, the military, health care, education, and beyond. You really need to be embedded in the locale to understand the problem that the basic science is meant to solve.

So that’s why so many academics, when asked why they don’t make their work more accessible to the general public, say “that’s not my job.” We might add (and this is relevant to Kristof’s second point) that most of us are not very good at describing what we do in non-technical terms. It's a different skill set. Adding “writes well” to the criteria for promotion won’t get much traction among scientists. (The technical language that comes with any specialization adds to the problem, of course.)

But again, I think Kristof’s blade is much sharper when applied to university schools that claim a mission which includes practical application. Schools of Ed., I’m looking at you.

Single-sex schooling--tiny effects, if any

2/10/2014

 
The idea that students would learn better in single-sex classrooms seems logical. The typical arguments include
  • Boys find girls more distracting in class than they find other boys. Likewise, girls find boys more distracting.
  • Sex differences in math and science achievement are a product of social influence. Those influences will be reduced or eliminated if girls are in classrooms only with girls.
  • Boys dominate classroom discussion, and so girls are denied practice in articulating and defending their views.
  • Boys and girls have different brains, and therefore learn differently. If they are taught separately, teachers can tune their instruction to the way each sex learns.
The last of these is frequently overwrought and over-interpreted, but generally, these reasons seem plausible. But that obviously doesn't prove that single-sex education confers any advantage on students.

A 2005 report written for the Department of Education (Mael et al, 2005) reported mixed effects, but generally a positive conclusion for single-sex classrooms in short-run academic outcomes. There was no indication of a boost to longer-term outcomes.


A new study (Pahlke, Hyde, & Allison, 2014) reports a meta-analysis of 184 studies representing 1.6 million students in K-12 across 21 nations. The authors place considerable emphasis on the problem of control in this research. They end up concluding that, with proper controls, analyses show that single-sex classrooms don't help students much.

The challenge in this sort of work is that comparisons of single-sex and coed classrooms often do not use random assignment. Students (or parents) choose a single-sex classroom. So for this review, the authors distinguished between controlled studies (the original study either used random assignment or made some attempt to measure and statistically account for potentially confounding variables) and uncontrolled studies.
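
To make the control problem concrete, here is a minimal simulation sketch in Python (my own illustration, with made-up numbers; nothing here comes from the meta-analysis). The true effect of single-sex classrooms is set to zero, but students with higher prior achievement are more likely to choose them, so a naive comparison of group means shows a spurious advantage. Adjusting for prior achievement, one form of the statistical control described above, removes it.

```python
# Illustrative simulation only: the true single-sex "effect" is zero, but
# self-selection on prior achievement makes an uncontrolled comparison look
# like an advantage.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
prior = rng.normal(0.0, 1.0, n)            # prior achievement, in z-score units
p_choose = 1.0 / (1.0 + np.exp(-prior))    # higher prior -> more likely to enroll
single_sex = rng.random(n) < p_choose      # self-selection, not random assignment
score = 0.6 * prior + rng.normal(0.0, 1.0, n)   # outcome depends only on prior + noise

naive = score[single_sex].mean() - score[~single_sex].mean()

# OLS of score on an intercept, the enrollment indicator, and prior achievement;
# the coefficient on the indicator is the covariate-adjusted estimate.
X = np.column_stack([np.ones(n), single_sex.astype(float), prior])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

print(f"naive difference in means:   {naive:+.3f}")    # clearly positive
print(f"covariate-adjusted estimate: {beta[1]:+.3f}")  # close to zero
```

Random assignment avoids the problem at the design stage by breaking the link between prior achievement and enrollment; statistical control tries to repair it after the fact.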

In controlled studies, there were statistically reliable, but numerically quite modest, positive effects of single-sex classrooms for both boys and girls in mathematics achievement, science achievement, and verbal achievement (Hedges' g in all cases less than 0.10). Girls showed an edge in single-sex classes for math attitude, science achievement, and overall academic achievement, but again, the gains were modest. If one restricts the analysis to U.S. students, virtually all of these small effects disappear.
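
For readers who don't work with effect sizes: Hedges' g expresses a group difference in pooled standard-deviation units. Here is a minimal sketch of the computation and of how little a g of 0.10 buys you; the test scores and sample sizes are hypothetical, not values from the meta-analysis.

```python
# Minimal sketch of Hedges' g with hypothetical numbers (not study data).
from math import sqrt, erf

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    s_pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled                      # Cohen's d
    j = 1.0 - 3.0 / (4.0 * (n1 + n2 - 2) - 1.0)   # small-sample correction factor
    return j * d

# Hypothetical study: single-sex classes average 501, coed classes 491, on a
# test with SD 100 and 400 students per group.
g = hedges_g(501, 100, 400, 491, 100, 400)
print(f"Hedges' g = {g:.2f}")                     # about 0.10

# Under normality, g = 0.10 puts the average student in the higher-scoring group
# at roughly the 54th percentile of the other group's distribution.
percentile = 0.5 * (1.0 + erf(g / sqrt(2.0)))
print(f"about the {100 * percentile:.0f}th percentile")
```

Effects of that size move the average student only a few percentile points; hence "quite modest."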

There was no effect for attitudes towards school, gender stereotyping, educational aspirations, self-concept, interpersonal relationships, or body image.

There were not enough controlled studies to examine aggression, body image, interpersonal relations, interest in STEM careers, science attitudes, or victimization.

It's also notable that there was no dosage effect: the advantage was no larger when all classes within a school were single-sex than when only a single class was.

The authors were also interested in evaluating whether single-sex classes were effective for boys of color. They reported that there were not enough controlled studies to answer this question, but even restricting the analysis to uncontrolled studies, the effects were minimal.

When you consider the factors that we know contribute substantially to academic attitudes and performance--the student's prior academic achievement, the curriculum, the home environment, the teacher's skill--it's easy to believe that the sex of the other students would have a modest effect, if any.

That said, it could be that a single-sex school has a profound influence on a few students. A few years ago, friends of mine moved their 15-year-old daughter to an all-girls school because she was "boy crazy." According to my friends, she didn't become any less interested in boys, but she did focus on work better during school hours. But then again, it's possible my friends were kidding themselves.

References

Mael, F., Alonso, A., Gibson, D., Rogers, K., & Smith, M. (2005). Single-sex versus coeducational schooling: A systematic review. Washington, DC: American Institutes for Research.

Pahlke, E., Hyde, J. S., & Allison, C. M. (2014). The effects of single-sex compared with coeducational schooling on students' performance and attitudes: A meta-analysis. Psychological Bulletin. Advance online publication. http://dx.doi.org/10.1037/a0035740

PreK research districts should know

2/3/2014

 
Last week Dave Grissmer and I published an op-ed on universal pre-k. We didn’t take it as controversial that government support for pre-K access is a good idea. As Gail Collins noted, when President Obama mentioned early education in his State of the Union address, it was one of the few times John Boehner clapped. Even better, there are good data indicating that, on average, state programs help kids get ready to learn math and to read in Kindergarten (e.g., Gormley et al, 2005; Magnuson et al, 2007).

Dave and I pointed out that the means do show gains, but state programs vary in their effectiveness. It’s not the case that any old preschool is worth doing, and that’s why everyone always says that preschool must be “high quality.” But exactly how to ensure high quality is not so obvious.

One suggestion we made was to capitalize on what is already known. The Department of Education has funded preK research for decades. Dave and I merely claimed that it had yielded useful information. Let me give an example here of the sort of thing we had in mind.

A recent study (Weiland & Yoshikawa, 2013) reported research that was notable in this respect: important decisions and procedures concerning the program were made by the people, and in the way, that such decisions will likely be made as state preK programs expand or are initiated. The district was Boston Public Schools, which offers preK to any age-eligible child—there is no restriction based on income. The district:

1. picked the curriculum.
2. figured out how to implement the curriculum at scale without any input from its developers.
3. developed its own coaching program for teachers, meant to ensure that the curricula were implemented effectively.

The second and third points are especially important, as the greatest challenge in education research has been bringing what look like useful ideas to scale.  It’s not certain why that’s so, but one good guess is that as you scale up, the people actually implementing the curriculum have little or no contact with the person who developed it. So it’s harder to tell exactly how it’s supposed to go.

Naturally, schools and classrooms will want to tweak the program here and there to make it a better fit for their school or classroom. They will use their judgment as to which changes won’t affect the overall integrity of the program, but the voice of the developer of the curriculum is probably important in this conversation.

Boston Public Schools picked Opening the World of Learning for their prereading and language program; there were few data on the program, and they were somewhat mixed. For mathematics, they picked Building Blocks, which had both more research behind it and a stronger track record of success.

Weiland and Yoshikawa measured the progress of 2,018 children in 238 classrooms during the 2008/09 school year. They found moderate to large gains in language, pre-reading, and math skills. There was even a small effect in executive function skills, although the two curricula did not target these directly. Interestingly (and in contrast to other findings) they found no interaction with household income; poor and wealthy children showed the same benefit. There were some interactions with ethnicity: children from Hispanic homes showed larger benefits than others on some measures.

There are questions that could be raised. The comparison children were those who had just missed the age cut-off to attend the preschool. So those children are, obviously, younger, and might be expected to show less development during those nine months than older children. Another objection concerns what those control kids were doing during the year. The researchers did have data on this question, and reported that many were in settings that typically do not offer much opportunity for cognitive growth, e.g., center-based care (although the researchers argued that Massachusetts imposes stricter quality regulations on such settings than most states do).
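
For what it's worth, the standard way to handle an age cut-off like this is to compare only children close to the cut-off and to adjust for age directly. The sketch below is my own hypothetical illustration of that idea, with simulated numbers; it is not the authors' analysis or their data.

```python
# Hypothetical illustration of adjusting for age around an enrollment cut-off.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
age_months = rng.uniform(48.0, 72.0, n)     # made-up ages, in months
cutoff = 60.0
attended = age_months >= cutoff             # made the cut-off -> attended preK
# Made-up outcome: improves with age, plus a bump of 3 points for attending.
outcome = 0.5 * age_months + 3.0 * attended + rng.normal(0.0, 5.0, n)

# Keep children within six months of the cut-off; regress the outcome on age
# (centered at the cut-off) and the attendance indicator, so the estimate is
# not simply the older-versus-younger difference.
near = np.abs(age_months - cutoff) <= 6.0
X = np.column_stack([np.ones(near.sum()),
                     attended[near].astype(float),
                     age_months[near] - cutoff])
beta, *_ = np.linalg.lstsq(X, outcome[near], rcond=None)
print(f"age-adjusted preK estimate: {beta[1]:.2f}")   # recovers the bump of about 3
```

The point is only that the older-versus-younger objection can be addressed analytically; how convincing the adjustment is depends on the assumptions behind it.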

Despite these caveats, this study represents the kind of thing Dave and I had in mind when we said the Department of Education should make communicating research findings to states a priority. Boston faced exactly the problem that many districts will face; they solved it using their own limited resources, as districts will have to; and by all appearances, it's been a success.

References:

Gormley, W. T., Gayer, T., Phillips, D., & Dawson, B. (2005). The effects of universal pre-K on cognitive development. Developmental Psychology, 41, 872–884.

Magnuson, K., Ruhm, C., & Waldfogel, J. (2007). Does prekindergarten improve school preparation and performance? Economics of Education Review, 26, 33–51.

Weiland, C. & Yoshikawa, H. (2013). Impacts of a prekindergarten program on children’s mathematics, language, literacy, executive function, and emotional skills. Child Development, 84, 2112-2130.
