Daniel Willingham--Science & Education
Hypothesis non fingo

The Gates Foundation's "engagement bracelets"

6/26/2012

 
It's not often that an initiative prompts grave concern in some and ridicule in others. The Gates Foundation managed it.

The Foundation has funded a couple of projects to investigate the feasibility of developing a passive measure of student engagement, using galvanic skin response (GSR).

The ridicule comes from an assumption that it won't work.

GSR basically measures how sweaty you are. Two leads are placed on the skin. One emits a very mild electrical charge; the other measures it. The more sweat on your skin, the better your skin conducts, and the stronger the signal the second lead picks up.
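In circuit terms, the arithmetic is just Ohm's law. Here is a minimal sketch of how a raw reading becomes a conductance value; the applied voltage and the current readings are invented for illustration, not taken from any real device.

```python
# Minimal sketch of turning a GSR reading into skin conductance.
# The applied voltage and the current readings are invented.

APPLIED_VOLTAGE = 0.5  # volts: the "very mild charge" from the first lead

def skin_conductance_microsiemens(measured_current_amps):
    """Ohm's law: conductance = current / voltage.

    Sweatier skin conducts better, so more current reaches the second
    lead and the conductance value goes up.
    """
    return (measured_current_amps / APPLIED_VOLTAGE) * 1e6  # siemens -> microsiemens

# A hypothetical series of current readings (in amps) over a few seconds:
for amps in [2.0e-6, 2.1e-6, 3.5e-6, 3.4e-6]:
    print(f"{skin_conductance_microsiemens(amps):.1f} microsiemens")
```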

Who cares how sweaty your skin is?

Sweat--as well as heart rate, respiration rate, and a host of other physiological signs controlled by the peripheral nervous system--varies with your emotional state.

Can you tell whether a student is paying attention from these data? 

It's at least plausible that it could be made to work. There has long been controversy over how separable different emotional states are on the basis of these sorts of metrics. It strikes me as a tough problem, and we're clearly not there yet, but the idea is far from kooky. Indeed, the people who have been arguing it's possible have been making some progress--this lab group says they've successfully distinguished engagement, relaxation, and stress. (Admittedly, they gathered a lot more data than just GSR, and one measure they collected was EEG, a measure of the central, not peripheral, nervous system.)

The grave concern springs from the possible use to which the device would be put.

A Gates Foundation spokeswoman says the plan is that a teacher would be able to tell, in real time, whether students are paying attention in class. (Earlier the Foundation website indicated that the grant was part of a program meant to evaluate teachers, but that was apparently an error.)

Some have objected that such measurement would be insulting to teachers. After all, can't teachers tell when their students are engaged, or bored, or frustrated, etc.?

I'm sure some can, but not all of them. And it's a good bet that beginning teachers can't make these judgments as accurately as their more experienced colleagues--and beginners are just the ones who need this feedback. Presumably the information provided by the system would be redundant for teachers who can already read engagement in their students' faces and body language, and those teachers would simply ignore it.

I would hope that classroom use would be optional--GSR bracelets would enter classrooms only if teachers requested them.

Of greater concern to me are the rights of the students. Passive reading of physiological data without consent feels like an invasion of privacy. Parental consent ought to be obligatory. Then too, what about HIPAA? What is the procedure if a system that measures heartbeat detects an irregularity?

These two concerns--the effect on teachers and the effect on students--strike me as serious, and people with more experience than I have in ethics and in the law will need to think them through with great care.

But I still think the project is a terrific idea, for two reasons, neither of which has received much attention in all the uproar.

First, even if the devices were never used in classrooms, researchers could put them to good use.

I sat in on a meeting a few years ago of researchers considering a grant submission (not to the Gates Foundation) on this precise idea--using peripheral nervous system data as an online measure of engagement. (The science involved here is not really in my area of expertise, and I had no idea why I was asked to be at the meeting, but that seems to be true of about two-thirds of the meetings I attend.) Our thought was that the device would be used by researchers, not teachers and administrators.

Researchers would love a good measure of engagement because the proponents of new materials or methods so often claim "increased engagement" as a benefit. But how are researchers supposed to know whether or not the claim is true? Teacher or student judgements of engagement are subject to memory loss and to well-known biases.

In addition, I see potentially great value for parents and teachers of kids with disabilities. For example, have a look at these two pictures.
[Two photos of Esprit]
This is my daughter Esprit. She's 9 years old, and she has Edwards syndrome. As a consequence, she has a host of cognitive and physical challenges: for example, she cannot speak, and she has limited motor control and poor muscle tone (she can't sit up unaided).

Esprit can never tell me that she's engaged, either with words or signs. But I'm comfortable concluding that she is engaged at moments like the one captured in the top photo--she's turning the book over in her hands and staring at it intently.

In the photo at the bottom, even I, her dad, am unsure of what's on her mind. (She looks sleepy, but isn't--ptosis, or drooping upper eyelids, is part of the profile.) If Esprit wore this expression while gazing toward a video, for example, I wouldn't be sure whether she was engaged by the video or was spacing out.

Are there moments when I would slap a bracelet on her if I thought it could measure whether or not she was engaged?

You bet your sweet bippy there are. 

I'm not the first to think of using physiological data to measure engagement in people with disabilities that make it hard for them to make their interests known. In this article, researchers sought to reduce the communication barriers that exclude children with disabilities from social activities; the kids might be present, but because of their difficulty describing or showing their thoughts, they cannot fully participate in the group. The researchers reported some success in distinguishing engaged from disengaged states of mind from measures of blood volume pulse, GSR, skin temperature, and respiration in nine young adults with muscular dystrophy or cerebral palsy.
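To make the idea concrete, here is a sketch of how such a classification might work. Everything in it is invented for illustration--the feature values, the labels, and the model choice; the study described above used its own, surely more sophisticated, methods.

```python
# Sketch of classifying engaged vs. disengaged moments from peripheral
# physiological features. All numbers are invented for illustration;
# none of this is data from the study described above.
from sklearn.linear_model import LogisticRegression

# Each row: [GSR (microsiemens), heart rate (bpm), skin temp (C), respiration (breaths/min)]
X_train = [
    [4.2, 82, 33.1, 16],  # moment labeled "engaged"
    [4.8, 85, 33.0, 17],  # moment labeled "engaged"
    [2.1, 68, 34.2, 12],  # moment labeled "disengaged"
    [1.9, 65, 34.5, 11],  # moment labeled "disengaged"
]
y_train = [1, 1, 0, 0]    # 1 = engaged, 0 = disengaged

model = LogisticRegression().fit(X_train, y_train)

# Classify a new moment of physiological data:
new_moment = [[3.9, 80, 33.3, 15]]
print("engaged" if model.predict(new_moment)[0] == 1 else "disengaged")
```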

I respect the concerns of those who see the potential for abuse in the passive measurement of physiological data. At the same time, I see the potential for real benefit in such a system, wisely deployed.

When we see the potential for abuse, let's quash that possibility, but let's not let it blind us to the possibility of the good that might be done.

And finally, because Esprit didn't look very cute in the pictures above, I end with this picture.

[Photo of Esprit]

A Grade for UVa's Board of Visitors

6/21/2012

 
The President of the University of Virginia was forced to resign on June 10. The UVa student newspaper, the Cavalier Daily, used a Freedom of Information request to obtain emails between the Rector (i.e., chair) and Vice Rector of the University's Board of Visitors. The emails from Rector Helen Dragas and Vice Rector Mark Kington expressed great concern that a digital revolution was mounting in higher education and that UVa might be left behind. Their perception that President Teresa Sullivan was not moving aggressively enough into digital learning was apparently important in their decision to oust her.

Here I grade Ms. Dragas and Mr. Kington on their project.

Dear Ms. Dragas & Mr. Kington:

I'm writing to let you know your grade for the Digital Learning Project, as part of your larger grade as Rector and Vice Rector. I wish I brought better news.

On the bright side, let me compliment you on your font choice and the formatting of your emails. Further, they feature some unusual words, and a spirit of verve throughout.

But I'm afraid these bright spots pale in comparison to the problems: an immature analysis brought on by terribly shallow research.

On the analysis:

You are right that digital media, and especially communication via the Internet, will change higher education--indeed, they already have. Unfortunately, your thinking, far from being forward-looking, is at least five years out of date.

Thus, your conclusion that President Sullivan was moving in the wrong direction or at the wrong pace is inaccurate.

Students may flock to online learning providers, but if they do, UVa is very much a latecomer to this game. You cite Stanford, MIT, and Harvard, and indeed, all are far ahead of UVa in one aspect of distance education.

But more important, they are engaged in what we might call online education 1.0: stick a camera in a traditional lecture, and offer multiple-choice questions later.

This doesn't really exploit the full power of the Web, does it? How will UVa be better off chasing the leaders in this one area, rather than leveraging areas in which we are already a leader? (More on that in a moment.)

I'm not surprised you drew this conclusion, given the sources you cite. Wall Street Journal editorials and New York Times op-eds are not considered primary sources in this context, Ms. Dragas and Mr. Kington. These are non-experts pulling together the opinions of experts as best they can. That's what you are supposed to do yourselves, rather than parrot the opinions of others, however highly regarded they may be.

This casual disregard for good research is especially surprising given the rich resources that surround you.

Did you know that the University of Virginia is a leader in digital scholarship, having established the Institute for Advanced Technology in the Humanities in 1992?

Did you know that the University has offered fellowships in digital scholarship to faculty for the last two decades, spawning countless faculty-initiated innovations in digital learning? Have you seen the digital model of ancient Rome, or the transcription project of the Salem witch trials, for example?

Did you know that the Curry School of Education has a strong program in Instructional Technology?

Did you know that the Center for the Advanced Study of Teaching and Learning at UVa has several ongoing projects in distance learning for teachers that are being replicated throughout the country?

I could go on and on, but you get the idea. You had tremendous intellectual capital on this subject ready and available. All you had to do was make a few phone calls, but it didn't occur to you to ask.

Now, I can just hear your protests: "You can't judge my views on this matter from a few emails! And they were not based solely on a few New York Times columns!"

In other words, the project submitted does not reflect your best thinking on the subject.

I hear that a lot from students.

But I can only grade you on what you submitted, not based on my best guess as to what you were thinking. And maybe that's part of the problem. If you had let me know what you were thinking, I might have been able to help make this project better. I'm not entirely ignorant on the subject of learning myself.

You have earned a grade of F for this project.

Ms. Dragas, given your performance in the remainder of the Rector's job, I'm afraid I don't see how you're going to pass. Perhaps you should, as Mr. Kington has already done, drop the course.
 

New study: Fluid intelligence not trainable

6/19/2012

 
A few months ago the New York Times published an article on the training of working memory titled "Can You Make Yourself Smarter?" I suggested that the conclusions of the article might be a little too sunny--I pointed out that reviews of the literature by scientists suggested that having subjects practice working memory tasks (like the n-back task, shown below) led to improvement on the working memory task, but not in fluid intelligence.
[Figure: the n-back task]
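If you're unfamiliar with the task, here is a minimal sketch of the logic of the 2-back version; the letter stream and the subject's responses are invented for illustration.

```python
# Minimal sketch of a 2-back letter task: the subject sees a stream of
# letters and must respond whenever the current letter matches the one
# shown two positions earlier. Stream and responses are invented.

def n_back_targets(stream, n=2):
    """Indices where the current item matches the item n positions back."""
    return {i for i in range(n, len(stream)) if stream[i] == stream[i - n]}

stream = list("BKBCACXC")
targets = n_back_targets(stream, n=2)   # {2, 5, 7}

subject_responses = {2, 5}              # positions where the subject responded
hits = len(targets & subject_responses)
false_alarms = len(subject_responses - targets)
print(f"hits: {hits}/{len(targets)}, false alarms: {false_alarms}")
```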
I also pointed out that a significant limitation of many of these studies was the use of a single measure of intelligence. A new study solves that problem.

The study, by Thomas Redick and seven other researchers, offers a negative result--training doesn't help--which often is not considered news. (The link is 404 as I write this--I hope it will be back up soon.) There are lots of ways of screwing up a study, most of which would lead to null results. But this null result ended up published in the excellent Journal of Experimental Psychology: General because the study is so well designed.

Researchers gave the training plenty of opportunity to have an impact: subjects underwent 20 sessions. There were enough subjects (N = 75) to afford decent statistical power to detect an effect, were one present. Researchers used a placebo control group (visual search) as well as a no-contact control group. They used multiple measures of fluid intelligence, crystallized intelligence, multitasking, and perceptual speed. These measures were administered before, during, and after training.
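If you want a feel for the power claim, here is a rough sketch of the standard calculation. The effect size is my own assumption for illustration (d = 0.6), not a figure from the paper, and I'm treating the design as a simple two-group comparison with about 25 subjects per group.

```python
# Rough power sketch for a two-group comparison. The effect size
# (d = 0.6) and the 25-per-group split are my assumptions, not
# numbers taken from the Redick et al. paper.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.6, nobs1=25, alpha=0.05, ratio=1.0)
print(f"Power to detect d = 0.6 with 25 per group: {power:.2f}")
```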

The results: people got better at what they practiced--either n-back or visual search--but there was no transfer to any other task, as shown in the table below.

[Table: transfer results from Redick et al.]
One study is never fully conclusive on any issue. But given the previous uneven findings, this study represents another piece of the emerging picture: either fluid intelligence is trainable only in some specialized, yet-to-be-defined circumstances, or it's not possible to make a substantial improvement in fluid intelligence through training at all.

These results make me skeptical of commercial programs offering to improve general cognitive processing.

Redick, T. S., Shipstead, Z., Harrison, T. L., Hicks, K. L., Fried, D. E., Hambrick, D. Z., Kane, M. J., & Engle, R. W. (in press). No evidence of intelligence improvement after working memory training: A randomized, placebo-controlled study. Journal of Experimental Psychology: General.



Neuroplasticity--what's up with that?

6/15/2012

 
You hear a lot of talk these days about neural plasticity--that is, changes in neural structure and connectivity. Wouldn't it be great to have an article that provides a nice overview of the different types of neural plasticity?

Two of my colleagues (Lillard & Erisir, 2011) published something that fits the bill quite nicely.

I can't find a version that is not behind a paywall--my apologies. If you can't access it, let me give you at least two of the highlights. First, they offer a wonderful table that summarizes different varieties of changes to the brain. (Note that some of these changes are consequences of neural development, orchestrated mainly by genetic information, as opposed to neuroplasticity, which is a consequence of external stimulation and/or reverberation within neural circuits.)

[Table: varieties of change to the brain, from Lillard & Erisir (2011)]
Second, the authors provide a list of learning effects that have been associated with neuroplasticity in animal models and in humans.
  • Sensitization & habituation: increased or decreased neural response due to repeated exposure
  • Enriched environments: animals learn more (and show more neural connectivity) in environments where there is a lot to be learned
  • Attention: plasticity is enhanced by attention
  • Visuo-motor learning: the famous London cabbies study, but others too
  • Neurogenesis: tons of data in rats
  • Myelination: a new technology (diffusion tensor imaging) allows visualization of white matter volume and organization. The finding that myelination is not complete until later than thought captured the headlines, but equally important, other work shows that experience changes myelination

What does all this mean for educators?

Nothing. Directly. But ongoing work in this area contributes to our understanding of learning, and is part of the larger project to help us better understand students and how to make schooling more effective.

Lillard, A. S., & Erisir, A. (2011). Old dogs learning new tricks: Neuroplasticity beyond the juvenile period. Developmental Review, 31.








A good book...

6/14/2012

 
[Photo: Corner library in the Philippines (via Reddit)]

The Good News About Spatial Skills

6/12/2012

 
There is a great deal of attention paid to, and controversy about, the promise of training working memory to improve academic skills, a topic I wrote about here.

But working memory is not the only cognitive process that might be a candidate for training. Spatial skills are a good predictor of success in science, mathematics, and engineering.

Now, on the basis of a new meta-analysis (Uttal, Meadow, Tipton, Hand, Alden, Warren & Newcombe, in press), researchers claim that spatial skills are eminently trainable. In fact, they claim a quite respectable average effect size of 0.47 (Hedges' g) after training--and that's across 217 studies.
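For readers who want to see what sits behind an effect size like that, here is a sketch of the Hedges' g calculation for a single hypothetical study; the group means, SDs, and sample sizes are invented, not taken from the meta-analysis.

```python
# Sketch of Hedges' g for one study: a standardized mean difference
# with a small-sample bias correction. The group statistics below are
# invented for illustration.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd                  # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample correction
    return d * correction

# Hypothetical trained vs. control groups on a mental rotation test:
print(round(hedges_g(m1=24.0, sd1=6.0, n1=30, m2=21.0, sd2=6.5, n2=30), 2))
```

A meta-analysis then pools g values like this one across studies, weighting each by its precision.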

Training tasks across these many studies included things like visualizing 2D and 3D objects in a CAD program, acrobatic sports training, and learning to use a laparoscope (an angled device used by surgeons). Outcome measures were equally varied, and included standard psychometric measures (like a paper-folding test), tests that demanded imagining oneself in a landscape, and tests that required mentally rotating objects.

Even more impressive:

1) Researchers found robust transfer to new tasks.
2) Researchers found little, if any, effect of the delay between training and test--the skills don't seem to fade with time, at least for several weeks. (Only four studies included delays of greater than one month.)

This is a long, complex analysis and I won't try to do it justice in a brief blog post. But the marquee finding is big news. What we'd love to see is an intervention that is relatively brief, not terribly difficult to implement, reliably leads to improvement, and transfers to new academic tasks.

That's a tall order, but spatial skills may fill all the requirements.

The figure below (from the paper) is a conjecture: if spatial training were widely implemented, and if, once scaled up, we got the average improvement seen in these studies, how many more people could be trained as engineers?
[Figure from the paper: projected increase in the number of people who could be trained as engineers]
The paper is not publicly available, but there is a nice summary here from the collaborative laboratory responsible for the work. I also recommend this excellent article from American Educator on the relationship of spatial thinking to math and science, with suggestions for parents and teachers.

Uttal, D. H., Meadow, N. G., Tipton, E., Hand, L. L., Alden, A. R., Warren, C., & Newcombe, N.S. (2012, June 4). The Malleability of Spatial Skills: A Meta-Analysis of Training Studies. Psychological Bulletin. Advance online publication. doi: 10.1037/a0028446

Newcombe, N. S. (2010) Picture this: Increasing math and science learning by improving spatial thinking. American Educator, Summer, 29-35, 43.

Teachers shouldn't need to learn neuroscience

6/4/2012

 
This article from Education Week suggests that teachers ought to learn neuroscience.

That strikes me as a colossal waste of teachers' time.

The offered justification is that a high percentage of teachers hold false beliefs about the brain, and thus ought to be "armed" to evaluate claims that they encounter in professional development sessions, the media, etc.

But it takes an awful lot of work for any individual to become knowledgeable enough about neuroscience to evaluate new ideas. And why would it stop at neuroscience? One could make the same case for cognitive psychology, developmental psychology, social psychology, sociology, cultural studies, and economics, among other fields.

Further, this suggestion seems like unnecessary duplication of effort. What's really needed is for a few trusted educators to evaluate new ideas, and to periodically bring their colleagues up to date.

In fact, that's how the system is set up. But it's not working.

First, the neuro-myths mentioned in the article ought to be defused during teacher training. Some programs do so, I'm sure, but most appear not to be doing a good enough job. It's certainly true that textbooks aimed at teachers don't do enough in this regard. Learning styles, for example, go unmentioned, or perhaps get a paragraph in which the theory is (accurately) said to be lacking evidence. Given the pervasiveness of these myths, schools of education ought to address the problem with more vigor.

Second, there is virtually always someone in the district central office who is meant to be the resource person for professional development: Is this PD session likely to be legit, or is this person selling snake oil? If teachers are exposed to PD with sham science, the right response, it seems to me, is not to suggest that teachers learn some neuroscience. The right response is outrage directed at the person who brought the knucklehead in to do the PD session.

Third, it would make perfect sense if professional groups helped out in this regard. The Department of Education has tried with the What Works Clearinghouse and with its various practice guides. These have had limited success. It might be time for teachers to take a try at this themselves.

Teachers don't need to learn neuroscience, or better put, teachers shouldn't need to learn neuroscience--not to be protected from charlatans. Teachers need to learn things that will directly help their practice. Charlatan protection ought to come from institutions: from schools of education, from district central offices, and (potentially) from institutions of teachers' own creation.
