Daniel Willingham--Science & Education
Hypothesis non fingo

Can you multitask on a treadmill?

5/2/2016

One of my graduate school mentors noted that if he was walking when presented with a really difficult cognitive challenge, he would stop, as though walking drew, however slightly, on his attention. Rousseau, in contrast, claimed “I can only meditate when I am walking.”
The advent of the treadmill desk makes the question of walking and cognition more urgent. Okay, there may be health benefits, but if walking is not fully automatic, it siphons away some of your thinking capacity; it demands multitasking, so why put one in the workplace? 
Researchers have caught up to business trends, and a couple of recent studies indicate that dedicated office walkers can relax—treadmills don’t seem to compromise cognition. Probably.

In one, researchers administered well-normed measures of working memory and executive function (digit span forwards and backwards, digit-symbol coding, letter-number substitution, and others) to 45 college undergraduates. Each completed the tasks while sitting, standing, and walking (in random order), with 1 week elapsing between sessions. Participants could set the walking speed as they preferred, between 1 and 3 km/hour. Performance on the working memory/executive function tasks was statistically indistinguishable in the three conditions.

That study had people walk (or not) and measured the impact on working memory. Another approach is for researchers to tax working memory (or not) and observe the impact on walking. Other researchers used that method, having subjects either just walk (again, at a self-selected pace), or walk while performing a working memory task, or walk while reading. Researchers recorded several aspects of gait, focusing on variability. Again, they found no evidence of interference.

The neurophysiology of walking would seem consistent with these results. Humans have central pattern generators in the spinal cord--neural circuits that, even in the absence of input from the brain, can generate patterns of flexion and extension in muscles that look like walking. Thus, if the spinal cord can handle walking on its own, it's easy to see why walking is not compromised when the brain is doing something else.

But central pattern generators set up pretty crude motor output; locomotion (like all movement) requires close monitoring of perceptual feedback (from vision, from balance), which is used to fine-tune walking movements (for a review, see Clark, 2015). We notice the need for perceptual information when we walk on ice, and for motor tuning when we pick our way across a rocky beach, but the fine-tuning goes on in a less obtrusive way in everyday situations. To get a feel for that, find yourself a nice long hallway, pick a spot about 50 feet away, and walk towards it with your eyes closed. If walking were a completely automatic program that could run without visual input, this would be no problem, yet most sighted people feel uneasy just a few steps in.

So if walking actually can’t run with total automaticity, why does treadmill walking show no attentional cost?

It may be that there is a cost, but it’s so small that it’s not detected in these experiments. And that may mean it’s not worth worrying about. Follow-up experiments with greater statistical power to detect small effects would be needed to address that possibility. Three other caveats are worth considering before we all buy treadmill desks.

First, the studies to date have been of relatively brief duration--less than an hour. It's possible that subjects can, with some effort of concentration, walk without cognitive cost for a short period of time, but that a few hours would reveal a deficit; tiredness might make walking a little sloppy, and thus more attention-demanding.

Second, at least one study has shown that a movement task (key tapping) was compromised when people walked (Oblinger et al, 2011). That's not an effect of attention, but of trying to do two motor actions at once, like rubbing your stomach while patting your head. Hence, although office activities like typing or data entry have not been tested on treadmills, I'd be willing to bet they would be compromised.

Third, my graduate mentor and Rousseau may have been talking about different types of thought. My mentor referred to answering a question, whereas Rousseau may have meant more creative thought. Walking may not be helpful when the environment presents pressing problems in need of timely answers. But a meandering gait may promote meandering thought, which in turn promotes creativity. The latter has not been tested on office treadmills either.

What's behind stereotype threat?

11/11/2013

"Stereotype threat" refers to a phenomenon in which people perform worse on tasks (especially mental tasks) in line with stereotypes, if they are are reminded of this stereotype.

Hence, the stereotype for women (in American culture) is that they are not as good at math as men; for older people, that they are more forgetful than the young; and for African-Americans, that they are less proficient at academic tasks. Members of each group do indeed perform worse at that type of task if the stereotype is made salient just before they undertake it (e.g. Appel & Kronberger, 2012).

Why does it happen? Most researchers have thought that the mechanism works through working memory. When the stereotype becomes active, people worry that they will confirm it. These fears occupy working memory, thereby reducing task performance (e.g., Hutchison, Smith & Ferris, 2013).

But a new experiment offers an alternative account. Sarah Barber & Mara Mather (2013) suggest that stereotype threat might operate through a mechanism called regulatory fit. That's a theory of how people pursue goals: if the way you conceive of a task's goals matches the goal structure of the task, you're more likely to do well than if it's a poor fit.

Stereotype threat makes you focus on prevention; you don't want to make mistakes (and thus confirm the stereotype). But, Barber & Mather argue, most experiments emphasize doing well, not avoiding mistakes. Thus, you'd be better off with a promotion focus, not a prevention one.

To test this idea, Barber & Mather tested fifty-six older subjects (around age 70) on a combined memory/working memory task. Subjects read sentences, some of which made sense, while others were nonsensical either syntactically or semantically.

Subjects indicated with a button press whether the sentence made sense or not. In addition, they were told to remember the last word of the sentence for as many of the sentences as they could. Task performance was measured by a combined score: how many sentences were correctly identified (sensible/nonsensical) and how many final words were remembered.

Next, subjects read one of two fictitious news articles. The one meant to invoke stereotype threat described the loss of memory due to aging. The control article described preservation of memory with aging.

Then, subjects performed the sentence task again. We would expect that stereotype threat would lead to worse performance.

BUT the experimenters also varied the reward structure of the task. Some subjects were told they would get a monetary reward for good performance. Others were told they were starting with a set amount of money, and that each memory error would incur a penalty. 

The instructions made a big difference in the outcome. Framing in terms of costs for errors didn't just remove stereotype threat; it actually led to an improvement.


This outcome makes sense, according to the regulatory fit hypothesis. Subjects were worried about errors, and the task rewarded them for avoiding errors.

These data are the first to test this new hypothesis as to the mechanism of stereotype threat, and should not be seen as definitive.

But if this new explanation holds up (and if it applies to other groups), it should have significant implications for how threat can be avoided.

References:
Appel, M., & Kronberger, N. (2012). Stereotypes and the achievement gap: Stereotype threat prior to test taking. Educational Psychology Review, 24(4), 609-635.

Barber, S. J., & Mather, M. (2013). Stereotype Threat Can Both Enhance and Impair Older Adults’ Memory. Psychological science, published online Oct. 22, 2013. DOI: 10.1177/0956797613497023.
Hutchison, K. A., Smith, J. L., & Ferris, A. (2013). Goals Can Be Threatened to Extinction Using the Stroop Task to Clarify Working Memory Depletion Under Stereotype Threat. Social Psychological and Personality Science, 4(1), 74-81.

Working memory training: Are the studies accurate?

10/15/2012

Last June I posted a blog entry about training working memory, focusing on a study by Tom Redick and his colleagues, which concluded that training working memory might boost performance on whatever task was practiced, but it would not improve fluid intelligence.

(Measures of fluid intelligence are highly correlated with measures of working memory, and improving intelligence would be most people's purpose in undergoing working memory training.)

I recently received a polite email from Martin Walker, of MindSparke.com, which offers brain training. Walker argued that the study is not ecologically valid: that is, the conclusions may be accurate for the conditions used in the study, but those conditions do not match the ones typically encountered outside the laboratory. Here's the critical text of his email, reprinted with his permission:

"There is a significant problem with the design of the study that invalidates all of the hard work of the researchers--training frequency.  The paper states that the average participant completed his or her training in 46 days.  This is an average frequency of about 3 sessions per week.  In our experience this frequency is insufficient.  The original Jaeggi study enforced a training frequency of 5 days per week.  We recommend at least 4 or 5 days per week.

With the participants taking an average of 46 days to complete the training, the majority of the participants did not train with sufficient frequency to achieve transfer.  The standard deviation was 13.7 days which indicates that about 80% of the trainees trained less frequently than necessary.  What’s more, the training load was further diluted by forcing each session to start at n=1 (for the first four sessions) or n=2, rather than starting where the trainee last left off."

I forwarded the email to Tom Redick, who replied:

"Your comment about the frequency of training was something that, if not in the final version of the manuscript, was questioned during the review process. Perhaps it would’ve been better to have all subjects complete all 20 training sessions (plus the mid-test transfer session) within a shorter prescribed amount of time, which would have led to the frequency of training sessions being increased per week. Logistically, having subjects from off-campus come participate complicated matters, but we did that in an effort to ensure that our sample of young adults was broader in cognitive ability than other cognitive training studies that I’ve seen. This was particularly important given that our funding came from the Office of Naval Research – having all high-ability 18-22 year old Georgia Tech students would not be particularly informative for the application of dual n-back training to enlisted recruits in the Army and Marines.

However, I don’t really know of literature that indicates the frequency of training sessions is a moderating factor of the efficacy of cognitive training, especially in regard to dual n-back training. If you know of studies that indicate 4-5 days per week is more effective than 2-3 days week, I’d be interested in looking at it.

As mentioned in our article, the Anguera et al. (2012) article that did not include the matrix reasoning data reported in the technical report by Seidler et al. (2010) did not find transfer from dual n-back training to either BOMAT or RAPM [Bochumer Matrices Test and Raven's Advanced Progressive Matrices, both measures of fluid intelligence], despite the fact that “Participants came into the lab 4–5 days per week (average = 4.5 days) for approximately 25 min of training per session” (Anguera et al., 2012), for a minimum of 22 training sessions. In addition, Chooi and Thompson (2012) administered dual n-back to participants for either 8 or 20 days, and “Participants trained once a day (for about 30 min), four days a week”. They found no transfer to a battery of gF and gC tests, including RAPM.

In our data, I correlated the amount of dual n-back practice gain (using the same method as Jaeggi et al) during training and the number of days it took to finish all 20 practice sessions (and 1 mid-test session). I would never really trust a correlation of N = 24 subjects, but the correlation was r = -.05.


I re-analyzed our data, looking only at those dual n-back and visual search training subjects that completed the 20 training and 1 mid-test session within 23-43 days, meaning they did an average of at least 3 sessions of training per week. For the 8 gF tasks (the only ones I analyzed), there was no hint of an interaction or pattern suggesting transfer from dual n-back."

So to boil Redick's response down to a sentence: other studies using a training regimen closer to the one Walker advocates have observed no impact on intelligence, and Redick finds no such effect in a follow-up analysis of his own data (although I'm betting he would acknowledge that the experiment was not designed to address this question, and so does not offer the most powerful means of addressing it).

So it does not seem that training frequency is crucial. 

A final note: Walker commented in another email that customers of MindSparke consistently feel that the training helps, and Redick remarked that subjects in his experiments have the same impression. It just doesn't bear out in performance.




New data cast doubt on dominant theory of vocab. learning

9/20/2012

This is a negative finding, so I'll keep it brief.

How do kids acquire new vocabulary? This process is poorly understood.

An influential theory has been that the phonological loop in working memory provides essential support. The phonological loop is like a little tape loop lasting perhaps two seconds; it allows you to keep a sound you've just heard active.

The idea is that a new, unfamiliar word can be placed on the loop, both for practice and to keep it available while the surrounding context helps you figure out its meaning.

If so, you'd predict that the larger the capacity of the phonological loop, and the greater the fidelity with which it "records," the better children will be able to learn new vocabulary.

The efficacy of the phonological loop is measured by having kids repeat nonsense words. Initially the words are short--tozzy--but they increase in length to pose a greater challenge to the phonological loop--liddynappish.

Several studies have shown correlations between phonological loop capacity and vocabulary size in children (for a review, see Melby-Lervåg & Lervåg, 2012).

The problem: it could be that having a big vocabulary makes the phonological loop test easier, because it makes it more likely that some of the nonsense words remind you of a word you already know. (And so you have the semantics of that word helping you remember the to-be-remembered word.) Indeed, even proponents of the hypothesis argue that's what happens when kids get older.

What you really need is a study that measures phonological loop capacity at time 1, and finds that it predicts vocabulary size at time 2. There is one such study (Gathercole et al, 1992) but it used a statistical analysis (cross-lagged correlation) that is now considered less than ideal.

A new study (Melby-Lervåg et al, 2012) used probably the best methodology of any study to date. It was a longitudinal study that tested nonword repetition ability and vocabulary once each year between the ages of 3 and 7.

They used a different statistical technique--simplex models--to assess causal relationships. They found that both nonword repetition and vocabulary show growth, both show stability across children, and both are moderately correlated, but there was no evidence that one influenced the growth of the other over time.

The group then reanalyzed the Gathercole et al (1992) data and found the same pattern.

This is one depressing paper. Something we thought we knew--the phonological loop contributes to vocabulary learning--may well be wrong.

If anyone is working on a remediation program for young children that centers on improving the working of the phonological loop, it's probably time to rethink that idea.




Gathercole, S. E., Willis, C., Emslie, H., & Baddeley, A. (1992). Phonological memory and vocabulary development during the early school years: A longitudinal study. Developmental Psychology, 28, 887–898.

Melby-Lervåg, M., & Lervåg, A. (2012). Oral language skills mod-erate nonword repetition skills in children with dyslexia: A meta-analysis of the role of nonword repetition skills in dyslexia. Scientific Studies of Reading, 16, 1–34.

Melby-Lervåg, M., & Lervåg, A., Lyster, S-A H., Klem, M., Hagtvet, B., & Hulme, C. (in press). Nonword-repetition ability does not appear to be a causal influence on children's vocabulary development. Psychological Science.

Tools of the Mind: Promising pre-k curriculum looking less promising

8/27/2012

A lot of data from the last couple of decades shows a strong association between executive functions (the ability to inhibit impulses, to direct attention, and to use working memory) and positive outcomes in school and out of school (see review here).  Kids with stronger executive functions get better grades, are more likely to thrive in their careers, are less likely to get in trouble with the law, and so forth. Although the relationship is correlational and not known to be causal, understandably researchers have wanted to know whether there is a way to boost executive function in kids.

Tools of the Mind (Bodrova & Leong, 2007) looked promising. It's a full preschool curriculum consisting of some 60 activities, inspired by the work of psychologist Lev Vygotsky. Many of the activities call for the exercise of executive functions through play. For example, when engaged in dramatic pretend play, children must use working memory to keep in mind the roles of other characters and suppress impulses in order to maintain their own character identity. (See Diamond & Lee, 2011, for thoughts on how and why such activities might help students.)

A few studies of relatively modest scale (but not trivial--100-200 kids) indicated that Tools of the Mind has the intended effect (Barnett et al, 2008; Diamond et al, 2007). But now some much larger-scale follow-up studies (800-2000 kids) have yielded discouraging results.

These studies were reported at a symposium this Spring at a meeting of the Society for Research on Educational Effectiveness. (You can download a pdf summary here.) Sarah Sparks covered this story for Ed Week when it happened in March, but it otherwise seemed to attract little notice.

Researchers at the symposium reported the results of three studies. Tools of the Mind did not have an impact in any of the three.

What should we make of these discouraging results?

It's too early to conclude that Tools of the Mind simply doesn't work as intended. It could be that there are as-yet unidentified differences among kids such that it's effective for some but not others. It may also be that the curriculum is more difficult to implement correctly than would first appear to be the case. Perhaps the teachers in the initial studies had more thorough training.

Whatever the explanation, the results are not cheering. It looked like we might have been on to a big-impact intervention that everyone could get behind. Now we are left with the dispiriting conclusion "More study is needed."



Barnett, W., Jung, K., Yarosz, D., Thomas, J., Hornbeck, A., Stechuk, R., & Burns, S. (2008). Educational effects of the Tools of the Mind curriculum: A randomized trial. Early Childhood Research Quarterly, 23, 299-313.

Bodrova, E., & Leong, D. (2007). Tools of the Mind: The Vygotskian approach to early childhood education (2nd ed.). New York: Merrill.

Diamond, A., & Lee, K. (2011). Interventions shown to aid executive function development in children 4-12 years old. Science, 333, 959-964.

Diamond, A., Barnett, W. S., Thomas, J., & Munro, S. (2007). Preschool program improves cognitive control. Science, 318, 1387-1388.



New study: Fluid intelligence not trainable

6/19/2012

A few months ago the New York Times published an article on the training of working memory titled "Can You Make Yourself Smarter?" I suggested that the conclusions of the article might be a little too sunny--I pointed out that reviews of the literature by scientists suggested that having subjects practice working memory tasks (like the n-back task) led to improvement in the working memory task, but not in fluid intelligence.
I also pointed out that a significant limitation of many of these studies was the use of a single measure of intelligence. A new study solves that problem.

The study, by Thomas Redick and seven other researchers, offers a negative result--training doesn't help--which often is not considered news (link is 404 as I write this--I hope it will be back up soon). There are lots of ways of screwing up a study, most of which would lead to null results. But this null result ended up published in the excellent Journal of Experimental Psychology: General because the study is so well-designed.

Researchers gave the training plenty of opportunity to have an impact--subjects underwent 20 sessions. There were enough subjects (N=75) to afford decent statistical power to detect an effect, were one present. Researchers used a placebo control group (visual search) as well as a no-contact control group. They used multiple measures of fluid intelligence, crystallized intelligence, multi-tasking, and perceptual speed. These measures were administered before, during, and after training.

The results: people got better at what they practiced--either n-back or visual search--but there was no transfer to any other task.

One study is never fully conclusive on any issue. But given the uneven findings of previous studies, this one represents another piece of the emerging picture: either fluid intelligence is trainable only under some specialized, yet-to-be-defined circumstances, or it's not possible to make a substantial improvement in fluid intelligence through training at all.

These results make me skeptical of commercial programs offering to improve general cognitive processing.

Redick, T. S., Shipstead, Z., Harrison, T. L., Hicks, K. L., Fried, D. E., Hambrick, D. Z., Kane, M. J., & Engle, R. W. (in press). No evidence of intelligence improvement after working memory training: A randomized, placebo-controlled study. Journal of Experimental Psychology: General.



Does chewing gum help you concentrate? Maybe briefly.

4/24/2012

Should kids be allowed to chew gum in class? If a student said "but it helps me concentrate. . ." should we be convinced?

If it provides a boost, it's short-lived.

It's pretty well established that a burst of glucose provides a brief cognitive boost (see review here), so the question is whether chewing gum in particular provides any edge over and above that, or whether a benefit would be observed when chewing sugar-free gum.
Picture
One study (Wilkinson et al., 2002) compared gum-chewing to no-chewing (and "sham chewing," in which subjects were to pretend to chew gum, which seems awkward). Subjects performed about a dozen tasks, including measures of vigilance (i.e., sustained attention) and of short-term and long-term memory.

Researchers reported some positive effect of gum-chewing for four of the tests. It's a little hard to tell from the brief write-up, but it appears that the investigators didn't correct their statistics for the multiple tests.

This throw-everything-at-the-wall-and-see-what-sticks approach may be characteristic of this research. Another study (Smith, 2010) took the same approach and concluded that there were some positive effects of gum chewing for some of the tasks, especially for feelings of alertness. (This study did not use sugar-free gum, so it's hard to tell whether the effect is due to the gum or the glucose.)

A more recent study (Kozlov, Hughes & Jones, 2012), using a more standard short-term memory paradigm, found no benefit for gum chewing.

What are we to make of this grab-bag of results? (And please note this blog does not offer an exhaustive review.)

A recent paper (Onyper, Carr, Farrar & Floyd, 2011) offers a plausible resolution. They suggest that the act of mastication offers a brief--perhaps ten- or twenty-minute--boost to cognitive function due to increased arousal. So we might see a benefit (or not) of gum chewing depending on the timing of the chewing relative to the timing of the cognitive tasks.

The upshot: teachers might allow or disallow gum chewing in their classrooms for a variety of reasons, but there is not much evidence that it confers a significant cognitive advantage.

EDIT: Someone emailed to ask if kids with ADHD benefit. The one study I know of reported a cost to vigilance with gum-chewing for kids with ADHD.

Kozlov, M. D., Hughes, R. W., & Jones, D. M. (2012). Gummed-up memory: Chewing gum impairs short-term recall. Quarterly Journal of Experimental Psychology, 65, 501-513.

Onyper, S. V., Carr, T. L., Farrar, J. S., & Floyd, B. R. (2011). Cognitive advantages of chewing gum. Now you see them, now you don't. Appetite, 57, 321-328.

Smith, A. (2010). Effects of chewing gum on cognitive function, mood and physiology in stressed and unstressed volunteers. Nutritional Neuroscience, 13, 7-16.

Wilkinson, L., Scholey, A., & Wesnes, K. (2002). Chewing gum selectively improves aspects of memory in healthy volunteers. Appetite, 38, 235-236.

Training working memory *might* make you smarter

4/20/2012

The New York Times Magazine has an article on working memory training and the possibility that it boosts one type of intelligence.

I think the article is a bit--but only a bit--too optimistic in its presentation.

The article correctly points out that a number of labs have replicated the basic finding: training with one or another working memory task leads to increases in standard measures of fluid intelligence, most notably, Raven's Progressive Matrices.
Working memory is often trained with an N-back task. You're presented with a series of stimuli; for example, you hear a sequence of letters. You press a button if a stimulus is the same as the one before (N=1), or the one before last (N=2), or the one before that (N=3). You start with N=1, and N increases if you are successful. (Larger N makes the task harder.) To make it much harder, researchers can add a second stream of stimuli (e.g., colored squares appearing at different locations) and ask you to monitor BOTH streams of stimuli in an N-back task.

That is the training task that you are to practice. (And although the Times calls it a "game," it's missing one usual feature of a game: it's no fun at all.)

There are two categories of outcome measures taken after training. In a near-transfer task, subjects are given some other measure of working memory to see if their capacity has increased. In a far-transfer task, subjects are given a task that isn't itself a test of working memory, but taps a process that we think depends on working memory capacity.

All the excitement has been about far-transfer measures, namely that this training boosts intelligence, about which more in a moment. But it's actually pretty surprising and interesting that labs are reporting near-transfer. That's a novel finding, and contradicts a lot of work that's come before, showing that working memory training tends to benefit only the particular working memory task used during training, and doesn't even transfer to other working memory tasks.

The far-transfer claim has been that the working memory training boosts fluid intelligence. Fluid intelligence is one's ability to reason, see patterns, and think logically, independent of specific experience. Crystallized intelligence, in contrast, is stuff that you know, knowledge that comes from prior experience. You can see why working memory capacity might lead to more fluid intelligence--you've got a greater workspace in which to manipulate ideas.

A standard measure of fluid intelligence is the Raven's Progressive Matrices task, in which you see a pattern of figures and are to say which of several choices would complete the pattern.

So, is this finding legit? Should you buy an N-back training program for your kids?

I'd say the jury is still out.

The Times quotes Randy Engle--a highly regarded working memory researcher--on the subject, and he can hardly conceal his scorn:  “May I remind you of ‘cold fusion’?”

Engle--who is not one of those scientists who has made a career out of criticizing others--has a lengthy review of the working memory training literature which you can read here.

Another recent review (which was invited for the journal Brain & Cognition) concluded "Sparse evidence coupled with lack of scientific rigor, however, leaves claims concerning the impact and duration of such brain training largely unsubstantiated. On the other hand, at least some scientific findings seem to support the effectiveness and sustainability of training for higher brain functions such as attention and working memory."

My own take is pretty close to that conclusion.

There are enough replications of this basic effect that it seems probable that something is going on. The most telling criticism of this literature is that the outcome measure is often a single task.

You can't use a single task like the Raven's and then declare that fluid intelligence has increased, because NO task is a pure measure of fluid intelligence. There are always going to be other factors that contribute to task performance.

The best measure of an abstract construct like "fluid intelligence" is one that uses several measures of what look to be quite different tasks, but which you have reason to think all call on fluid intelligence. Then you use statistical methods to look for shared variance among the tasks.

So what we'd really like to see is better performance after working memory training on several such tasks at once.

The fact is that in many of these studies, researchers have tried to show transfer to more than one task, and the training transfers to one, but not the other.

A 2010 review by Torkel Klingberg includes a table of studies showing exactly this pattern.
This work is really just getting going, and the inconsistency of the findings means one of two things. Either the training regimens need to be refined, whereupon we'll see the transfer effects more consistently, OR the benefits we've seen thus far were mostly artifactual, a consequence of uninteresting quirks in the designs of the studies or the tasks used.

My guess is that the truth lies somewhere between these two--there's something here, but less than many people are hoping. But it's too early to say with much confidence.
