Daniel Willingham--Science & Education
Hypothesis non fingo

Can gaming make us more cooperative, empathic?

2/27/2013

 
Daphne Bavelier and Richard Davidson have a Comment in Nature today on the potential for video games to "do you good."

The authors note that video gaming has been linked to obesity, aggressiveness, and antisocial behavior, but there is a burgeoning literature showing that some cognitive benefits accrue from gaming. Even though the data on these benefits are not 100% consistent (as I noted here), I'm with Bavelier & Davidson in their general orientation: so many people spend so much time gaming that we would be fools not to consider ways that games might be turned to purposes of personal and societal benefit.

Could games help to make people smarter, or more empathic, or more cooperative?

The authors suggest three developments are necessary.

  1. Game designers and neuroscientists must collaborate to determine which game components "foster brain plasticity." (I believe they really mean "change behavior.")
  2. Neuroscientists ought to collaborate more closely with game designers. Presumably, the first step will not get off the ground if this doesn't happen.
  3. There needs to be translational game research, and a path to market. We can expect that research advances (and clinical trials) concerning the positive effects of gaming will come from academic circles. This work must get to market if it is to have an impact, and there is not yet a blazed trail by which that can happen.

This is all fine, as far as it goes, but it ignores two glaring problems, both subsets of their first point.

We have to bear in mind that Bavelier & Davidson's enthusiasm for the impact of gaming is coming from experiments with people who already liked gaming; you compare gamers with non-gamers and find some cognitive edge for the former. Getting people to play games is no easy matter, because designing good games is hard.

This idea of harnessing interest in gaming for personal benefit is old stuff in education. Researchers have been at it for twenty years, and one of the key lessons they've learned is that it's hard to build a game that students really like and from which they also learn (as I've noted in reviews here and here).

Second, Bavelier & Davidson are also a bit too quick to assume that measured improvements to basic cognitive processes will transfer to more complex processes. They cite a study in which playing a game improved mental rotation performance. Then they point out that mental rotation is important in fields like navigation and research chemistry.

But one of the great puzzles (and frustrations) of attempts to improve working memory has been the lack of transfer; even when working memory is improved by training, you don't see a corresponding improvement in tasks that are highly correlated with working memory (e.g., reasoning).

In sum, I'm with Bavelier & Davidson in that I think this line of research is well worth pursuing. But I'm less sanguine than they are, because I think their point #1--getting the games to work--is going to be a lot tougher than they seem to anticipate.

Bavelier, D., & Davidson, R. J. (2013). Brain training: Games to do you good. Nature, 494, 425-426.


Cone of learning or cone of shame?

2/25/2013

 
A math teacher and Twitter friend from Scotland asked me about this figure.
[Figure: the "learning pyramid" diagram]
I'm sure you've seen a figure like this. It is variously called the "learning pyramid," the "cone of learning," the "cone of experience," and other names. It's often attributed to the National Training Laboratories, or to educator Edgar Dale.

You won't be surprised to learn that there are different versions out there with different percentages and some minor variations in the ordering of activities.

Certainly, some mental activities are better for learning than others. And the ordering offered here doesn't seem crazy. Most people who have taught agree that long-term contemplation of how to help others understand complicated ideas is a marvelous way to improve one's own understanding of those ideas--certainly better than just reading them--although the estimate of 10% retention of what one reads seems kind of low, doesn't it?

If you enter "cone of experience" in Google scholar the first page offers a few papers that critique the idea, e.g., this one and this one, but you'll also see papers that cite it as if it's reliable.

It's not.

So many variables affect memory retrieval that you can't assign specific percentages of recall without specifying many more of them:
  • what material is recalled (gazing out the window of a car is an audiovisual experience just like watching an action movie, but your memory for these two audiovisual experiences will not be equivalent)
  • the age of the subjects
  • the delay between study and test (obviously, the percent recalled usually drops with delay)
  • what subjects were instructed to do as they read, demonstrated, taught, etc. (you can boost memory considerably for a reading task by asking subjects to summarize as they read)
  • how memory was tested (percent recalled is almost always much higher for recognition tests than for recall)
  • what subjects know about the to-be-remembered material (if you already know something about the subject, memory will be much better)
This is just an off-the-top-of-my-head list of factors that affect memory retrieval. They make it clear not only that the percentages suggested by the cone can't be counted on, but also that the ordering of the activities could shift, depending on the specifics.



The cone of learning may not be reliable, but that doesn't mean that memory researchers have nothing to offer educators. For example, a monograph published in January offers an extensive review of the experimental research on different study techniques. If you prefer something briefer, I'm ready to stand by the one-sentence summary I suggested in Why Don't Students Like School?: It's usually a good bet to try to think about material at study in the same way that you anticipate you will need to think about it later.

And while I'm flacking my books, I'll mention that When Can You Trust the Experts? was written to help you evaluate the research basis of educational claims, cone-shaped or otherwise.

"Freedom of inquiry" and intelligent design in the classroom

2/22/2013

 
A new bill just passed the Education committee in the Oklahoma House of Representatives, as reported in the Oklahoman. Titled "The Scientific Education and Academic Freedom Act," the bill purports to protect the rights of students, teachers, and administrators to fully explore scientific controversies.

The bill supposes that some people currently feel inhibited in their pursuit of truth regarding "biological evolution, the chemical origins of life, global warming, and human cloning" and so the bill forbids school administrators and boards of education from disallowing such "exploration."

According to opinion pieces in the Daily Beast, The Week, and Mother Jones, the bill is a fairly transparent attempt to allow intelligent design into science classrooms, one that is being pursued in other states as well.

Yeah, that's what it sounds like to me too.

But even if we take the purported motive of the bill at face value, it's still a terrible idea.

Why shouldn't science teachers "teach the controversy"? Isn't it the job of teachers to sharpen students' critical thinking skills? Isn't it part of the scientific method to evaluate evidence? If evolution proponents are so sure their theory is right, why are they afraid of students scrutinizing the ideas?

Imagine this logic applied in other subjects. Why shouldn't students study and evaluate the version of US history offered by white supremacists? Rather than just reading Shakespeare and assuming he's a great playwright, why not ask students to read Shakespeare and the screenplay to Battlefield Earth, and let students decide? And hey, why is such deference offered to Euclid? My uncle Leon has an alternative version of plane geometry and it shows Euclid was all wrong. I think that theory deserves a hearing.

You get the point. Not every theory merits the limited time students have in school. There is a minimum bar of quality that has to be met in order to compete. I'm not allowed to show up at the Olympics, hoping to jump in the pool and swim the 100 m butterfly against Michael Phelps.

Indeed, the very inclusion of a theory in a school discussion signals to students that it must have some validity--why else would the teacher discuss it?

The obvious retort from supporters of the bill is that intelligent design is actually a good theory, much better than the comparisons I've drawn.

That belief may be sincere, but it's due, I think, to a lack of understanding of scientific theory. So here are a few of the important features of how scientists think about theories, and how they bear on this debate.

1) It's not telling that legitimate scientists point out unanswered questions, problems, or lacunae in the theory of evolution. Every theory, even the best, has problems. People who make this point may be thinking of scientific theories the way scientists regarded laws until the early part of the 20th century--as immutable. Scientists today think of all theories as provisional, and open to emendation and improvement.

2) A vital aspect of a good scientific theory is that it be open to falsification. It's not obvious what sort of data would falsify intelligent design theories, especially young-earth theories, which make predictions that are already disconfirmed by geology, astrophysics, etc., and yet are maintained by their adherents. Evolution, in contrast, has survived tests and challenges for 100 years--indeed, the theory has changed and improved in response to those challenges.

3) In the case of old-earth intelligent design theories, the focus is much more on the putative beginnings of the universe or of life on Earth, and these don't have the feel of a scientific theory at all. They seem much more like philosophical queries, because they focus on large-scale questions and how these questions ought to be formulated--they never get to detailed questions that might be answerable by experiment, the meat-and-potatoes of science.

4) Good scientific theories are not static. They not only change in the face of new evidence, they continue to spawn new and interesting hypotheses. Evolution has been remarkably successful on this score for over 100 years. Intelligent design has been static and unfruitful.

These are some of the reasons that scientists think that intelligent design does not qualify as a good scientific theory, and therefore does not merit close attention in K-12 science classes, any more than my uncle's theory of geometry does.

If you're going to write bills about what happens in science class, it's useful to know a little science.

EDIT: 2/22/13 1:20 p.m. EST: typos

What predicts college GPA?

2/18/2013

 
What aspects of background, personality, or achievement predict success in college--at least, "success" as measured by GPA?

A recent meta-analysis (Richardson, Abraham, & Bond, 2012) gathered articles published between 1997 and 2010, the products of 241 data sets. These articles had investigated these categories of predictors:
  • three demographic factors (age, sex, socio-economic status)
  • five traditional measures of cognitive ability or prior academic achievement (intelligence measures, high school GPA, SAT or ACT, A-level points)
  • no fewer than forty-two non-intellective measures of personality, motivation, or the like, summarized into the categories shown in the figure below
[Figure: the non-intellective measures, summarized by category]
Make this fun. Try to predict which of the factors correlate with college GPA.

Let's start with simple correlations.

41 out of the 50 variables examined showed statistically significant correlations. But statistical significance is a product of the magnitude of the effect AND the size of the sample--and the samples here are so big that relatively puny effects end up being statistically significant. So in what follows I'll mention only correlations of .20 or greater.
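To see why, here's a minimal sketch in Python (my own illustration with made-up numbers, nothing from the meta-analysis itself) of how the p-value for the very same puny correlation shrinks as the sample grows:

    from scipy import stats
    import math

    # Two-tailed p-value for a Pearson correlation, via the t distribution.
    def p_value_for_r(r, n):
        t = r * math.sqrt((n - 2) / (1 - r ** 2))
        return 2 * stats.t.sf(abs(t), df=n - 2)

    # The same puny correlation at three sample sizes:
    for n in (30, 200, 5000):
        print("r = .10, N = %d: p = %.4f" % (n, p_value_for_r(0.10, n)))
    # N = 30: p is about .60; N = 200: about .16; N = 5000: far below .0001.

At N = 5,000 an r of .10 sails past the conventional .05 threshold while explaining just 1% of the variance--hence the .20 cutoff.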

Among the demographic factors, none of the three were strong predictors. It seems odd that socio-economic status would not be important, but bear in mind that we are talking about college students, so this is a pretty select group, and SES likely played a significant role in that selection. Most low-income kids didn't make it, and those who did likely have a lot of other strengths.

The best class of predictors (by far) is the traditional correlates, which range from r = .20 (intelligence measures) up to r = .40 (high school GPA; ACT scores also correlated at r = .40).

Personality traits were mostly a bust, with the exception of conscientiousness (r = .19), need for cognition (r = .19), and tendency to procrastinate (r = -.22). (Procrastination has a pretty tight inverse relationship to conscientiousness, so it strikes me as a little odd to include it.)

Motivation measures were also mostly a bust, but there were strong correlations with academic self-efficacy (r = .31) and performance self-efficacy (r = .59). You should note, however, that the former is pretty much like asking students "are you good at school?" and the latter is like asking "what kind of grades do you usually get?" Somewhat more interesting is "grade goal" (r = .35) which measures whether the student is in the habit of setting a specific goal for test scores and course grades, based on prior feedback.

Self-regulatory learning strategies likewise yielded only a few reliable predictors, including time/study management (r = .22) and effort regulation (r = .32), a measure of persistence in the face of academic challenges.

Not much happened in the approach-to-learning category or in psychosocial contextual influences.

We would, of course, expect that many of these variables would themselves be correlated, and that's the case, as shown in this matrix.
[Figure: correlation matrix of the predictors]
So the really interesting analyses are regressions that try to sort out which matter more.

The researchers first conducted five hierarchical linear regressions, in each case beginning with SAT/ACT, then adding high school GPA, and then investigating whether each of the five non-intellective predictors would add some predictive power. The variables were conscientiousness, effort regulation, test anxiety, academic self efficacy, and grade goal, and each did, indeed, add power in predicting college GPA after "the usual suspects" (SAT or ACT, and high school GPA) were included.

But what happens when you include all the non-intellective factors in the model?

The order in which they are entered matters, of course, and the researchers offer a reasonable rationale for their choice; they start with the most global characteristic (conscientiousness) and work towards the more proximal contributors to grades (effort regulation, then test anxiety, then academic self-efficacy, then grade goal).
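If you haven't run one, the mechanics of a hierarchical regression are simple: enter the predictors in blocks, in a fixed order, and track how much R-squared improves at each step. Here's a minimal sketch in Python with synthetic data (the variable names echo the paper's predictors; the numbers are invented for illustration):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    sat = rng.normal(size=n)
    hs_gpa = 0.5 * sat + rng.normal(size=n)  # predictors correlate, as in the real data
    effort = rng.normal(size=n)              # stand-in for effort regulation
    gpa = 0.3 * sat + 0.4 * hs_gpa + 0.2 * effort + rng.normal(size=n)

    # Enter blocks in order; report the gain in R^2 at each step.
    X = np.empty((n, 0))
    prev = 0.0
    for name, col in [("SAT/ACT", sat), ("+ HS GPA", hs_gpa), ("+ effort", effort)]:
        X = np.column_stack([X, col])
        r2 = sm.OLS(gpa, sm.add_constant(X)).fit().rsquared
        print("%-10s R^2 = %.3f (gain %.3f)" % (name, r2, r2 - prev))
        prev = r2

A predictor entered late can claim only the variance the earlier blocks left over, which is why the entry order is worth quibbling about.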

As they ran the model, SAT and high school GPA continued to be important predictors. So were effort regulation and grade goal.

You can usually quibble about the order in which variables were entered and the rationale for that ordering, and that's the case here. As they put the data together, the most important predictors of college grade point average are: your grades in high school, your score on the SAT or ACT, the extent to which you plan for and target specific grades, and your ability to persist in challenging academic situations.

There is not much support here for the idea that demographic or psychosocial contextual variables matter much. Broad personality traits, most motivation factors, and learning strategies matter less than I would have guessed.

No single analysis of this sort will be definitive. But aside from that caveat, it's important to note that most admissions officers would not want to use this study as a one-to-one guide for admissions decisions. Colleges are motivated to admit students who can do the work, certainly. But beyond that they have goals for the student body on other dimensions: diversity of skill in non-academic pursuits, or creativity, for example.

When I was a graduate student at Harvard, an admissions officer mentioned in passing that, if Harvard wanted to, the college could fill the freshman class with students who had perfect scores on the SAT. Every single freshman--800, 800. But that, he said, was not the sort of freshman class Harvard wanted.

I nodded as though I knew exactly what he meant. I wish I had pressed him for more information.

References:
Richardson, M., Abraham, C., & Bond, R. (2012). Psychological correlates of university students' academic performance: A systematic review and meta-analysis. Psychological Bulletin, 138, 353-387.


Meta-analysis: Learning from Gaming

2/10/2013

 
What people learn (or don't) from games is such a vibrant research area that we can expect fairly frequent literature reviews. It's been about a year since the last one, so I guess we're due.

The last time I blogged on this topic Cedar Riener remarked that it's sort of silly to frame the question as "does gaming work?" It depends on the game.

The category is so broad it can include a huge variety of experiences for students. If there were NO games from which kids seemed to learn anything, you probably ought not to conclude "kids can't learn from games." To do so would be to conclude that the distributions of learning for all possible games and all possible teaching look something like this.
[Figure: hypothetical non-overlapping distributions of learning from games vs. traditional teaching]
But this pattern of data seems highly unlikely. It seems much more probable that the distributions overlap more, and that whether kids learn more from gaming or traditional teaching is a function of the qualities of each.

So what's the point of a meta-analysis that poses the question "do kids learn more from gaming or traditional teaching?"

I think of these reviews not as letting us know whether kids can learn from games, but as an overview of where we are--just how effective are the serious games offered to students?
The latest meta-analysis (Wouters et al., 2013) includes data from 56 studies and examined learning outcomes (77 effect sizes), retention (17 effect sizes), and motivation (31 effect sizes).

The headline result featured in the abstract is "games work!" Games are reported to be superior to conventional instruction in terms of learning (d = 0.29) and retention (d = 0.36) but, somewhat surprisingly, not motivation (d = 0.26).
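If you're rusty on effect sizes: d is Cohen's d, the difference between condition means expressed in pooled-standard-deviation units, and a meta-analysis combines the per-study ds, typically weighting each by the inverse of its variance. A minimal sketch in Python with invented numbers (nothing below comes from Wouters et al.):

    import numpy as np

    # Cohen's d: standardized mean difference between two groups.
    def cohens_d(treatment, control):
        nt, nc = len(treatment), len(control)
        pooled_var = ((nt - 1) * np.var(treatment, ddof=1) +
                      (nc - 1) * np.var(control, ddof=1)) / (nt + nc - 2)
        return (np.mean(treatment) - np.mean(control)) / np.sqrt(pooled_var)

    rng = np.random.default_rng(1)
    games = rng.normal(0.3, 1.0, 100)    # simulated scores, game condition
    lecture = rng.normal(0.0, 1.0, 100)  # simulated scores, conventional condition
    print("one study: d = %.2f" % cohens_d(games, lecture))

    # Fixed-effect pooling: weight each study's d by the inverse of its variance.
    d = np.array([0.10, 0.45, 0.30, 0.25])  # made-up per-study effect sizes
    v = np.array([0.02, 0.05, 0.01, 0.03])  # made-up sampling variances
    w = 1.0 / v
    print("pooled: d = %.2f" % (np.sum(w * d) / np.sum(w)))  # about 0.26 here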

The authors examined a large set of moderator variables and this is where things get interesting. Here are a few of these findings:
  1. Students learn more when playing games in groups than when playing alone.
  2. Peer-reviewed studies showed larger effects than others. (This analysis is meant to address the bias against publishing null results... but the interpretation in this case was clouded by small N's.)
  3. Age of student had no impact.

But two of the most interesting moderators significantly modify the big conclusions.

First, gaming showed no advantage over conventional instruction when the experiment used random assignment. When non-random assignment was used, gaming showed a robust advantage. So it's possible (or even likely) that games in these studies were more effective only when they interacted with some factor in the gamer that is self-selected (or selected by the experimenter or teacher). And we don't know yet what that factor is.

Second, the researchers say that gaming showed an advantage over "conventional instruction," but followup analyses show that gaming showed no advantage over what they called "passive instruction"--that is, teacher talk or reading a textbook. All of the advantage accrued when games were compared to "active instruction," described as "methods that explicitly prompt learners to learning activities (e.g., exercises, hypertext training)." So gaming (in this data set) is not really better than conventional instruction; it's better than one type of instruction (and in the US, probably the less often encountered type).

So yeah, I think the question in this review is ill-posed. What we really want to know is how to structure better games. That requires much more fine-grained experiments on the gaming experience, not blunt variables. This will be painstaking work.

Still, you've got to start somewhere and this article offers a useful snapshot of where we are.

EDIT 5:00 a.m. EST 2/11/13. In the original post I failed to make explicit another important conclusion--there may be caveats on when and how the games examined are superior to conventional instruction, but they were almost never worse. This is not an unreasonable bar, and as a group the games tested pass it.

Wouters, P., van Nimwegen, C., van Oostendorp, H., & van der Spek, E. D. (2013). A meta-analysis of the cognitive and motivational effects of serious games. Journal of Educational Psychology. Advance online publication. doi: 10.1037/a0031311

The Science in Gove's Speech

2/6/2013

 
Michael Gove, the UK's Secretary of State for Education, certainly has a flair for oratory.

In his most recent speech, he accused his political opponents of favoring "Downton Abbey-style" education (meaning one that perpetuates class differences), he evoked a 13-year-old servant girl reading Keats, and he cited as inspirations the late British reality TV star Jade Goody (best known for being ignorant) and the Marxist political theorist Antonio Gramsci.

Predictably, press coverage in Britain has focused on these details. (So, of course, have the Tweets.) The Financial Times and the Telegraph pointed to Gove's political challenge to Labour. The Guardian led with the Goody & Gramsci angle.

But these points of color distract from the real aim. The fulcrum of the speech is the argument that a knowledge-based curriculum is essential to bring greater educational opportunity to disadvantaged children. (The BBC got half the story right.)

The logic is simple:

1) Knowledge is crucial to support cognitive processes (e.g., Carnine & Carnine, 2004; Hasselbring, 1988; Willingham, 2006).

2) Children who grow up in disadvantaged circumstances have fewer opportunities to learn important background knowledge at home (Walker et al., 1994), and they come to school with less knowledge, which has an impact on their ability to learn new information at school (Grissmer et al., 2010) and likely leads to a negative feedback cycle whereby they fall farther and farther behind (Stanovich, 1986).

Gove is right. And he's right to argue for a knowledge-based curriculum. The curriculum is the element most likely to ameliorate achievement gaps between advantaged and disadvantaged students, because a good fraction of that difference is fueled by differences in cultural capital in the home--differences that schools must try to make up. (Indeed, a knowledge-based curriculum is a critical component of KIPP and other "no excuses" schools in the US.)

I'm not writing to defend all education policies undertaken by the current British government--I'm not knowledgeable enough about those policies to defend or attack them.

But I find the response from Stephen Twigg (Labour's shadow education secretary) disquieting, because he seems to have missed Gove's point.

"Instead of lecturing others, he should listen to business leaders, entrepreneurs, headteachers and parents who think his plans are backward looking and narrow. We need to get young people ready for a challenging and competitive world of work, not just dwell on the past." (As quoted in the Financial Times.)

It's easy to scoff at a knowledge-based curriculum as backward-looking. Memorization of math facts when we have calculators? Knowledge in the age of Google?

But if you mistake advocacy for a knowledge-based curriculum as wistful nostalgia for a better time, or as "old fashioned," you just don't get it.

Surprising though it may seem, you can't just Google everything. You actually need to have knowledge in your head to think well. So a knowledge-based curriculum is the best way to get young people "ready for the world of work."

Mr. Gove is rare, if not unique, among high-level education policy makers in understanding the scientific point he made in yesterday's speech. You may agree or disagree with the policies Mr. Gove sees as the logical consequence of that scientific point, but education policies that clearly contradict it are unlikely to help close the achievement gap between wealthy and poor.

References

Carnine, L., & Carnine, D. (2004). The interaction of reading skills and science content knowledge when teaching struggling secondary students. Reading & Writing Quarterly, 20(2), 203-218.

Grissmer, D., Grimm, K. J., Aiyer, S. M., Murrah, W. M., & Steele, J. S. (2010). Fine motor skills and early comprehension of the world: Two new school readiness indicators. Developmental Psychology, 46(5), 1008-1017.

Hasselbring, T. S. (1988). Developing math automaticity in learning handicapped children: The role of computerized drill and practice. Focus on Exceptional Children, 20(6), 1-7.

Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21(4), 360-407.

Walker, D., Greenwood, C., Hart, B., & Carta, J. (1994). Prediction of school outcomes based on early language production and socioeconomic factors. Child Development, 65(2), 606-621.

How to Make a Young Child Smarter

2/4/2013

 
If the title of this post struck you as brash, I came by it honestly: it's the title of a terrific new paper by three NYU researchers (Protzko, Aronson & Blair, 2013). The authors sought to review all interventions meant to boost intelligence, and they cast a wide net, seeking any intervention for typically-developing children from birth to kindergarten age that used a standard IQ test as the outcome measure and that was evaluated in a randomized controlled trial (RCT).

A feature of the paper I especially like is that none of the authors publish in the exact areas they review. Blair mostly studies self-regulation, and Aronson, gaps due to race, ethnicity or gender. (Protzko is a graduate student studying with Aronson.) So the paper is written by people with a lot of expertise, but who don't begin their review with a position they are trying to defend. They don't much care which way the data come out.

So what did they find? The paper is well worth reading in its entirety--they review a lot in just 15 pages--but there are four marquee findings.
First, the authors conclude that infant formula supplemented with long-chain polyunsaturated fatty acids boosts intelligence by about 3.5 points, compared to formula without. They conclude that the same boost is observed if pregnant mothers receive the supplement. There are not sufficient data to conclude that other supplements--riboflavin, thiamine, niacin, zinc, and B-complex vitamins--have much impact, although the authors suggest (with extreme caution) that B-complex vitamins may prove helpful.

Second, interactive reading with a child raises IQ by about 6 points. The interactive aspect is key; interventions that simply encouraged reading or provided books had little impact. Effective interventions provided information about how to read to children: asking open-ended questions, answering questions children posed, following children's interests, and so on.

Third, the authors report that sending a child to preschool raises his or her IQ by a little more than 4 points. Preschools that include a specific language development component raise IQ scores by more than 7 points. There were not enough studies to differentiate what made some preschools more effective than others.

Fourth, the authors report on interventions that they describe as "intensive," meaning they involved more than preschool alone. The researchers sought to significantly alter the child's environment to make it more educationally enriching. All of these studies involved low-SES children (following the well-established finding that low-SES kids have lower IQs than their better-off counterparts due to differences in opportunity. I review that literature here.)  Such interventions led to a 4 point IQ gain, and a 7 point gain if the intervention included a center-based component. The authors note the interventions have too many features to enable them to pinpoint the cause, but they suggest that the data are consistent with the hypothesis that the cognitive complexity of the environment may be critical. They were able to confidently conclude (to their and my surprise) that earlier interventions helped no more than those starting later.
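To put those point gains in perspective: IQ tests are normed to a standard deviation of 15, so converting points into standardized effect sizes is one line of arithmetic. (This back-of-envelope conversion is mine, not the authors'.)

    # IQ is normed to mean 100, SD 15, so a gain in points converts to an
    # effect size as d = points / 15. Rough, illustrative arithmetic only.
    findings = [("supplemented formula", 3.5),
                ("interactive reading", 6.0),
                ("preschool", 4.0),
                ("intensive intervention + center", 7.0)]
    for label, points in findings:
        print("%-32s +%.1f IQ points = d of %.2f" % (label, points, points / 15))

By that yardstick these are small-to-medium effects, which is respectable for interventions of this kind.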

Those are the four interventions with the best track record. (Some others fared less well. Training working memory in young children "has yielded disappointing results.")

The data are mostly unsurprising, but I still find the article a valuable contribution: a reliable, easy-to-understand review of an important topic.

Even better, this looks like the beginning of what the authors hope will be a longer-term effort they are calling the Database on Raising Intelligence--a compendium of RCTs based on interventions meant to boost IQ. That may not be everything we need to know about how to raise kids, but it's a darn important piece, and such a Database will be a welcome tool.

