Daniel Willingham--Science & Education
Hypothesis non fingo

Collateral damage of excessive reading comprehension strategy instruction

4/30/2012

 
I was just at a reading conference and gave a talk on reading comprehension strategies.

I’ve written about them before (article here). The next paragraph provides just a brief summary of what I’ve written. The figure below shows the strategies themselves, if you’re not familiar with them.

The short version of my conclusion is that they don't really improve the comprehension process per se. Rather, they help kids who have become good decoders realize that the point of reading is communication, and that if they can successfully say written words aloud but cannot understand what they've read, that's a problem. Evidence for this view includes data showing that kids don't benefit much from reading comprehension instruction after 7th grade, likely because they've all drawn this conclusion by then, and that increased practice with reading comprehension strategies doesn't bring any added benefit. It's a one-time increment.

[Figure: the reading comprehension strategies]
How much time is devoted to reading comprehension strategy instruction? I can't find good (or even poor) data on this question, and I doubt they exist. There is so much variation among districts (and probably even among classrooms) on this issue that it's hard to draw a conclusion with much confidence. Any time I talk about reading, a lot of teachers, coaches, and administrators tell me that enormous amounts of time go to reading comprehension strategy instruction in their district--but I'm sure the people who make a point of mentioning this to me are not a random sample.

Whatever the proportion of time, much of it is wasted, at least if educators think it’s improving comprehension, because the one-time boost to comprehension can be had for perhaps five or ten sessions of 20 or 30 minutes each.

Some reading comprehension strategies might be useful for other reasons. For example, a teacher might want her class to create a graphic organizer as a way of understanding how an author builds a narrative arc.

The wasted time obviously represents a significant opportunity cost. But has anyone ever considered that implementing these strategies makes reading REALLY BORING? Everyone agrees that one of our long-term goals in reading instruction is to get kids to love reading. We hope that more kids will spend more time reading and less time playing video games, watching TV, etc.
How can you get lost in a narrative world if you think you’re supposed to be posing questions to yourself all the time? How can a child get really absorbed in a book about ants or meteorology if she thinks that reading means pausing every now and then to anticipate what will happen next, or to question the author’s purpose?

To me, reading comprehension strategies seem to take a process that could bring joy, and turn it into work.

Does chewing gum help you concentrate? Maybe briefly.

4/24/2012

 
Should kids be allowed to chew gum in class? If a student said "but it helps me concentrate. . ." should we be convinced?

If it provides a boost, it's short-lived.

It's pretty well established that a burst of glucose provides a brief cognitive boost (see review here), so the question is whether chewing gum in particular provides any edge over and above that, or whether a benefit would be observed when chewing sugar-free gum.
One study (Wilkinson et al., 2002) compared gum-chewing to no-chewing (and to "sham chewing," in which subjects were asked to pretend to chew gum, which seems awkward). Subjects performed about a dozen tasks, including tests of vigilance (i.e., sustained attention) and of short-term and long-term memory.

Researchers reported some positive effect of gum-chewing for four of the tests. It's a little hard to tell from the brief write-up, but it appears that the investigators didn't correct their statistics for the multiple tests.
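To see why that matters, here's a minimal sketch (Python, with made-up p-values rather than the study's actual numbers) of how a simple Bonferroni correction changes what counts as significant when you run a dozen tests:

# Illustrative only: a dozen hypothetical p-values, not the numbers
# from Wilkinson et al. (2002).
p_values = [0.04, 0.03, 0.20, 0.01, 0.76, 0.55, 0.048, 0.33, 0.62, 0.09, 0.45, 0.02]
alpha = 0.05
n_tests = len(p_values)

# Uncorrected: each test is judged against alpha on its own.
uncorrected = [p for p in p_values if p < alpha]

# Bonferroni correction: divide alpha by the number of tests performed.
corrected = [p for p in p_values if p < alpha / n_tests]

print(f"'Significant' without correction: {len(uncorrected)} of {n_tests}")
print(f"'Significant' with Bonferroni:    {len(corrected)} of {n_tests}")
# With 12 tests at alpha = .05, you expect roughly one 'significant' result
# by chance alone even if chewing gum does nothing at all.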

This throw-everything-at-the-wall-and-see-what-sticks approach may be characteristic of this research. Another study (Smith, 2010) took that same approach and concluded that there were some positive effects of gum chewing for some of the tasks, especially for feelings of alertness. (This study did not use sugar-free gum, so it's hard to tell whether the effect is due to the chewing or to the glucose.)

A more recent study (Kozlov, Hughes & Jones, 2012), using a more standard short-term memory paradigm, found no benefit for gum chewing.

What are we to make of this grab-bag of results? (And please note this blog does not offer an exhaustive review.)

A recent paper (Onyper, Carr, Farrar & Floyd, 2011) offers a plausible resolution. They suggest that the act of mastication offers a brief--perhaps ten or twenty minute--boost to cognitive function due to increased arousal. So we might see benefit (or not) to gum chewing depending on the timing of the chewing relative to the timing of cognitive tasks.

The upshot: teachers might allow or disallow gum chewing in their classrooms for a variety of reasons, but there is not much evidence that it provides a significant cognitive advantage.

EDIT: Someone emailed to ask whether kids with ADHD benefit. The one study I know of reported a cost to vigilance with gum-chewing for kids with ADHD.

Kozlov, M. D., Hughes, R. W., & Jones, D. M. (2012). Gummed-up memory: Chewing gum impairs short-term recall. Quarterly Journal of Experimental Psychology, 65, 501-513.

Onyper, S. V., Carr, T. L., Farrar, J. S., & Floyd, B. R. (2011). Cognitive advantages of chewing gum. Now you see them, now you don't. Appetite, 57, 321-328.

Smith, A. (2010). Effects of chewing gum on cognitive function, mood and physiology in stressed and unstressed volunteers. Nutritional Neuroscience, 13, 7-16.

Wilkinson, L., Scholey, A., & Wesnes, K. (2002). Chewing gum selectively improves aspects of memory in healthy volunteers. Appetite, 38, 235-236.

Training working memory *might* make you smarter

4/20/2012

 
The New York Times Magazine has an article on working memory training and the possibility that it boosts one type of intelligence.

I think the article is a bit--but only a bit--too optimistic in its presentation.

The article correctly points out that a number of labs have replicated the basic finding: training with one or another working memory task leads to increases in standard measures of fluid intelligence, most notably, Raven's Progressive Matrices.
[Figure: the dual N-back task, from the NY Times article]
Working memory is often trained with an N-back task, shown in the figure from the NY Times article. You're presented with a series of stimuli, e.g., you hear a sequence of letters. You press a button if a stimulus is the same as the one before (N=1), or the one before last (N=2), or the one before that (N=3). You start with N=1, and N increases if you are successful. (Larger N makes the task harder.) To make it much harder, researchers can add a second stream of stimuli (e.g., the colored squares in the figure) and ask you to monitor BOTH streams in an N-back task.

That is the training task that you are to practice. (And although the figure calls it a "game" it's missing one usual feature of a game; it's no fun at all.)
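If you want the mechanics spelled out, here is a minimal sketch (Python) of how a single-stream version of the task is scored; the letter sequence is hypothetical, and the actual training software described in the article is of course more elaborate:

# Minimal sketch of scoring a single-stream N-back task; the letter
# sequence is hypothetical, not taken from any actual training program.
def nback_targets(stimuli, n):
    """Return the positions where the current item matches the item n back."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

letters = list("TKTLKTTQKQ")

for n in (1, 2, 3):
    print(f"N={n}: press the button at positions {nback_targets(letters, n)}")
# Larger N means holding more items in mind at once, which is what makes
# the task a working memory exercise; the dual version adds a second
# stream (the colored squares) to be tracked simultaneously.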

There are two categories of outcome measures taken after training. In a near-transfer task, subjects are given some other measure of working memory to see if their capacity has increased. In a far-transfer task, a task is administered that isn't itself a test of working memory, but of a process that we think depends on working memory capacity.

All the excitement has been about far-transfer measures, namely that this training boosts intelligence, about which more in a moment. But it's actually pretty surprising and interesting that labs are reporting near-transfer. That's a novel finding, and contradicts a lot of work that's come before, showing that working memory training tends to benefit only the particular working memory task used during training, and doesn't even transfer to other working memory tasks.

The far-transfer claim has been that the working memory training boosts fluid intelligence. Fluid intelligence is one's ability to reason, see patterns, and think logically, independent of specific experience. Crystallized intelligence, in contrast, is stuff that you know, knowledge that comes from prior experience. You can see why working memory capacity might lead to more fluid intelligence--you've got a greater workspace in which to manipulate ideas.

A standard measure of fluid intelligence is the Raven's Progressive Matrices task, in which you see a pattern of figures and must say which of several choices would complete the pattern, as shown below.

[Figure: an example Raven's Progressive Matrices item]
So, is this finding legit? Should you buy an N-back training program for your kids?

I'd say the jury is still out.

The Times quotes Randy Engle--a highly regarded working memory researcher--on the subject, and he can hardly conceal his scorn:  “May I remind you of ‘cold fusion’?”

Engle--who is not one of those scientists who has made a career out of criticizing others--has a lengthy review of the working memory training literature which you can read here.

Another recent review (which was invited for the journal Brain & Cognition) concluded "Sparse evidence coupled with lack of scientific rigor, however, leaves claims concerning the impact and duration of such brain training largely unsubstantiated. On the other hand, at least some scientific findings seem to support the effectiveness and sustainability of training for higher brain functions such as attention and working memory."

My own take is pretty close to that conclusion.

There are enough replications of this basic effect that it seems probable that something is going on. The most telling criticism of this literature is that the outcome measure is often a single task.

You can't use a single task like the Ravens and then declare that fluid intelligence has increased because NO task is a pure measure of fluid intelligence. There are always going to be other factors that contribute to task performance.

The best measure of an abstract construct like "fluid intelligence" is one that uses several measures of what look to be quite different tasks, but which you have reason to think all call on fluid intelligence. Then you use statistical methods to look for shared variance among the tasks.
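For readers who like to see the idea in concrete form, here's a toy sketch (Python, simulated scores, nothing to do with any actual study) of what "shared variance" across several tasks looks like:

# Toy illustration of 'shared variance': simulate three different-looking
# tasks that all partly depend on one latent ability, then see how much of
# the variance a single common factor captures. Hypothetical data only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=n)                                 # unobserved 'fluid intelligence'

matrices  = 0.7 * latent + rng.normal(scale=0.7, size=n)    # task 1
analogies = 0.6 * latent + rng.normal(scale=0.8, size=n)    # task 2
series    = 0.5 * latent + rng.normal(scale=0.9, size=n)    # task 3

scores = np.column_stack([matrices, analogies, series])
corr = np.corrcoef(scores, rowvar=False)

# The largest eigenvalue of the correlation matrix tells us how much of the
# total variance a single shared factor accounts for.
eigvals = np.linalg.eigvalsh(corr)
print(f"Variance captured by the common factor: {eigvals[-1] / eigvals.sum():.0%}")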

So what we'd really like to see is better performance after working memory training on several different tasks that all seem to call on fluid intelligence.

The fact is that in many of these studies, researchers have tried to show transfer to more than one task, and the training transfers to one, but not the other.

Here's a table from a 2010 review by Torkel Klingberg showing this pattern.
[Table from the Klingberg (2010) review showing this pattern]
This work is really just getting going, and the inconsistency of the findings means one of two things. Either the training regimens need to be refined, whereupon we'll see the transfer effects more consistently, OR the benefits we've seen thus far were mostly artifactual, a consequence of uninteresting quirks in the designs of the studies or the tasks.

My guess is that the truth lies somewhere between these two--there's something here, but less than many people are hoping. But it's too early to say with much confidence.

Should boys have male teachers?

4/18/2012

 
In primary school, a student's relationship with his or her teacher has a significant impact on the student's academic progress. Students with positive relationships are more engaged and learn more (e.g., Hughes et al, 2008). In addition, teachers are more likely to have negative relationships with boys than with girls (e.g., Hamre & Pianta, 2001).
Previous research has not, however, accounted for the gender of the teacher. Perhaps conflict is more likely when teacher and student are of different sexes, and because there are more female than male teachers, we end up concluding that boys tend not to get along with their teachers.

A new study (Spilt, Koomen & Jak, in press) indicates that's not the case.

This appears to be the first large-scale study that examined teacher-student relationships in primary school while accounting for the sex of teachers.

Teachers completed questionnaires about their relationships with their students. The questionnaires measured three constructs:
  • Closeness: warmth and open communication. Sample item: "If upset, this child will seek comfort from me."
  • Conflict: negative interactions and the need for the teacher to correct student behavior. Sample item: "This child remains angry or resentful after being disciplined."
  • Dependency: clinginess on the part of the student. Sample item: "This child asks for my help when he or she really does not need help."
All in all, the data did not support the idea that boys connect better emotionally with male teachers.

For Closeness, female teachers generally felt closer to their students than male teachers did. Male teachers felt about equally close to boys and girls, but female teachers felt closer to girls than to boys.

For Conflict, female teachers reported less conflict than male teachers did. Both male and female teachers reported less conflict with girls than with boys.

For Dependency, female teachers reported less dependency than male teachers did. There were no differences between boys and girls on this measure.

This research has been difficult to conduct, simply because there are so few male teachers in the elementary grades that most samples are too small for a meaningful analysis. This is just one study, but the results indicate that all teachers--male and female--have a tougher time with boys: more conflictual relationships are reported with boys than with girls, and female teachers report less close relationships with boys.


Hamre, B. K., & Pianta, R. C. (2001). Early teacher–child relationships and the  trajectory of children's school outcomes through eighth grade. Child Development, 72, 625–638.

Hughes, J. N., Luo, W., Kwok, O. M., & Loyd, L. K. (2008). Teacher–student support, effortful engagement, and achievement: A 3-year longitudinal study. Journal of Educational Psychology, 100, 1–14.

Spilt, J. L., Koomen, H. M. Y., & Jak, S. (in press). Are boys better off with male and girls with female teachers? A multilevel investigation of measurement invariance and gender match in teacher-student relationship quality. Journal of School Psychology.

Teaching students about plagiarism reduces plagiarism.

4/16/2012

 
Most colleges have strict policies about student plagiarism, often including stringent penalties for those who violate the rules. (At the University of Virginia, where I teach, the penalty is expulsion.) Yet infractions occur. Why?

My own intuition has been that plagiarism is often due to oversight or panic. A student will fall behind and, with a deadline looming, get sloppy in the writing of a paper: a few sentences or even a paragraph makes its way into the paper without attribution. In the rush to finish, the student forgets about it, or decides it doesn't matter.

Thomas Dee and Brian Jacob had a different idea.

Some data (e.g., Power, 2009) indicate that even college students are not very knowledgeable about what constitutes plagiarism and how to avoid it, and so many instances of plagiarism may actually be accidental.

Given the stiff penalties, why don't students bone up on the rules? Dee & Jacob point out that this may be an instance of rational ignorance.  That is, it's logical for students not to try to obtain better information about plagiarism; the cost of learning this information is relatively high because the rules seem complex, and the payoff seems small because the odds of punishment for plagiarism are low.
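Their argument is just a cost-benefit calculation. Here's a back-of-the-envelope version (Python, with entirely made-up numbers, not figures from Dee and Jacob):

# Back-of-the-envelope version of the rational-ignorance argument, with
# entirely made-up numbers (not figures from Dee and Jacob).
hours_to_learn_rules = 3.0    # perceived cost of studying the citation rules
value_of_an_hour     = 1.0    # in arbitrary 'effort units'
p_caught             = 0.02   # perceived odds an accidental slip gets punished
penalty              = 100.0  # effort-unit cost of being punished

cost_of_learning = hours_to_learn_rules * value_of_an_hour
expected_cost_of_ignorance = p_caught * penalty

print(f"Cost of learning the rules:        {cost_of_learning:.1f}")
print(f"Expected cost of staying ignorant: {expected_cost_of_ignorance:.1f}")
# With numbers like these, staying ignorant 'pays'; lowering the cost of
# learning the rules is what flips the calculation.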

Dee and Jacob's idea: reduce plagiarism by reducing the cost of learning about what constitutes plagiarism.

Their experiment included 1,256 papers written by 573 students in a total of 28 humanities and social-science courses during a semester at a selective liberal arts college. Half of the students were required to complete a "short but detailed interactive tutorial on understanding and avoiding plagiarism."

The student papers were analyzed with plagiarism-detection software. In the control group, plagiarism was observed in 3.3 percent of papers. (Almost every instance was a matter of using sentences without attribution.) Students who had completed the tutorial had a plagiarism rate of about 1.3 percent.

Thus, a relatively simple and quite inexpensive intervention may be highly effective in reducing at least one variety of plagiarism. Replicating this finding in other types of coursework--science and mathematics--would be important, as would replication at other institutions, including less selective colleges, and high schools. Even with those limitations, this is a promising start.

This paper was just published as:

Dee, T. S. & Jacob, B. A. (2012) Rational ignorance in education: A field experiment in student plagiarism. Journal of Human Resources, 47, 397-434.

(I've linked to the NBER publication above because it's freely downloadable.)

Power, L. G. (2009). University Students’ Perceptions of Plagiarism. Journal of Higher Education, 80, 643-662.

Is the internet killing books?

4/12/2012

 
A lot of people have been tweeting or posting to Facebook a link to a graph which was posted to the Atlantic Monthly website on April 6, with this provocative headline:
[Image: the Atlantic's headline]
And here's the chart you are urged to show, derived from Gallup poll numbers over the years.
[Chart: percentage of Americans reading books, from Gallup polls, 1949-2005]
We're invited to conclude that because the percentage of readers increased after the advent of the Internet, the Internet did not have a negative impact on book reading.

All of the postings I've seen have apparently taken this conclusion at face value, so it seems like it's worth going through why this conclusion is not justified.

First, lots of stuff happened between 1949 and 2005. For example, household income increased for middle- and high-income families. It could be that the internet has negatively affected book reading, but a number of other factors have increased it, so overall we see an increase. The idea that other factors are having a big impact on reading seems legitimate, given the big increase in reading from 1957 to 1990, a year in which very few people had internet access. So perhaps those factors are continuing to boost reading, despite the negative impact of the internet.

The type of analysis we're implicitly being asked to perform is a variety of time-series analysis. It's useful in situations where one can't conduct an experiment with a control group. For example, I might track classroom behavior daily for one month using a scale that runs from 1 to 10. I find that it ranges from 4 to 6 every day. The teacher implements a new classroom management scheme, and from that day forward, classroom behavior ranges from 7 to 9 every day.

Interpreting the new classroom management scheme as the cause of the change is still not certain--some outside-of-class factor could have happened to occur on that same day and prompted the change in classroom behavior. But the fact that the change in behavior happened over such a narrow time window makes us more confident that such a coincidence is unlikely.

And of course it helps my confidence that it's the same class. In the chart above, we're looking at events that happened over years, with different people who we hope had similar characteristics.
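Just to make the logic of that before-and-after comparison concrete, here's a minimal sketch (Python, using the made-up classroom ratings from the example above):

# Toy version of the classroom example from the text: made-up daily behavior
# ratings on a 1-10 scale, before and after the new management scheme.
import statistics

before = [4, 5, 6, 4, 5, 5, 6, 4, 5, 6, 5, 4, 6, 5, 5]
after  = [7, 8, 9, 7, 8, 8, 9, 7, 8, 9, 8, 7, 9, 8, 8]

print(f"Mean rating before: {statistics.mean(before):.1f}")
print(f"Mean rating after:  {statistics.mean(after):.1f}")
print(f"Jump at the intervention: {statistics.mean(after) - statistics.mean(before):.1f} points")
# The sharper and more immediate the jump, the less plausible it is that
# some coincidental outside factor produced the change.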

To really get at the effect of the internet on reading habits, you need more finely controlled data. A number of studies were done in the 1950s examining the effect of television on reading habits. The best of these (see Coffin, 1955) measured reading habits in a city that did not have a television station but was poised to get one, and then measured reading habits again after people in the city had access to television. (The results showed an impact on reading, especially light fiction. Newspaper reading was mostly unaffected, as television news was still in its infancy. Radio listening took a big hit.)

Second, we might note that the most recent year on the chart is 2005. According to the Pew Internet and American Life Project, only 55% of Americans had internet access at that point, and only 30% had broadband. So perhaps the negative impact wouldn't be observed until more people had internet access.

This brings us to more serious studies of whether use of the Internet displaces other activities. The studies I know of (e.g., Robinson, 2011) conclude that Internet use does not displace other activities, but rather enhances them. The data are actually a bit weird--IT use is associated with more of everything: more leisure reading, more newspaper reading, more visits to museums, more playing of music, more volunteering, and more participation in sports.

The obvious explanation would be that heavy IT users have higher incomes and more leisure time, but the relationships held after the author controlled for education, age, race, gender, and income--though most of these effects were much attenuated.

This research is really just getting going, and I don't think we're very close to understanding the problem.

In sum, the question of what people do with their leisure time and how one activity influences another is complicated. One chart will not settle it.




Coffin, T. E. (1955) Television's impact on society. American Psychologist, 10, 630-641.
Robinson, J. P. (2011) Arts and leisure participation among IT users: Further evidence of time enhancement over time displacement. Social Science Computer Review, 29,  470-480.

How to abuse standardized tests

4/9/2012

 
The insidious thing about tests is that they seem so straightforward. I write a bunch of questions. My students try to answer them. And so I find out who knows more and who knows less.

But if you have even a minimal knowledge of the field of psychometrics, you know that things are not so simple.

And if you lack that minimal knowledge, Howard Wainer would like a word with you.

Wainer is a psychometrician who spent many years at the Educational Testing Service and now works at the National Board of Medical Examiners. He describes himself as the kind of guy who shouts back at the television when he sees something to do with standardized testing that he regards as foolish. These one-way shouting matches occur with some regularity, and Wainer decided to record his thoughts more formally.

The result is an accessible book, Uneducated Guesses, explaining the source of his ire on 10 current topics in testing. It makes for an interesting read for anyone with even a minimal interest in the topic.

For example, consider making a standardized test like the SAT or ACT optional for college applicants, a practice that seems egalitarian and surely harmless. Officials at Bowdoin College have made the SAT optional since 1969. Wainer points out the drawback--useful information about the likelihood that students will succeed at Bowdoin is omitted.

Here's the analysis. Students who didn't submit SAT scores with their application nevertheless took the test. They just didn't submit their scores. Wainer finds that, not surprisingly, students who chose not to submit their scores did worse than those who did, by about 120 points.

[Figure taken from Wainer's blog]

Wainer also finds that those who didn't submit their scores had worse GPAs in their freshman year, and by about the amount that one would predict, based on the lower scores.

So although one might reject the use of standardized admissions tests out of some conviction, if the job of admissions officers at Bowdoin is to predict how students will fare there, they are leaving useful information on the table.
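Wainer's "about the amount that one would predict" is just a regression prediction. A minimal sketch of the logic (Python, entirely hypothetical numbers, not Bowdoin's data):

# Hypothetical illustration of Wainer's point: fit freshman GPA on SAT for
# students who submitted scores, then see what that fit predicts for a group
# whose scores run about 120 points lower. Not Bowdoin's actual data.
import numpy as np

rng = np.random.default_rng(1)
sat = rng.normal(1300, 80, size=400)                       # submitters' SAT scores
gpa = 1.0 + 0.0018 * sat + rng.normal(0, 0.3, size=400)    # their freshman GPAs

slope, intercept = np.polyfit(sat, gpa, 1)                 # GPA = intercept + slope * SAT

gap_in_sat = 120
predicted_gpa_gap = slope * gap_in_sat
print(f"Predicted freshman GPA gap for a {gap_in_sat}-point SAT gap: {predicted_gpa_gap:.2f}")
# Wainer's finding is that non-submitters' actual GPAs fell short by roughly
# the amount a fit like this predicts from their (unsubmitted) scores.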

The practice does bring a different sort of advantage to Bowdoin, however. The apparent average SAT score of their students increases, and average SAT score is one factor in the quality rankings offered by US News and World Report.

In another fascinating chapter, Wainer offers a for-dummies guide to equating tests. In a nutshell, the problem is that one sometimes wants to compare scores on tests that use different items—for example, different versions of the SAT. As Wainer points out, if the tests have some identical items, you can use performance on those items as “anchors” for the comparison. Even so, the solution is not straightforward, and Wainer deftly takes the reader through some of the issues.
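As a rough illustration of what anchor-based equating involves, here's a toy version of chained mean equating (Python, hypothetical numbers; this is my simplification, and operational equating methods are considerably more careful):

# Toy version of chained mean equating (hypothetical numbers; real
# equating methods are considerably more careful than this sketch).
form_b_mean, anchor_mean_b_group = 28.5, 6.8   # group that took Form B
form_a_mean, anchor_mean_a_group = 31.0, 7.4   # group that took Form A

def form_b_to_form_a(score_b):
    """Chain a Form B total score through the shared anchor items and onto
    the Form A scale by shifting with the group-specific mean differences."""
    anchor_equivalent = score_b - (form_b_mean - anchor_mean_b_group)
    return anchor_equivalent + (form_a_mean - anchor_mean_a_group)

print(form_b_to_form_a(30))   # a raw 30 on Form B, expressed on Form A's scale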

But what if there is very little overlap on the tests?

Wainer offers this analogy. In 1998, the Princeton High School football team was undefeated. In the same year, the Philadelphia Eagles won just three games. If we imagine each as a test-taker, the high school team got a perfect score, whereas the Eagles got just three items right. But the “tests” each faced contained very different questions and so they are not  comparable. If the two teams competed, there's not much doubt as to who would win.

The problem seems obvious when spelled out, yet one often hears calls for uses of tests that would entail such comparisons—for example, comparing how much kids learn in college, given that some major in music, some in civil engineering, and some in French.

And yes, the problem is the same when one contemplates comparing student learning in a high school science class and a high school English class as a way of evaluating their teachers. Wainer devotes a chapter to value-added measures. I won't go through his argument, but will merely telegraph it: he's not a fan.

In all, Uneducated Guesses is a fun read for policy wonks. The issues Wainer takes on are technical and controversial—they represent the intersection of an abstruse field of study and public policy. For that reason, the book can't be read as a definitive guide. But as a thoughtful starting point, the book is rare in its clarity and wisdom.

Learning styles FAQ

4/5/2012

 
I get so many questions about learning styles that I added an FAQ to my website. You can find it here.

Electronic textbooks: What's the rush?

4/2/2012

 
David Daniel and I have a letter in the latest issue of Science. It's behind a paywall, so I thought I'd provide a summary of the substance.

David and I note that there is, in some quarters, a rush to replace conventional paper textbooks with electronic textbooks. It is especially noteworthy that members of the Obama administration are eager to speed this transition. (See this report.)

On the face of it, the case for this transition is obvious: most people seem to like reading on their Nook, Kindle, or iPad--certainly sales of the devices and of ebooks are booming. And electronic textbooks offer obvious advantages that traditional textbooks don't, most notably easy updates and embedded features such as hyperlinks, video, and collaboration software.

But David and I urged more caution.

We should note that there are not many studies out there regarding the use of electronic textbooks, but those that exist show mixed results. A consistent finding is that, given the choice, students prefer traditional textbooks. That's true regardless of their experience with ebooks, so it's not because students are unfamiliar with them (Woody, Daniel & Baker, 2010). Further, some data indicate that reading electronic textbooks, although it leads to comparable comprehension, takes longer (e.g., Dillon, 1992; Woody et al, 2010).

Why don't students like electronic textbooks if they like ebooks? The two differ. Ebooks typically have a narrative structure, they are usually pretty easy to read, and we read them for pleasure. Textbooks, in contrast, have a hierarchical structure, the material is difficult and unfamiliar, and we read them for learning and retention. Students likely interact with textbooks differently than with books they read for pleasure.

That may be why the data for electronic books are more promising for the early grades. Elementary reading books tend to have a narrative structure, and students are not asked to study from the books the way older kids are.

Further, many publishers are not showing a lot of foresight in how they integrate video and other features into electronic textbooks. A decade of research (much of it by Rich Mayer and his collaborators and students) shows that multimedia learning is more complex than one would think. Videos, illustrative simulations, hyperlinked definitions--all of these can aid comprehension OR hurt comprehension, depending on sometimes subtle differences in how they are placed in the text, the specifics of the visuals, the individual abilities of readers, and so on.

None of this is to say that electronic textbooks are a bad thing, nor to deny that they may eventually replace traditional textbooks. But two points ought to be kept in mind.
(1) The great success of ebooks--which simply port traditional books into another format--may not translate to electronic textbooks. Textbooks have different content and different structure, and they are read for different purposes.
(2) Electronic textbooks stand a much higher chance of success if publishers exploit the rich research literature on multimedia learning, but most are not doing so.

For these two reasons, it's too early to wave the flag and shout "Hurrah!" for electronic textbooks.


A. Dillon, Ergonomics 35, 1297 (1992).
W. D. Woody, D. B. Daniel, C. Baker, Comput. Educ. 55, 945 (2010).
