Part One: Does Processing Fluency Really Matter for Metacognition in Actual Learning Situations?

By Michael J. Serra, Texas Tech University

Part I: Fluency in the Laboratory

Much recent research demonstrates that learners judge their knowledge (e.g., memory or comprehension) to be better when information seems easy to process and worse when information seems difficult to process, even when eventual test performance is not predicted by such experiences. Laboratory-based researchers often argue that the misuse of such experiences as the basis for learners’ self-evaluations can produce metacognitive illusions and lead to inefficient study. In the present post, I review these effects obtained in the laboratory. In the second part of this post, I question whether these outcomes are worth worrying about in everyday, real-life learning situations.

What is Processing Fluency?

Have you ever struggled to hear a low-volume or garbled voicemail message, or struggled to read small or blurry printed text? Did you experience some relief after raising the volume on your phone or putting on your reading glasses and trying again? What if you didn’t have your reading glasses with you at the time? You might still be able to read the small printed text, but it would take more effort and might literally feel more effortful than if you had your glasses on. Would the feeling of effort you experienced while reading without your glasses affect your appraisal of how much you liked or how well you understood what you read?

When we process information, we often have a co-occurring experience of processing fluency: the ease or difficulty we experience while physically processing that information. Note that this experience is technically independent of the innate complexity of the information itself. For example, an intricate and conceptually confusing physics textbook might be printed in a large, easy-to-read font (high difficulty, perceptually fluent), while a child might express a simple message to you in a voice too low to be easily understood over the noise of a birthday party (low difficulty, perceptually disfluent).

Fluency and Metacognition

Certainly, we know that the innate complexity of learning materials is going to relate to students’ acquisition of new information and eventual performance on tests. Put differently, easy materials will be easy for students to learn and difficult materials will be difficult for students to learn. And it turns out that perceptual disfluency – difficulty processing information – can actually improve memory under some limited conditions (for a detailed examination, see Yue et al., 2013). But how does processing fluency affect students’ metacognitive self-evaluations of their learning?

In the modal laboratory-based examination of metacognition (for a review, see Dunlosky & Metcalfe, 2009), participants study learning materials (these might be simple memory materials or complex reading materials), make explicit metacognitive judgments in which they rate their learning or comprehension for those materials, and then complete a test over what they’ve studied. Researchers can then compare learners’ judgments to their test performance in a variety of ways to determine the accuracy of their self-evaluations (for a review, see Dunlosky & Metcalfe, 2009). As you might know from reading other posts on this website, we usually want learners to accurately judge their learning so they can make efficient decisions on how to allocate their study time or what information to focus on when studying. Any factor that can reduce that accuracy is likely to be problematic for ultimate test performance.

Metacognition researchers have examined how fluency affects participants’ judgments of their learning in the laboratory. The figure in this post includes several examples of ways in which researchers have manipulated the visual perceptual fluency of learning materials (i.e., memory materials or reading materials) to be perceptually disfluent compared to a fluent condition. These manipulations of visual processing fluency include presenting learning materials in an easy-to-read versus difficult-to-read typeface, either by literally blurring the font (Yue et al., 2013) or by adjusting the colors of the words and background to make them easy versus difficult to read (Werth & Strack, 2003); in an upside-down versus right-side-up orientation (Sungkhasettee et al., 2011); and with normal capitalization versus capitalizing every other letter (Mueller et al., 2013). (A conceptually similar manipulation of auditory perceptual fluency might make the volume high versus low, or the audio quality clear versus garbled.)

A wealth of empirical (mostly laboratory-based) research demonstrates that learners typically judge perceptually-fluent learning materials to be better-learned than perceptually-disfluent learning materials, even when learning (i.e., later test performance) is the same for the two sets of materials (e.g., Magreehan et al., 2015; Mueller et al., 2013; Rhodes & Castel, 2008; Susser et al., 2013; Yue et al., 2013). Although there is a current theoretical debate as to why processing fluency affects learners’ metacognitive judgments of their learning (i.e., Do the effects stem from the experience of fluency or from explicit beliefs about fluency?, see Magreehan et al., 2015; Mueller et al., 2013), it is nevertheless clear that manipulations such as those in the figure can affect how much students think they know. In terms of metacognitive accuracy, learners are often misled by feelings of fluency or disfluency that are neither related to their level of learning nor predictive of their future test performance.

As I previously noted, laboratory-based researchers argue that the misuse of such experiences as the basis for learners’ self-evaluations can produce metacognitive illusions and lead to inefficient study. But, this question has yet to receive much empirical scrutiny in more realistic learning situations. I explore the possibility that such effects will also obtain with realistic learning situations in the second part of this post.

References

Dunlosky, J., & Metcalfe, J. (2009). Metacognition. Thousand Oaks, CA: Sage Publications.

Magreehan, D. A., Serra, M. J., Schwartz, N. H., & Narciss, S. (2015, advance online publication). Further boundary conditions for the effects of perceptual disfluency on judgments of learning. Metacognition and Learning.

Mueller, M. L., Tauber, S. K., & Dunlosky, J. (2013). Contributions of beliefs and processing fluency to the effect of relatedness on judgments of learning. Psychonomic Bulletin & Review, 20, 378-384.

Rhodes, M. G., & Castel, A. D. (2008). Memory predictions are influenced by perceptual information: evidence for metacognitive illusions. Journal of Experimental Psychology: General, 137, 615-625.

Sungkhasettee, V. W., Friedman, M. C., & Castel, A. D. (2011). Memory and metamemory for inverted words: Illusions of competency and desirable difficulties. Psychonomic Bulletin & Review, 18, 973-978.

Susser, J. A., Mulligan, N. W., & Besken, M. (2013). The effects of list composition and perceptual fluency on judgments of learning (JOLs). Memory & Cognition, 41, 1000-1011.

Werth, L., & Strack, F. (2003). An inferential approach to the knew-it-all-along phenomenon. Memory, 11, 411-419.

Yue, C. L., Castel, A. D., & Bjork, R. A. (2013). When disfluency is—and is not—a desirable difficulty: The influence of typeface clarity on metacognitive judgments and memory. Memory & Cognition, 41, 229-241.


Unskilled and Unaware: A Metacognitive Bias

by John R. Schumacher, Eevin Akers, & Roman Taraban (all from Texas Tech University).

In 1995, McArthur Wheeler robbed two Pittsburgh banks in broad daylight, making no attempt to disguise himself. When he was arrested that night, he objected, “But I wore the juice.” Because lemon juice can be used as an invisible ink, Wheeler believed that rubbing his face with lemon juice would make it invisible to the banks’ surveillance cameras. Kruger and Dunning (1999) used Wheeler’s story to exemplify a metacognitive bias through which relatively unskilled individuals overestimate their skill: they are both unaware of their ineptitude and hold an inflated sense of their knowledge or ability. This is called the Dunning-Kruger effect, and it also seems to apply to some academic settings. For example, Kruger and Dunning found that some students accurately predict their performance prior to taking a test: they predict that they will do well, and they actually do well. Other students predict that they will do well but perform poorly; these students hold an inflated sense of how well they will do and thus fit the Dunning-Kruger effect. Because these students’ predictions do not match their performance, we describe them as poorly calibrated. Good calibration involves metacognitive awareness. This post explores how note taking relates to calibration and metacognitive awareness.

Some of the experiments in our lab concern the benefits of note taking. In these experiments, students were presented with a college lecture. Note takers recalled more than non-notetakers, who simply watched the video (Jennings & Taraban, 2014). The question we explored was whether good note taking skills improved students’ calibration of how much they know and thereby reduced the unskilled and unaware effect reported by Kruger and Dunning (1999).

In one experiment, participants watched a 30-minute video lecture while either taking notes (notetakers) or simply viewing the video (non-notetakers). They returned 24 hours later. They predicted the percentage of information they believed they would recall, using a scale of 0 to 100, and then took a free-recall test, without being given an opportunity to study their notes or mentally review the prior day’s video lecture. They then studied their notes (notetakers) or mentally reviewed the lecture (non-notetakers) for 12 minutes, and took a second free-recall test. To assess the Dunning-Kruger effect, we subtracted the actual percentage of lecture material recalled on each test (0 to 100) from participants’ predictions of how much they would recall on each test (0 to 100). For example, if a participant predicted he or she would correctly recall 75% of the material on a test and actually recalled 50%, the calibration score would be +25 (75 – 50 = 25). Values close to +100 indicated extreme overconfidence, values close to -100 indicated extreme underconfidence, and values close to 0 indicated good calibration. To answer our question about how note taking relates to calibration, we compared the calibration scores for the two groups (notetakers and non-notetakers) in the two situations (before and after reviewing notes or reflecting). These analyses indicated that the two groups did not differ in calibration on the first free-recall test. However, to our surprise, notetakers became significantly more overconfident, and thus less calibrated in their predictions, than non-notetakers on the second test. After studying, notetakers’ calibration became worse.
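The calibration score described above is simple arithmetic, and can be sketched in a few lines. This is only an illustration: the function names and the ±10-point band used for the labels are my own assumptions, not part of the study.

```python
# Sketch of the calibration score described above:
# calibration = predicted recall (0-100) minus actual recall (0-100).
# Positive scores indicate overconfidence, negative scores underconfidence,
# and scores near zero good calibration.

def calibration_score(predicted: float, actual: float) -> float:
    """Return predicted minus actual recall, each on a 0-100 scale."""
    return predicted - actual

def describe(score: float, tolerance: float = 10.0) -> str:
    """Label a calibration score; the +/-10 band is an illustrative assumption."""
    if score > tolerance:
        return "overconfident"
    if score < -tolerance:
        return "underconfident"
    return "well calibrated"

# The worked example from the text: predicted 75%, recalled 50%.
print(calibration_score(75, 50))            # 25
print(describe(calibration_score(75, 50)))  # overconfident
```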

Note taking increases test performance. So why doesn’t note taking improve calibration? Since note takers are more “skilled”, that is, have encoded and stored more information from the lecture, shouldn’t they be more “aware”, that is, better calibrated, as the Dunning-Kruger effect would imply? One possible explanation is that studying notes immediately increases the amount of information processed in working memory. The information that participants will be asked to recall shortly is highly active and available. This sense of availability produces the inflated (and false) prediction that much information will be remembered on the test. Is this overconfidence harmful to the learner? It could be harmful to the extent that individuals often self-generate predictions of how well they will do on a test in order to self-regulate their study behaviors. Poor calibration of these predictions could lead to the individual failing to recognize that he or she requires additional study time before all material is properly stored and able to be recalled.

If note taking itself is not the problem, then is there some way students can improve their calibration after studying in order to better regulate subsequent study efforts? The answer is “yes.” Research has shown that predictions of future performance improve if there is a short delay between studying information and predicting subsequent test performance (Thiede, Dunlosky, Griffin, & Wiley, 2005). In order to improve calibration after studying notes, students should be encouraged to wait, after studying their notes, before judging whether they need additional study time. In order to improve metacognitive awareness with respect to calibration, students need to understand that immediate judgments of how much they know may be inflated. They need to be aware that waiting a short time before judging whether they need more study will result in more effective self-regulation of study time.

References
Jennings, E., & Taraban, R. (2014, May). Note-taking in the modern college classroom: Computer, paper and pencil, or listening? Paper presented at the Midwestern Psychological Association (MPA), Chicago, IL.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

Thiede, K. W., Dunlosky, J., Griffin, T. D., & Wiley, J. (2005). Understanding the delayed-keyword effect on metacomprehension accuracy. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(6), 1267-1280.


When is Metacognitive Self-Assessment Skill “Good Enough”?

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029 (with Steve Fleisher, CSU Channel Islands; Christopher Cogan, Independent Consultant; Karl Wirth, Macalester College; and Eric Gaze, Bowdoin College)

In Nuhfer, Cogan, Fleisher, Gaze, and Wirth (2016), we noted the statement by Zell and Krizan (2014, p. 111) that “…it remains unclear whether people generally perceive their skills accurately or inaccurately.” In our paper, we showed why innumeracy is a major barrier to the understanding of metacognitive self-assessment.

Another barrier to progress exists because scholars who attempt separately to do quantitative measures of self-assessment have no common ground from which to communicate and compare results. This occurs because there is no consensus on what constitutes “good enough” versus “woefully inadequate” metacognitive self-assessment skills. Does overestimating self-assessment skill by 5% allow labeling a person as “overconfident?” We do not believe so. We think that a reasonable range must be exceeded before those labels should be considered to apply.

The five of us are now working on a sequel to our Numeracy paper cited above. In the sequel, we interpret data from 1,154 paired measures from a behavioral science perspective, extending our first paper’s description of the data through graphs and numerical analyses. Because we had a database of over a thousand participants, we decided to use it to propose the first classification scheme for metacognitive self-assessment. It defines categories based on the magnitude of self-assessment inaccuracy (Figure 1).

Figure 1. Draft of a proposed classification scheme for metacognitive self-assessment, with categories defined by the magnitude of inaccuracy of self-assessed competence: the difference, in percentage points (ppts), between ratings of self-assessed competence and scores from tests of actual competence, both expressed as percentages.

If you wonder where the “good” definition comes from in Figure 1, we disclosed on page 19 of our Numeracy paper: “We designated self-assessment accuracies within ±10% of zero as good self-assessments. We derived this designation from 69 professors self-assessing their competence, and 74% of them achieving accuracy within ±10%.”

The other breaks that designate “Adequate,” “Marginal,” “Inadequate,” and “Egregious” admittedly derive from natural breaks in measures expressed in percentages. Distributing our 1,154 participants across these categories, we found that over two-thirds had adequate self-assessment skills, a bit over 21% exhibited inadequate skills, and the remainder fell within the “marginal” category. Less than 3% qualified by our definition as “unskilled and unaware of it.”

These results indicate that the popular perspectives found in web searches that portray people in general as having grossly overinflated views of their own competence may be incomplete and perhaps even erroneous. Other researchers are now discovering that the correlations between paired measures of self-assessed competence and actual competence are positive and significant. However, to establish the relationship between self-assessed competency and actual competency appears to require more care in taking the paired measures than many of us researchers earlier suspected.

Do the categories as defined in Figure 1 appear reasonable to other bloggers, or do these conflict with your observations? For instance, where would you place the boundary between “Adequate” and “Inadequate” self-assessment? How would you quantitatively define a person who is “unskilled and unaware of it?” How much should a person overestimate/underestimate before receiving the label of “overconfident” or “underconfident?”

If you have measurements and data, please compare your results with ours before you answer. Data or not, be sure to become familiar with the mathematical artifacts summarized in our January Numeracy paper (linked above) that were mistakenly taken for self-assessment measures in earlier peer-reviewed self-assessment literature.

Our fellow bloggers constitute some of the nation’s foremost thinkers on metacognition, and we value their feedback on how Figure 1 accords with their experiences as we work toward finalizing our sequel paper.


The Challenge of Deep Learning in the Age of LearnSmart Course Systems

by Lauren Scharff, Ph.D. (U. S. Air Force Academy)

One of my close friends and colleagues can reliably be counted on to point out that students are rational decision makers. There is only so much time in their days, and they have full schedules. If there are ways for students to spend less time per course and still “be successful,” they will find them. Unfortunately, their efficient choices may short-change their long-term, deep learning.

This tension between efficiency and deep learning was again brought to my attention when I learned about the “LearnSmart” (LS) text application that automatically comes with the e-text chosen by my department for the core course I’m teaching this semester. On the plus side, the publisher has incorporated learning science (metacognitive prompts and spacing of review material) into the design of LearnSmart. Less positively, some aspects of the LearnSmart design seem to lead many students to choose efficiency over deep learning.

In a nutshell, the current LS design prompts learning shortcuts in several ways. Pre-highlighted text discourages reading from non-highlighted material, and the fact that the LS quiz questions primarily come from highlighted material reinforces those selective reading tendencies. A less conspicuous learning trap results from the design of the LS quiz credit algorithm that incorporates the metacognitive prompts. The metacognition prompts not only take a bit of extra time to answer, but students only get credit for completing questions for which they indicate good understanding of the question material. If they indicate questionable understanding, even if they ultimately answer correctly, that question does not count toward the required number of pre-class reading check questions. [If you’d like more details about the LS quiz process design, please see the text at the bottom of this post.]

Last semester, the fact that many of our students were choosing efficiency over deep learning became apparent when the first exam was graded. Despite very high completion of the LS pre-class reading quizzes and lively class discussions, exam grades on average were more than a letter grade lower than previous semesters.

The bottom line is, just like teaching tools, learning tools are only effective if they are used in ways that align with objectives. As instructors, our objectives typically are student learning (hopefully deep learning in most cases). Students’ objectives might seem to be correlated with learning (e.g. grades) or not (e.g. what is the fastest way to complete this assignment?). If we instructors design our courses or choose activities that allow students to efficiently (quickly) complete them while also obtaining good grades, then we are inadvertently supporting short-cuts to real learning.

So, how do we tackle our efficiency-shortcut challenge as we go into this new semester? There is a tool the publisher offers to help us track student responses by levels of self-reported understanding and correctness. We can see whether any students are giving the majority of their responses in the “I know it” category. If many of those are also incorrect, it’s likely that they are prioritizing short-term efficiency over long-term learning, and we can talk to them one-on-one about their choices. That’s helpful, but it’s reactive.

The real question is: how do we get students to consciously prioritize their long-term learning over short-term efficiency? For that, I suggest additional explicit discussion and another layer of metacognition. I plan to regularly check in with the students, hold class discussions aimed at bringing their choices about their learning behaviors into conscious awareness, and positively reinforce their self-regulation of deep-learning behaviors.

I’ll let you know how it goes.

——————————————–

Here is some additional background on the e-text and the complimentary LearnSmart (LS) version of the text.

There are two ways to access the text. The first is an electronic version of the printed text, including nice annotation capabilities for students who want to underline, highlight, or take notes. The second is through the LS chapters. As mentioned above, when students open these chapters, they will find that some of the text has already been highlighted for them!

As they read through the LS chapters, students are periodically prompted with some LS quiz questions (primarily from highlighted material). These questions are where some of the learning science comes in. Students are given a question about the material. But, rather than being given the multiple choice response options right away, they are first given a metacognitive prompt. They are asked how confident they are that they know the answer to the question without seeing the response options. They can choose “I know it,” “Think so,” “Unsure,” or “No idea.” Once they answer about their “awareness” of their understanding, then they are given the response options and they try to correctly answer the question.

This next point is key: it turns out that in order to get credit for question completion in LS, students must do BOTH of the following: 1) choose “I know it” when indicating understanding, and 2) answer the question correctly. If students indicate any other level of understanding, or if they answer incorrectly, LS will give them more questions on that topic, and the effort for that question won’t count towards completion of the required number of questions for the pre-class activity.
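As I understand the credit rule just described, its logic can be sketched in a few lines. This is a hypothetical reconstruction for illustration only; the function and variable names are mine, not LearnSmart’s.

```python
# Hypothetical reconstruction of the LS completion rule described above.
# A question counts toward the pre-class requirement only when the student
# BOTH selects "I know it" AND answers correctly; any other outcome yields
# no credit, and LS queues more questions on that topic.

def counts_toward_completion(confidence: str, answered_correctly: bool) -> bool:
    """Return True if this attempt counts toward the required question total."""
    return confidence == "I know it" and answered_correctly

print(counts_toward_completion("I know it", True))   # True: counts
print(counts_toward_completion("Think so", True))    # False: correct, but no credit
print(counts_toward_completion("I know it", False))  # False: more questions queued
```

Written this way, the shortcut is easy to see: claiming “I know it” can never cost the student anything, so always claiming it is the time-minimizing strategy.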

And there’s the rub. Efficient students quickly learn that they can complete the pre-class reading quiz activity much faster if they choose “I know it” for every metacognitive understanding probe. If they then guess at the question and answer correctly, it counts toward completion of the activity and they move on. If they answer incorrectly, LS gives them another question from that topic, but they are no worse off with respect to time and effort than if they had indicated that they weren’t sure of the answer.

If students actually take the time to take advantage of the LS quiz features rather than shortcut them (there are additional features I haven’t mentioned here), their deep learning should be enhanced. However, unless they come to value deep learning over efficiency and short-term grades (e.g., quiz completion), there is no benefit to the technology. In fact, it might further undermine their learning through a false sense of understanding.


Metacognitive Judgments of Knowing

Roman Taraban, Ph.D., Dmitrii Paniukov, John Schumacher, Michelle Kiser, at Texas Tech University

“The more you know, the more you know you don’t know.” Aristotle

Students often make judgments of learning (JOLs) when studying. Essentially, they make a judgment about future performance (e.g., a test) based on a self-assessment of their knowledge of studied items. Therefore, JOLs are considered metacognitive judgments. They are judgments about what the person knows, often related to some future purpose. Students’ accuracy in making these metacognitive judgments is academically important. If students make accurate JOLs, they will apply just the right amount of time to mastering academic materials. If students do not devote enough time to study, they will underperform on course assessments. If students spend more time than necessary, they are being inefficient.

As instructors, it would be helpful to know how accurate students are in making these judgments. There are several ways to measure the accuracy of JOLs; here we focus on one of them, termed calibration. Calibration is the difference between a student’s JOL related to some future assessment and his or her actual performance on that assessment. In the study we describe here, college students made JOLs (“On a scale of 0 to 100, what percent of the material do you think you can recall?”) after they read a brief expository text. Actual recall was measured in idea units (IUs) (Roediger & Karpicke, 2006); idea units are the chunks of meaningful information in the text. Calibration is here defined as JOL – Recalled IUs, or simply, predicted recall minus actual recall. If the calibration calculation yields a positive number, you are overconfident to some degree; if it yields a negative number, you are underconfident to some degree. If the result is zero, you are perfectly calibrated in your judgment.

The suggestion from Aristotle (see the quote above) is that gains in how much we know lead us to underestimate how much we know; that is, we will be underconfident. Conversely, when we know little, we may overestimate how much we know; that is, we will be overconfident. Studies using JOLs have found that children are overconfident (predicted recall minus actual recall is positive) (Lipko, Dunlosky, & Merriman, 2009; Was, 2015). Children think they know more than they do, even after several learning trials with the material. Studies with adults have found an underconfidence-with-practice (UWP) effect (Koriat et al., 2002): the more individuals learn, the more they underestimate their knowledge. The UWP effect is consistent with Aristotle’s suggestion. The question we ask here is which is it: if you lack knowledge, do your metacognitive judgments reflect overconfidence or underconfidence, and vice versa? Practically, as instructors, if students are poorly calibrated, what can we do to improve their calibration, that is, to recalibrate this metacognitive judgment?

We addressed this question with two groups of undergraduate students. Forty-three developmental-reading participants were recruited from developmental integrated reading and writing courses offered by the university, including Basic Literacy (n = 3), Developmental Literacy II (n = 29), and Developmental Literacy for Second Language Learners (n = 11). Fifty-two non-developmental participants were recruited from the Psychology Department subject pool. The non-developmental and developmental readers were comparable in mean age (18.3 and 19.8 years, respectively) and number of completed college credits (11.8 and 16.7, respectively), and each sample represented roughly fifteen academic majors. All participants took part for course credit. The students were asked to read one of two expository passages and then to recall as much as they could immediately. The two texts were each about 250 words long, had an average Flesch-Kincaid readability score of grade level 8.2, and contained 30 idea units each.

To answer our question, we first calculated calibration (predicted recall – actual recall) for each participant. Then we divided the total sample of 95 participants into quartiles based on the number of idea units each participant recalled. The mean proportions of correctly recalled idea units, out of 30 possible, with standard deviations in parentheses, were: Q1: .13 (.07); Q2: .33 (.05); Q3: .51 (.06); Q4: .73 (.09).

Using quartile as the independent variable and calibration as the dependent variable, we found that participants were overconfident (predicted recall > actual recall) in all four quartiles. However, there was also a significant decline in overconfidence from Quartile 1 to Quartile 4: Q1: .51; Q2: .39; Q3: .29; Q4: .08. Very clearly, the participants in the highest quartile were nearly perfectly calibrated: they over-predicted their actual performance by only about 8%, compared to the lowest quartile, who over-predicted by about 51%. This monotonic trend of decreasing overconfidence and improving calibration also held when we analyzed the two samples separately:

NON-DEVELOPMENTAL: Q1: .46; Q2: .39; Q3: .16; Q4: .10;

DEVELOPMENTAL: Q1: .57; Q2: .43; Q3: .39; Q4: .13.
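The quartile analysis above can be sketched in a few lines. This is an illustrative reconstruction with made-up data; the helper names and sample values are mine, not the study’s.

```python
# Illustrative sketch of the quartile analysis described above.
# Each participant has a predicted and an actual recall proportion;
# calibration is predicted minus actual (positive = overconfident).

def mean(xs):
    return sum(xs) / len(xs)

def quartile_calibration(participants):
    """Sort participants by actual recall, split into four equal groups,
    and return the mean calibration (predicted - actual) per quartile."""
    ranked = sorted(participants, key=lambda p: p["actual"])
    n = len(ranked)
    quartiles = [ranked[i * n // 4:(i + 1) * n // 4] for i in range(4)]
    return [mean([p["predicted"] - p["actual"] for p in q]) for q in quartiles]

# Hypothetical participants: the weakest recallers over-predict the most.
sample = [
    {"predicted": 0.60, "actual": 0.10},
    {"predicted": 0.65, "actual": 0.15},
    {"predicted": 0.70, "actual": 0.30},
    {"predicted": 0.72, "actual": 0.36},
    {"predicted": 0.78, "actual": 0.50},
    {"predicted": 0.80, "actual": 0.52},
    {"predicted": 0.80, "actual": 0.70},
    {"predicted": 0.82, "actual": 0.76},
]
print(quartile_calibration(sample))  # overconfidence shrinks from Q1 to Q4
```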

The findings here suggest that Aristotle may have been wrong when he stated that “The more you know, the more you know you don’t know.” Our findings would suggest that the more you know, the more you know you know. That is, calibration gets better the more you know. What is striking here is the vulnerability of weaker learners to overconfidence. It is the learners who have not encoded a lot of information from reading that have an inflated notion of how much they can recall. This is not unlike the children in the Lipko et al. (2009) research mentioned earlier. It is also clear in our analyses that typical college students as well as developmental college students are susceptible to overestimating how much they know.

It is not clear from this study what variables underlie low recall performance. Low background knowledge, limited vocabulary, and difficulty with syntax, could all contribute to poor encoding of the information in the text and low subsequent recall. Nonetheless, our data do indicate that care should be taken in assisting students who fall into the lower performance quartiles to make better calibrated metacognitive judgments. One way to do this might be by asking students to explicitly make judgments about future performance and then encouraging them to reflect on the accuracy of those judgments after they complete the target task (e.g., a class test). Koriat et al. (1980) asked participants to give reasons for and against choosing responses to questions before the participants predicted the probability that they had chosen the correct answer. Prompting students to consider the amount and strength of the evidence for their responses reduced overconfidence. Metacognitive exercises like these may lead to better calibration.

References

Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6(2), 107-118.

Koriat, A., Sheffer, L., & Ma’ayan, H. (2002). Comparing objective and subjective learning curves: Judgments of learning exhibit increased underconfidence with practice. Journal of Experimental Psychology: General, 131, 147–162.

Lipko, A. R., Dunlosky, J., & Merriman, W. E. (2009). Persistent overconfidence despite practice: The role of task experience in preschoolers’ recall predictions. Journal of Experimental Child Psychology, 102(2), 152-166.

Roediger, H., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255.

Was, C. (2015). Some developmental trends in metacognition. Retrieved from https://www.improvewithmetacognition.com/some-developmental-trends-in-metacognition/

 


Pausing Mid-Stride: Mining Metacognitive Interruptions In the Classroom

By Amy Ratto Parks, Ph.D., University of Montana

Metacognitive interventions are often the subject of research in educational psychology because researchers are curious about how these planned, curricular changes might impact the development of metacognitive skills over time. However, as a researcher in the fields of metacognition and rhetoric and composition, I am sometimes struck by the fact that the planned nature of empirical research makes it difficult for us to take advantage of important kairic moments in learning.

The rhetorical term kairic, taken from the Greek concept of kairos, generally represents a fortuitous window in time in which to take action toward a purpose. In terms of learning, kairic moments are those perfect little slivers in which we might suddenly gain insight into our own or our students’ learning. In the classroom, I like to think of these kairic moments as metacognitive interruptions rather than interventions because they aren’t planned ahead of time. Instead, the “interruptions” arise out of the authentic context of learning. Metacognitive interruptions are kairic moments in which we, as teachers, might be able to briefly access a point in which the student’s metacognitive strategies have either served or not served them well.

A few days ago I experienced a very typical teaching moment that turned out to be an excellent example of a fruitful metacognitive interruption: I asked the students to take out their homework and the moment I began asking discussion questions rooted in the assignment, I sensed that something was off. I saw them looking at each other’s papers and whispering across the tables, so I asked what was going on. One brave student said, “I think a bunch of us did the homework wrong.”

They were supposed to have completed a short analysis of a peer-reviewed article titled, “The Daily Show Effect: Candidate Evaluations, Efficacy, and American Youth” (Baumgartner & Morris, 2014). I got out the assignment sheet and asked the brave student, Rasa*, to read it aloud. She said, “For Tuesday, September 15. Read The Daily Show Effect: Candidate Evaluations…. oh wait. I see what happened. I read the other Jon Stewart piece in the book.” Another student jumped in and said, “I just analyzed the whole show” and a third said, “I analyzed Jon Stewart.”

In that moment, I experienced two conflicting internal reactions. The teacher in me was annoyed. How could this simple set of directions have caused confusion? And how far was this confusion going to set us back? If only half of the class had done the work, the rest of my class plan was unlikely to go well. However, the researcher in me was fascinated. How, indeed, had this simple set of instructions caused confusion? All of these students had completed a homework assignment, so they weren’t just trying to “get out of work.” Plus, they also seemed earnestly unsure about what had gone wrong.

The researcher in me won out. I decided to let the class plan go and I began to dig into the situation. By a show of hands I saw that 12 of the 22 students had done the correct assignment and 10 had completed some customized, new version of the homework. I asked them all to pause for a moment and engage in a metacognitive activity: they were to think back to the moment they read the assignment and ask themselves, where did I get mixed up?

Rasa said that she just remembered me saying something about The Daily Show in class, and when she looked in the table of contents, she saw a different article, “Political Satire and Postmodern Irony in the Age of Stephen Colbert and Jon Stewart” (Colletta, 2014), and read it instead. Other students said that they must not have read closely enough, but then another student said something interesting. She said, “I did read the correct essay, but it sounded like it was going to be too hard to analyze and I figured that you hadn’t meant for this to be so hard, so I just analyzed the show.” Other students nodded in agreement. I asked the group to raise their hands if they had read the correct essay. Many hands went up. Then I asked if they thought that the analysis they chose to do was easier than the one I assigned. All of them raised their hands.

Again, I was fascinated. In this very short conversation I had just watched rich, theoretical research play out before me. First, here was an example of the direct effect of power browsing (Kandra, Harden, & Babbra, 2012) mistakenly employed in the academic classroom. Power browsing is a relatively recently coined term that describes “skimming and scanning through text, looking for key words, and jumping from source to source” (Kandra et al., 2012). Power browsing can be a powerful overviewing strategy (Afflerbach & Cho, 2010) in an online reading environment where a wide variety of stimuli compete for the reader’s attention. Research shows that strong readers of non-electronic texts also employ pre-reading or skimming strategies (Dunlosky & Metcalfe, 2009); however, when readers mistakenly power browse in academic settings, it may result “in missed opportunities or incomplete knowledge” (Kandra et al., 2012, par. 18). About metacognition and reading strategies, Afflerbach and Cho (2010) write, “the good strategy user is always aware of the context of reading” (p. 206); clearly, some of my students had forgotten their reading context. Some of the students knew immediately that they hadn’t thoroughly read the assignment. As soon as I described the term “power browse” their faces lit up. “Yes!” said Rasa, “that’s exactly what I did!” Here was metacognition in action.

Second, as students described the reasoning behind choosing to read the assigned essay, but analyze something unassigned, I heard them offering a practical example of Flower and Hayes’ (1981/2011) discussion of goal-setting in the writing process. Flower and Hayes (1981/2011) said that writing includes, “not only the rhetorical situation and audience which prompts one to write, it also includes the writer’s own goals in writing” (p. 259). They went on to say that although some writers are able to “juggle all of these demands” others “frequently reduce this large set of restraints to a radically simplified problem” (p. 259). Flower and Hayes allow that this can sometimes cause problems, but they emphasize that “people only solve the problems they set for themselves” (p. 259).

Although I had previously seen many instances of students “simplifying” larger writing assignments in my classroom, I had never before had a chance to talk with students about what had happened in the moment when they realized something hadn’t worked. But here, they had just openly explained to me that the assignment had seemed too difficult, so they had recalibrated, or “simplified” it into something they thought they could do well and/or accomplish during their given timeframe.

This metacognitive interruption provided an opportunity to “catch” students in the moment when their learning strategies had gone awry, but my alertness to the kairic moment only came as a result of my own metacognitive skills: when it became clear that the students had not completed the work correctly, I paused before reacting and that pause allowed me to be alert to a possible metacognitive learning opportunity. When I began to reflect on this class period, I realized that my own alertness came as a result of my belief in the importance of teachers being metacognitive professionals so that we can interject learning into the moment of processing.

There is yet one more reason to mine these metacognitive interruptions: they provide authentic opportunities to teach students about metacognition and learning. The scene I described here could have had a very different outcome. It can be easy to see student behavior in a negative light. When students misunderstand something we thought we’d made clear, we sometimes make judgments about them being “lazy” or “careless” or “belligerent.” In this scenario it seems like it would have been justifiable to have gotten frustrated and lectured the students about slowing down, paying attention to details, and doing their homework correctly.

Instead, I was able to model the kind of cognitive work I would actually want to teach them: we slowed down and studied the mistake in a way that led the class to a conversation about how our minds work when we learn. Rather than including a seemingly-unrelated lecture on “metacognition in learning” I had a chance to teach them in response to a real moment of misplaced metacognitive strategy. Our 15-minute metacognitive interruption did not turn out to be a “delay” in the class plan, but an opening into a kind of learning that might sometimes just have to happen when the moment presents itself.

References

Baumgartner, J., & Morris, J. (2014). The Daily Show effect: Candidate evaluations, efficacy, and American youth. In C. Cucinella (Ed.), Funny. Southlake, TX: Fountainhead Press. (Reprinted from American Politics Research, 34(3), 2006, pp. 341-367).

Colletta, L. (2014). Political satire and postmodern irony in the age of Stephen Colbert and Jon Stewart. In C. Cucinella (Ed.), Funny. Southlake, TX: Fountainhead Press. (Reprinted from The Journal of Popular Culture, 42(5), 2009, pp. 856-874).

Dunlosky, J., & Metcalfe, J. (2009). Metacognition. Thousand Oaks, CA: Sage.

Flower, L., & Hayes, J. (2011). A cognitive process theory of writing. In V. Villanueva & K. Arola (Eds.), Cross-talk in comp theory: A reader, (3rd ed.), (pp. 253-277). Urbana, IL: NCTE. (Reprinted from College Composition and Communication, 32(4), (Dec., 1981), pp. 365-387).

Kandra, K. L., Harden, M., & Babbra, A. (2012). Power browsing: Empirical evidence at the college level. National Social Science Journal, 2, article 4. Retrieved from http://www.nssa.us/tech_journal/volume_2-2/vol2-2_article4.htm

Waters, H. S., & Schneider, W., (Eds.). (2010). Metacognition, strategy use, and instruction. New York, NY: The Guilford Press.

* Names have been changed to protect the students’ privacy.


Metacognition as Part of a Broader Perspective on Learning

This article includes six instructional strategies that promote self-regulation and ways that motivational, cognitive, and metacognitive skills can be enhanced using these strategies.

Schraw, G., Crippen, K. J., & Hartley, K. (2006). Promoting self-regulation in science education: Metacognition as part of a broader perspective on learning. Research in Science Education, 36(1-2), 111-139.


Metacognition and Self-Regulated Learning Constructs

This article reports findings from several studies: “Findings indicated convergence of self-report measures of metacognition, significant correlations between metacognition and academic monitoring, negative correlations between self-reported metacognition and accuracy ratings, and positive correlations between metacognition and strategy use and metacognition and motivation.”

Sperling, R. A., Howard, B. C., Staley, R., & DuBois, N. (2004). Metacognition and self-regulated learning constructs. Educational Research and Evaluation: An International Journal on Theory and Practice, 10(2), 117-139.


Metacognition: What Makes Humans Unique

by

Arthur L. Costa, Professor Emeritus, California State University, Sacramento

And

Bena Kallick, Educational Consultant, Westport, CT

————–

 

“I cannot always control what goes on outside. But I can always control what goes on inside.” – Wayne Dyer

————–

Try to solve this problem in your head:

How much is one half of two plus two?

Did you hear yourself talking to yourself? Did you find yourself having to decide whether you should take one half of the first two (which would give the answer three) or sum the twos first (which would give the answer two)?
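The ambiguity is purely one of grouping, which a line of arithmetic makes explicit (an illustration, not part of the original exercise):

```python
half_of_first_two = (1 / 2) * 2 + 2     # "one half of two, plus two"
sum_the_twos_first = (1 / 2) * (2 + 2)  # "one half of (two plus two)"
print(half_of_first_two)   # 3.0
print(sum_the_twos_first)  # 2.0
```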

If you caught yourself having an “inner” dialogue inside your brain, and if you had to stop to evaluate your own decision making/problem-solving processes, you were experiencing metacognition.

The human species is known as Homo sapiens sapiens, which loosely means “a being that knows its knowing” (or perhaps “knows that it knows”). What distinguishes humans from other forms of life is our capacity for metacognition: the ability to be a spectator of our own thoughts while we engage in them.

Occurring in the neocortex and therefore thought by some neurologists to be uniquely human, metacognition is our ability to know what we know and what we don’t know. It is our ability to plan a strategy for producing what information is needed, to be conscious of our own steps and strategies during the act of problem solving, and to reflect on and evaluate the productiveness of our own thinking. While “inner language,” thought to be a prerequisite, begins in most children around age five, metacognition is a key attribute of formal thought, flowering at about age eleven.

Interestingly, not all humans achieve the level of formal operations (Chiappetta, 1976). And as Alexander Luria, the Russian psychologist, found, not all adults metacogitate.

Some adults follow instructions or perform tasks without wondering why they are doing what they are doing. They seldom question themselves about their own learning strategies or evaluate the efficiency of their own performance. They have virtually no idea what they should do when they confront a problem and are often unable to explain their strategies of decision making. There is much evidence, however, to demonstrate that those who perform well on complex cognitive tasks, who are flexible and persevere in problem solving, and who consciously apply their intellectual skills are those who possess well-developed metacognitive abilities. They are those who “manage” their intellectual resources well: 1) their basic perceptual-motor skills; 2) their language, beliefs, knowledge of content, and memory processes; 3) their purposeful and voluntary strategies intended to achieve a desired outcome; and 4) their self-knowledge about their own learning styles and how to allocate resources accordingly.

When confronted with a problem to solve, we develop a plan of action, we maintain that plan in mind over a period of time, and then we reflect on and evaluate the plan upon its completion. Planning a strategy before embarking on a course of action helps us keep track of the steps in the sequence of planned behavior at the conscious awareness level for the duration of the activity. It facilitates making temporal and comparative judgments; assessing the readiness for more or different activities; and monitoring our interpretations, perceptions, decisions, and behaviors. Rigney (1980) identified the following self-monitoring skills as necessary for successful performance on intellectual tasks:

  • Keeping one’s place in a long sequence of operations;
  • Knowing that a subgoal has been obtained; and
  • Detecting errors and recovering from those errors either by making a quick fix or by retreating to the last known correct operation.

Such monitoring involves both “looking ahead” and “looking back.” Looking ahead includes:

  • Learning the structure of a sequence of operations;
  • Identifying areas where errors are likely;
  • Choosing a strategy that will reduce the possibility of error and will provide easy recovery; and
  • Identifying the kinds of feedback that will be available at various points, and evaluating the usefulness of that feedback.

Looking back includes:

  • Detecting errors previously made;
  • Keeping a history of what has been done to the present and thereby what should come next; and
  • Assessing the reasonableness of the present immediate outcome of task performance.

A simple example of this might be drawn from reading. While reading a passage have you ever had your mind “wander” from the pages? You “see” the words but no meaning is being produced. Suddenly you realize that you are not concentrating and that you’ve lost contact with the meaning of the text. You “recover” by returning to the passage to find your place, matching it with the last thought you can remember, and, once having found it, reading on with connectedness.

Effective thinkers plan for, reflect on, and evaluate the quality of their own thinking skills and strategies. Metacognition means becoming increasingly aware of one’s actions and the effects of those actions on others and on the environment; forming internal questions in the search for information and meaning; developing mental maps or plans of action; mentally rehearsing before a performance; monitoring plans as they are employed (being conscious of the need for midcourse correction if the plan is not meeting expectations); reflecting on the completed plan for self-evaluation; and editing mental pictures for improved performance.

This inner awareness and the strategy of recovery are components of metacognition. Questions that indicate whether we are becoming more aware of our own thinking include:

  • Are you able to describe what goes on in your head when you are thinking?
  • When asked, can you list the steps and tell where you are in the sequence of a problem-solving strategy?
  • Can you trace the pathways and dead ends you took on the road to a problem solution?
  • Can you describe what data are lacking and your plans for producing those data?

When students are metacognitive, we should see them persevering more when the solution to a problem is not immediately apparent. This means that they have systematic methods of analyzing a problem, knowing ways to begin, knowing what steps must be performed and when they are accurate or are in error. We should see students taking more pride in their efforts, becoming self-correcting, striving for craftsmanship and accuracy in their products, and becoming more autonomous in their problem-solving abilities.

Metacognition is an attribute of the “educated intellect.” Teaching students to think about their thinking can be a powerful tool in shaping, improving, internalizing, and habituating their thinking.

REFERENCES

Chiappetta, E. L. (1976). A review of Piagetian studies relevant to science instruction at the secondary and college level. Science Education, 60(2), 253-261.

Costa, A., & Kallick, B. (2008). Learning and leading with Habits of Mind: 16 essential characteristics for success. Alexandria, VA: ASCD.

Rigney, J. W. (1980). Cognitive learning strategies and qualities in information processing. In R. Snow, P. Federico & W. Montague (Eds.), Aptitudes, Learning, and Instruction, Volume 1. Hillsdale, NJ: Erlbaum.

 


Reciprocal Peer Coaching for Self-Reflection, Anyone?

By Cynthia Desrochers, California State University Northridge

I once joked with my then university president that I’d seen more faculty teach in their classrooms than she had. She nodded in agreement. I should have added that I’d seen more than the AVP for Faculty Affairs, all personnel committees, deans, or chairpersons. For some reason, university teaching is done behind closed doors, with no peering in on peers unless for personnel reviews. We attempted to change that at CSU Northridge when I directed its faculty development center from 1996 to 2005. Our Faculty Reciprocal Peer Coaching program typically drew a dozen or more cross-college dyads over the dozen semesters it was in existence. The program’s main goal was teacher self-reflection.

I believe I first saw the term peer coaching when reading a short publication by Joyce and Showers (1983). What struck me was their assertion that to have any new complex teaching innovation become part of one’s teaching repertoire required four steps: 1) understanding the theory/knowledge base undergirding the innovation, 2) observing an expert who is modeling how to do the innovation, 3) practicing the innovation in a controlled setting with coaching (e.g., micro-teaching in a workshop), and 4) practicing the innovation in one’s own classroom with coaching. They maintained that without all four steps, the innovation taught in a workshop would likely not be implemented in the classroom. Having spent much of my life teaching workshops about using teaching innovations, these steps became my guide, and I still use them today. In addition, after many years of coaching student teachers at UCLA’s Lab School, I realized that they were more likely to apply teaching alternatives that they identified and reflected upon in the post-conference than ones that I singled out. That is, they learned more from using metacognitive practices than from my direct instruction, so I began formulating some of the thoughts summarized below.

Fast forward many years to this past year, where I co-facilitated a yearlong eight-member Faculty Learning Community (FLC) focused on implementing the following Five Gears for Activating Learning: Motivating Learning, Organizing Knowledge, Connecting Prior Knowledge, Practicing with Feedback, and Developing Mastery [see previous blog]. With this FLC, we resurrected peer coaching on a voluntary basis in order to promote conscious use of the Five Gears in teaching. All eight FLC members not only volunteered to pair up for reciprocal coaching of one another, but they were eager to do so.

One faculty member asked me why it is called coaching, since an athletic coach often tells players what to do rather than helping them self-reflect. I responded that it’s because Joyce and Showers’ study looked at the research on training athletes and what that training required for skill transfer. They showed the need for many practice sessions combined with coaching in order to achieve mastery of any new complex move, be it on the playing field or in the classroom. Still, the point of confusion was noted, so now I refer to the process as Reciprocal Peer Coaching for Self-Reflection. This reflective type of peer coaching applies to cross-college faculty dyads who are seeking to more readily apply a new teaching innovation.

Reciprocal Peer Coaching for Self-Reflection applies all or some of the five phases of the Clinical Supervision model described by Goldhammer (1969): pre-observation conference, observation and data collection, data analysis and strategy, post-observation conference, and post-conference analysis. However, it is in the post-conference phase where much of the teacher self-reflection occurs and where the coach can benefit from an understanding of post-conference messages.

Prior to turning our FLC members loose to peer coach, we held a practicum on how to do it. And true to my statement above, I applied Joyce and Showers’ first three steps in our practicum (i.e., I explained the theory behind peer coaching, modeled peer coaching, and then provided micro-practice with a videotaped lesson taught by one of our FLC members). But in the micro-practice, right out of the gate, faculty coaches began telling the teacher how she used the Five Gears versus prompting her to reflect upon her own use first. Although I gently provided feedback in an attempt to redirect the post-conferences from telling to asking, it was a reminder of how firmly ingrained this default position has become with faculty, where the person observing a lesson takes charge and provides all the answers when conducting the post-conference. The reasons for this may include 1) prior practice as supervisors who are typically charged with this role, 2) the need to show their analytic prowess, or 3) the desire to give the teacher a break from doing all the talking. Whatever the reason, we want the teacher doing the reflective analysis of her own teaching and growing those dendrites as a result.

After this experience with our FLC, I crafted the conference-message matrix below and included conversation-starter prompts. Granted, I may have over-simplified the process, but it illustrates key elements for promoting Reciprocal Peer Coaching for Self-Reflection. Note that the matrix is arranged into four types of conference messages: successful and unsuccessful teaching-learning situations where the teacher identifies the topic of conversation after being prompted by the coach (messages #1 and #3), and successful and unsuccessful teaching-learning situations where the coach identifies the topic of conversation after being prompted by the teacher (messages #2 and #4). The goal of Reciprocal Peer Coaching for Self-Reflection is best achieved when the balance of the post-conference contains teacher self-reflection; hence, messages #1 and #3 should dominate the total post-conference conversation. Although the order of messages #1 through #4 is a judgment call, starting with message #1 permits the teacher to take the lead in identifying and reflecting upon her conscious use of the Gears and their outcome (using her metacognition) versus listening passively to the coach. An exception to beginning with message #1 may be that the teacher is too timid to sing her own praises; in this instance the coach may begin with message #2 when this reluctance becomes apparent. Note further that this model puts the teacher squarely in the driver’s seat throughout the entire post-conference; this is particularly important when it comes to message #4, which is often a sensitive discussion of unsuccessful teaching practices. If the teacher doesn’t want another’s critique at this time, she simply does not initiate message #4, and the coach is cautioned to abide by this decision.

Reciprocal Peer Coaching for Self-Reflection

The numbered points under each of the four types of messages are useful components for discussion during each message in order to further cement an understanding of which Gear is being used and its value for promoting student learning: 1) Identifying the teaching action from the specific objective data collected by the coach (e.g., written, video, or audio) helps to isolate the cause-effect teaching episode under discussion and its effect on student learning. 2) Naming the Gear (or naming any term associated with the innovation being practiced) increases our in-common teaching vocabulary, which is considered useful for any profession. 3) Discussing the generalization about how the Gear helps students learn reiterates its purpose, fostering motivation to use it appropriately. And 4) crafting together alternative teaching-learning practices for next time expands the teacher’s repertoire.

The FLC faculty reported that their classroom Reciprocal Peer Coaching for Self-Reflection sessions were a success. Specifically, they indicated that they used the Five Gears more consciously after discussing them during the post-conference; that the Five Gears were beginning to become part of their teaching vocabulary; and that they were using the Five Gears more automatically during instruction. Moreover, message #2 provided the unique benefit of having the coach identify the teacher’s unconscious use of the Five Gears, increasing teachers’ awareness of themselves as learners of an innovation, all of which serves to increase metacognition.

When reflecting upon how we might assist faculty in implementing the most promising research-based teaching-learning innovations, I see a system where every few years we allot reassigned time for faculty to engage in Reciprocal Peer Coaching for Self-Reflection.

References

Goldhammer, R. (1969). Clinical supervision. New York: Holt, Rinehart and Winston.

Joyce, B., & Showers, B. (1983). Power in staff development through research on training. Alexandria, VA: Association for Supervision and Curriculum Development.

Exploring the relationship between awareness, self-regulation, and metacognition

By John Draeger (SUNY Buffalo State)

Recent blog posts have considered the nature of metacognition and metacognitive instruction. Lauren Scharff, for example, defines metacognition as “the intentional and ongoing interaction between awareness and self-regulation” (Scharff, 2015). This post explores the relationship between the elements of this definition.

Scharff observes that a person can recognize that a pedagogical strategy isn’t working without changing her behavior (e.g., someone doesn’t change because she is unaware of alternative strategies) and a person can change her behavior without monitoring its efficacy (e.g., someone tries a technique that she heard about in a workshop without thinking through whether the technique makes sense within a particular learning environment). Scharff argues that a person engaging in metacognition will change her behavior when she recognizes that a change is needed. She will be intentional about when and how to make that change. And she will continue the new behavior only if there’s reason to believe that it is achieving the desired result. Metacognition, therefore, can be found in the interaction between awareness and self-regulated action. Moreover, because learning environments are fluid, the interaction between awareness and self-regulation must be ongoing. This suggests that awareness and self-regulation are necessary for metacognition.

In response, I offered what might seem to be a contrary view (Draeger, 2015). I argued that the term ‘metacognition’ is vague in two ways. First, it is composed of overlapping sub-elements. Second, each of these sub-elements falls along a continuum. For example, metacognitive instructors can be more (or less) intentional, more (or less) informed about evidence-based practice, more (or less) likely to have alternative strategies ready to hand, and more (or less) nimble with regards to when and how to shift strategies based on their “in the moment” awareness of student need. Sub-elements are neither individually necessary nor jointly sufficient for a full characterization of metacognition. Rather, a practice is metacognitive if it has “enough” of the sub-elements and they are far “enough” along the various continua.

Scharff helpfully suggests that metacognition must involve both awareness and action. I would add that awareness can be divided into sub-elements (e.g., reflection, mindfulness, self-monitoring, self-knowledge) and behavior can be divided into sub-elements (e.g., self-regulation, collective actions, institutional mandates). While I suspect that no one of the sub-elements is individually necessary for metacognition, Scharff has correctly identified two broad clusters of elements that are required for metacognition.

As I continue to think through the relationship between awareness and self-regulation, I am reminded of an analogy between physical exercise and intellectual growth. As I have said in a previous post, I am a gym rat. Among other things, I swim several times a week. A few years ago, however, I noticed that my stroke needed refinement. So, I contacted a swimming instructor. She found a handful of areas where I could improve, including my kick and the angle of my arms. As I worked on these items, it was often helpful to focus on my kick without worrying about the angle of my arms and vice versa. With time and effort, I got gradually better. Because my kick had been atrocious, focusing on that one area resulted in dramatic improvement. Because my arm angle hadn’t been all that bad, improvements were far less dramatic. Working on my kick and my arm angle combined to make me a better swimmer. Separating the various elements of my stroke allowed me to identify areas for improvement and allowed me to tackle my problem areas without feeling overwhelmed. However, even after working on the parts, I found that I still needed to put it together. Eventually, I found a swim rhythm that brought elements into alignment.

Likewise, it is often useful to separate elements of our pedagogical practice (e.g., awareness, self-regulation) because separation allows us to identify and target areas in need of improvement. If a person knows what she is doing isn’t working but doesn’t know what else to do, then she might focus on identifying alternative strategies. If a person knows of alternative strategies but does not know when or how to use them, then she might focus on her “in the moment” awareness and her ability to shift to new strategies as needed during class. Focusing on one element can give a person something concrete to work on without feeling overwhelmed by all the other moving parts. The separation is useful, but it is also somewhat artificial. By analogy, my kick and my arm angle are elements of my swim stroke, but they are also part of an interrelated process. While it is important to improve the parts, the ultimate goal is finding a way to integrate the changes into an effective whole. Metacognitive instructors seek to become more explicit, more intentional, more informed about evidence-based practice, and better able to make “in the moment” adjustments. Focusing on each of these elements can improve practice, but because the elements are interrelated, the ultimate aim is to integrate them into an effective whole.

References

Draeger, John (2015). “So what if ‘metacognition’ is vague!” Retrieved from https://www.improvewithmetacognition.com/so-what-if-metacognition-is-vague/

Scharff, Lauren (2015). “What do we mean by ‘metacognitive instruction’?” Retrieved from https://www.improvewithmetacognition.com/what-do-we-mean-by-metacognitive-instruction/

 


Metacognition and Specifications Grading: The Odd Couple?

By Linda B. Nilson, Clemson University

More than anything else, metacognition is awareness of what’s going on in one’s mind. This means, first, that a person sizes up a task before beginning it and figures out what kind of a task it is and what strategies to use. Then she monitors her thinking as she progresses through the task, assessing the soundness of her strategies and her success at the end.

So what does this have to do with specs grading?

In specs grading, all assignments and tests are graded pass/fail, credit/no credit, where “pass” means at least B or better work. A student product passes if it conforms to the specifications (specs) that an instructor described in the assignment or test directions. So either the students follow the directions and “get it right,” or the work doesn’t count. Partial credit doesn’t exist.

For the instructor, the main task is laying out the specs. A short reading compliance assignment may have specs as simple as: “You must answer all the study questions, and each answer must be at least 100 words long.” For more substantial assignments, the instructor can detail the “formula” or template of the assignment – that is, the elements and organization of a good literature review, research proposal, press release, or lab report – or provide a list of the questions that she wants students to answer, as for a reflection on a service-learning or group project experience. Especially for formulaic assignments, which so many undergraduate-level assignments are, models and examples bring the specs to life.

The stakes are higher for students than they are in our traditional grading system. With specs grading, it’s all or nothing: no sliding by with a careless, eleventh-hour product, because partial credit is no longer a given.

To be successful in a specs-graded course, students have to be aware of their thinking as they complete their assignments and tests. This means that students first have to pay attention to the directions, and the directions are themselves a learning experience when they explicitly lay out the formula for different types of work. Especially when enhanced with models, the specs supply the crucial information that we so often gloss over: exactly what the task involves. Otherwise, how would our students know? With clear specs, they learn what reflection involves, how a literature review is organized, and what a research proposal must contain. Then, during the task, students need to monitor and assess their work to determine whether it is indeed meeting the specs. “Does the depth of my response match the length requirement?” “Am I answering all the reflection questions?” “Am I following the proper organization?” “Have I written all the sections?”

Another distinguishing characteristic of specs grading is the replacement of the point system with “bundles” of assignments and tests. For successfully completing a bundle, students obtain final course grades. And they select the bundle and the grade they are willing to work for. To get a D, the bundle involves relatively little, unchallenging work. For higher grades, the bundles require progressively more work, more challenging work, or both. In addition, each bundle is associated with a set of learning outcomes, so a given grade indicates the outcomes a student has achieved.

If students fail to self-monitor and self-assess, they risk receiving no credit for their work and, given that it is part of a bundle, getting a lower grade in the course. And their grade is important for a whole new reason: because they chose the grade they wanted/needed and its accompanying workload. This element of choice and volition increases students’ sense of responsibility for their performance.

With specs grading, students do get limited opportunities to revise an unacceptable piece of work or to get a 24-hour extension on an assignment. These opportunities are represented by virtual tokens that students receive at the beginning of the course. Three is a reasonable number. This way, the instructor doesn’t have to screen excuses, requests for exceptions, and the like. She also has the option of giving students chances to earn tokens and of rewarding those with the most tokens at the end of the course.

Specs grading solves many of the problems that our traditional grading system has bred while strengthening students’ metacognition and sense of ownership of their grades. Details on using and transitioning to this grading system are in my 2015 book, Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time (Sterling, VA: Stylus).


So what if ‘metacognition’ is vague!

by John Draeger (SUNY Buffalo State)

When Lauren Scharff invited me to join Improve with Metacognition last year, I was only vaguely aware of what ‘metacognition’ meant. As a philosopher, I knew about various models of critical thinking and I had some inkling that metacognition was something more than critical thought, but I could not have characterized the extra bit. In her post last week, Scharff shared a working definition of ‘metacognitive instruction’ developed by a group of us involved as co-investigators on a project (Scharff, 2015). She suggested that it is the “intentional and ongoing interaction between awareness and self-regulation.” This is better than anything I had a year ago, but I want to push the dialogue further.

I’d like to take a step back to consider the conceptual nature of metacognition by applying an approach in legal philosophy used to analyze terms with conceptual vagueness. While clarity is desirable, Jeremy Waldron argues that there are limits to the level of precision that legal discourse can achieve (Waldron, 1994). This is not an invitation to be sloppy, but rather an acknowledgement that certain legal concepts are inescapably vague. According to Waldron, a concept can be vague in at least two ways. First, particular instantiations can fall along a continuum (e.g., actions can be more or less reckless, negligent, excessive, unreasonable). Second, some concepts can be understood in terms of overlapping features. Democracies, for example, can be characterized by some combination of formal laws, informal patterns of participation, shared history, common values, and collective purpose. These features are neither individually necessary nor jointly sufficient for a full characterization of the concept. Rather, a system of government counts as democratic if it has “enough” of the features. A particular democratic system may look very different from its democratic neighbor. This is in part because particular systems will instantiate the features differently and in part because particular systems might be missing some feature altogether. Moreover, democratic systems can share features with other forms of government (e.g., formal laws, common values, and collective purpose) without there being a clear boundary between democratic and non-democratic forms of government. According to Waldron, there can be vagueness within the concept of democracy itself and in the boundaries between it and related concepts.

While some might worry that the vagueness of legal concepts is a problem for legal discourse, Waldron argues that the lack of precision is desirable because it promotes dialogue. For instance, when considering whether some particular instance of forceful policing should be considered ‘excessive,’ we must consider the conditions under which force is justified and the limits of acceptability. Answering these questions will require exploring the nature of justice, civil rights, and public safety. Dialogue is valuable, in Waldron’s view, because it brings clarity to a broad constellation of legal issues even though clarity about any one of the constituents requires thinking carefully about the other elements in the constellation.

Is ‘metacognition’ vague in the ways that legal concepts can be vague? To answer this question, consider some elements in the metacognitive constellation as described by our regular Improve with Metacognition blog contributors. Self-assessment, for example, is a feature of metacognition (Fleisher, 2014; Nuhfer, 2014). Note, however, that it is vague. First, self-assessments may fall along a continuum (e.g., students and instructors can be more or less accurate in their self-assessments). Second, self-assessment is composed of a variety of activities (e.g., predicting exam scores, tracking gains in performance, understanding personal weak spots, and understanding one’s own level of confidence, motivation, and interest). These activities are neither individually necessary nor jointly sufficient for a full characterization of self-assessment. Rather, students or instructors are engaged in self-assessment if they engage in “enough” of these activities. Combining these two forms of vagueness, each of the overlapping features can itself fall along a continuum (e.g., more or less accurate at tracking performance or understanding motivations). Moreover, self-assessment shares features with other related concepts such as self-testing (Taraban, Paniukov, and Kiser, 2014), mindfulness (Was, 2014), calibration (Gutierrez, 2014), and growth mindsets (Peak, 2015). All are part of the metacognitive constellation of concepts. Each of these concepts is individually vague in both senses described above, and the boundaries between them are inescapably fuzzy. Turning to Scharff’s description of metacognitive instruction, all four constituent elements (i.e., ‘intentional,’ ‘ongoing interaction,’ ‘awareness,’ and ‘self-regulation’) are also vague in both senses described above. Thus, I believe that ‘metacognition’ is vague in the ways legal concepts are vague. However, if Waldron is right about the benefits of discussing and grappling with vague legal concepts (and I think he is), and if the analogy between vague concepts and the term ‘metacognition’ holds (and I think it does), then vagueness in this case should be perceived as desirable because it facilitates broad dialogue about teaching and learning.

As Improve with Metacognition celebrates its first birthday, I want to thank all those who have contributed to the conversation so far. Despite the variety of perspectives, each contribution helps us think more carefully about what we are doing and why. The ongoing dialogue can improve our metacognitive skills and enhance our teaching and learning. As we move into our second year, I hope we can continue exploring the rich nature of the metacognitive constellation of ideas.

References

Fleisher, Steven (2014). “Self-assessment, it’s a good thing to do.” Retrieved from https://www.improvewithmetacognition.com/self-assessment-its-a-good-thing-to-do/

Gutierrez, Antonio (2014). “Comprehension monitoring: the role of conditional knowledge.” Retrieved from https://www.improvewithmetacognition.com/comprehension-monitoring-the-role-of-conditional-knowledge/

Nuhfer, Ed (2014). “Self-assessment and the affective quality of metacognition, Part 1 of 2.” Retrieved from https://www.improvewithmetacognition.com/self-assessment-and-the-affective-quality-of-metacognition-part-1-of-2/

Peak, Charity (2015). “Linking mindset to metacognition.” Retrieved from https://www.improvewithmetacognition.com/linking-mindset-metacognition/

Scharff, Lauren (2015). “What do we mean by ‘metacognitive instruction’?” Retrieved from https://www.improvewithmetacognition.com/what-do-we-mean-by-metacognitive-instruction/

Taraban, Roman, Paniukov, Dmitrii, and Kiser, Michelle (2014). “What metacognitive skills do developmental college readers need?” Retrieved from https://www.improvewithmetacognition.com/what-metacognitive-skills-do-developmental-college-readers-need/

Waldron, Jeremy (1994). “Vagueness in Law and Language: Some Philosophical Issues.” California Law Review 83(2): 509-540.

Was, Chris (2014). “A mindfulness perspective on metacognition.” Retrieved from https://www.improvewithmetacognition.com/a-mindfulness-perspective-on-metacognition/

 


Who says Metacognition isn’t Sexy?

By Michael J. Serra at Texas Tech University

This past Sunday, you might have watched “The 87th Academy Awards” (i.e., “The Oscars”) on television. Amongst the nominees for the major awards were several films based on true events and real-life people, including two films depicting key events in the lives of scientists Stephen Hawking (The Theory of Everything) and Alan Turing (The Imitation Game).

There are few things in life that I am sure of, but one thing I can personally guarantee is this: No film studio will ever make a motion picture about the life of your favorite metacognition researcher. Believe it or not, the newest issue of Entertainment Weekly does not feature leaked script details about an upcoming film chronicling how J. T. Hart came up with the idea to study people’s feelings of knowing (Hart, 1967), and British actors are not lining up to depict John Flavell laying down the foundational components for future theory and research on metacognition (Flavell, 1979). Much to my personal dismay, David Fincher hasn’t returned my calls regarding the screenplay I wrote about that time Thomas Nelson examined people’s judgments of learning at extreme altitudes on Mt. Everest (Nelson et al., 1990).

Just as film studios seem to lack interest in portraying metacognition research on the big screen, our own students sometimes seem uninterested in anything we might tell them about metacognition. Even the promise of improving their grades sometimes doesn’t seem to interest them! Why not?

One possibility, as I learned from a recent blog post by organic-chemistry professor and tutor “O-Chem Prof,” is that the term “metacognition” might simply not be sexy to our students (O-Chem Prof, 2015). He suggests that we instead refer to the concept as “sexing up your noodle.”

Although the idea of changing the name of my graduate course on the topic to “PSY 6969: Graduate Seminar in Sexing-up your Noodle” is highly tempting, I do not think that the problem is completely one of branding or advertising. Rather, regardless of what we call metacognition (or whether or not we even put a specific label on it for our students), there are other factors that we know play a crucial role in whether or not students will actually engage in self-regulated learning behaviors such as the metacognitive monitoring and control of their learning. Specifically, Pintrich and De Groot (1990; see Miltiadou & Savenye, 2003 for a review) identified three major factors that determine students’ motivation to learn that I suggest will also predict their willingness to engage in metacognition: value, expectancy, and affect.

The value component predicts that students will be more interested and motivated to learn about topics that they see value in learning. If they are struggling to learn a valued topic, they should be motivated to engage in metacognition to help improve their learning about it. A wealth of research demonstrates that students’ values and interest predict their motivation, learning, and self-regulation behaviors (e.g., Pintrich & De Groot, 1990; Pintrich et al., 1994; Wolters & Pintrich, 1998; for a review, see Schiefele, 1991). Therefore, when students do not seem to care about engaging in metacognition to improve their learning, it might not be that metacognition is not “sexy” to them; it might be that the topic itself (e.g., organic chemistry) is not sexy to them (sorry, O-Chem Prof!).

The expectancy component predicts that students will be more motivated to engage in self-regulated learning behaviors (e.g., metacognitive control) if they believe that their efforts will have positive outcomes (and won’t be motivated to do so if they believe their efforts will not have an effect). Some students (entity theorists) believe that they cannot change their intelligence through studying or practice, whereas other students (incremental theorists) believe that they can improve their intelligence (Dweck et al., 1995; see also Wolters & Pintrich, 1998). Further, entity theorists tend to rely on extrinsic motivation and to set performance-based goals, whereas incremental theorists tend to rely on intrinsic motivation and to set mastery-based goals. Compared to entity theorists, students who are incremental theorists earn higher grades and are more likely to persevere in the face of failure or underperformance (Duckworth & Eskreis-Winkler, 2013; Dweck & Leggett, 1988; Romero et al., 2014; see also Pintrich, 1999; Sungur, 2007). Fortunately, interventions have been successful at changing students to an incremental mindset, which in turn improves their learning outcomes (Aronson et al., 2002; Blackwell et al., 2007; Good et al., 2003; Hong et al., 1999).

The affective component predicts that students will be hampered by negative thoughts about learning or anxiety about exams (e.g., stereotype threat; test anxiety). Unfortunately, past research indicates that students who experience test anxiety will struggle to regulate their learning and ultimately end up performing poorly despite their efforts to study or to improve their learning (e.g., Bandura, 1986; Pintrich & De Groot, 1990; Pintrich & Schunk, 1996; Wolters & Pintrich, 1998). These students in particular might benefit from instruction on self-regulation or metacognition, as they seem to be motivated and interested to learn the topic at hand, but are too focused on their eventual test performance to study efficiently. At least some of this issue might be improved if students adopt a mastery mindset over a performance mindset, as increased learning (rather than high grades) becomes the ultimate goal. Further, adopting an incremental mindset over an entity mindset should reduce the influence of beliefs about lack of raw ability to learn a given topic.

In summary, although I acknowledge that metacognition might not be particularly “sexy” to our students, I do not think that is the reason our students often seem uninterested in engaging in metacognition to help them understand the topics in our courses or to perform better on our exams. If we want our students to care about their learning in our courses, we need to make sure that they feel the topic is important (i.e., that the topic itself is sexy), we need to provide them with effective self-regulation strategies or opportunities (e.g., elaborative interrogation, self-explanation, or interleaved practice questions; see Dunlosky et al., 2013) and help them feel confident enough to employ them, we need to work to reduce test anxiety at the individual and group/situation level, and we need to convince our students to adopt a mastery (incremental) mindset about learning. Then, perhaps, our students will find metacognition to be just as sexy as we think it is.


References

Aronson, J., Fried, C. B., & Good, C. (2002). Reducing the effects of stereotype threat on African American college students by shaping theories of intelligence. Journal of Experimental Social Psychology, 38, 113-125. doi:10.1006/jesp.2001.1491

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Blackwell, L. S., Trzesniewski, K. H., & Dweck, C. S. (2007). Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention. Child Development, 78, 246-263. doi: 10.1111/j.1467-8624.2007.00995.x

Duckworth, A., & Eskreis-Winkler, L. (2013). True Grit. Observer, 26. http://www.psychologicalscience.org/index.php/publications/observer/2013/april-13/true-grit.html

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14, 4-58. doi: 10.1177/1529100612453266

Dweck, C. S., Chiu, C. Y., & Hong, Y. Y. (1995). Implicit theories and their role in judgments and reactions: A world from two perspectives. Psychological Inquiry, 6, 267-285. doi: 10.1207/s15327965pli0604_1

Dweck, C. S., & Leggett, E. L. (1988). A social-cognitive approach to motivation and personality. Psychological Review, 95, 256-273. doi: 10.1037/0033-295X.95.2.256

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911. doi: 10.1037/0003-066X.34.10.906

Good, C., Aronson, J., & Inzlicht, M. (2003). Improving adolescents’ standardized test performance: An intervention to reduce the effect of stereotype threat. Applied Developmental Psychology, 24, 645-662. doi: 10.1016/j.appdev.2003.09.002

Hart, J. T. (1967). Memory and the memory-monitoring process. Journal of Verbal Learning and Verbal Behavior, 6, 685-691. doi: 10.1016/S0022-5371(67)80072-0

Hong, Y., Chiu, C., Dweck, C. S., Lin, D., & Wan, W. (1999). Implicit theories, attributions, and coping: A meaning system approach. Journal of Personality and Social Psychology, 77, 588-599. doi: 10.1037/0022-3514.77.3.588

Miltiadou, M., & Savenye, W. C. (2003). Applying social cognitive constructs of motivation to enhance student success in online distance education. AACE Journal, 11, 78-95. http://www.editlib.org/p/17795/

Nelson, T. O., Dunlosky, J., White, D. M., Steinberg, J., Townes, B. D., & Anderson, D. (1990). Cognition and metacognition at extreme altitudes on Mount Everest. Journal of Experimental Psychology: General, 119, 367-374.

O-Chem Prof. (2015, Jan 7). Our Problem with Metacognition is Not Enough Sex. [Web log]. Retrieved from http://phd-organic-chemistry-tutor.com/our-problem-with-metacognition-not-enough-sex/

Pintrich, P. R. (1999). The role of motivation in promoting and sustaining self-regulated learning. International Journal of Educational Research, 31, 459-470. doi: 10.1016/S0883-0355(99)00015-4

Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33-40. doi: 10.1037/0022-0663.82.1.33

Pintrich, P. R., Roeser, R., & De Groot, E. V. (1994). Classroom and individual differences in early adolescents’ motivation and self-regulated learning. Journal of Early Adolescence, 14, 139-161. doi: 10.1177/027243169401400204

Pintrich, P. R., & Schunk D. H. (1996). Motivation in education: Theory, research, and applications. Englewood Cliffs, NJ: Merrill/Prentice Hall.

Romero, C., Master, A., Paunesku, D., Dweck, C. S., & Gross, J. J. (2014). Academic and emotional functioning in middle school: The role of implicit theories. Emotion, 14, 227-234. doi: 10.1037/a0035490

Schiefele, U. (1991). Interest, learning, and motivation. Educational Psychologist, 26, 299-323. doi: 10.1080/00461520.1991.9653136

Sungur, S. (2007). Modeling the relationships among students’ motivational beliefs, metacognitive strategy use, and effort regulation. Scandinavian Journal of Educational Research, 51, 315-326. doi: 10.1080/00313830701356166

Wolters, C. A., & Pintrich, P. R. (1998). Contextual differences in student motivation and self-regulated learning in mathematics, English, and social studies classrooms. Instructional Science, 26, 27-47. doi: 10.1023/A:1003035929216


Linking Mindset to Metacognition

By Charity Peak, Ph.D. (U. S. Air Force Academy)

As part of our institution’s faculty development program, we are currently reading Carol Dweck’s Mindset: The New Psychology of Success. Even though the title and cover allude to a pop-psychology book, Dweck’s done a fabulous job of pulling together decades of her scholarly research on mindsets into a layperson’s text.

After announcing the book as our faculty read for the semester, one instructor lamented that she wished we had selected a book on the topic of metacognition. We have been exploring metacognition as a theme this year through our SoTL Circles and our participation in the multi-institutional Metacognitive Instruction Project. My gut reaction was, “But Mindset is about metacognition!” Knowing your own mindset requires significant metacognition about your own thinking and attitudes toward learning. Better yet, recognizing mindsets in your students helps you identify and support the development of mindsets that will serve them well in school and life.

If you haven’t read the book, below are some very basic distinctions between the fixed and growth mindsets that Dweck (2006) discovered in her research and outlines eloquently in her book:

Fixed Mindset: Intelligence is static. This leads to a desire to look smart and therefore a tendency to:

  • avoid challenges
  • give up easily due to obstacles
  • see effort as fruitless
  • ignore useful feedback
  • be threatened by others’ success

Growth Mindset: Intelligence can be developed. This leads to a desire to learn and therefore a tendency to:

  • embrace challenges
  • persist despite obstacles
  • see effort as a path to mastery
  • learn from criticism
  • be inspired by others’ success

 

What does this mean for metacognition? Dweck points out that people go through life with fixed mindsets without even realizing they are limiting their own potential. For example, students will claim they are “not good at art,” “can’t do math,” “don’t have a science brain.” These mindsets restrict their ability to see themselves as successful in these areas. In fact, even when instructors attempt to refute these statements, the mindsets are so ingrained that they are extremely difficult to overcome.

What’s an instructor to do? Help students become metacognitively aware of their self-limiting beliefs! Dweck offers a very simple online assessment on her website that takes about 5 minutes to complete. Instructors can easily suggest that students take the assessment as a pre-emptive way to begin a course, particularly in subjects where these types of fallacious self-limiting attitudes abound. The assessment results would help instructors identify who might need the most assistance in overcoming mental barriers throughout the course. Instructors can also make a strong statement to the class early in the semester that students should fight the urge to succumb to these limiting beliefs about a particular subject area (such as art or math). As Dweck has shown through her research, people can actually become artistic if taught the skills through learnable components (pp. 68-69). Previously conceived notions of talent related to a wide variety of areas have been refuted time and again through research. Instead, talent is likely a cover for hard work, perseverance, and overcoming obstacles. But if we don’t share those insights with students, they will never become metacognitively aware of their own self-limiting – and frankly mythical – belief systems.

Inspired but wish you knew how to apply it to your own classes? A mere Google search on metacognition and mindset will yield a wealth of resources, but I particularly appreciate Frank Noschese’s blog on creating a metacognition curriculum. He started his physics course by having students take a very simple survey regarding their attitudes toward science. He then shared a short video segment called “Grow Your Brain” from the episode Changing Your Mind (jump to 13:20) in the Scientific American Frontiers series from PBS. Together, he and his students began a journey of moving toward a growth mindset in science. Through an intentional metacognition lesson, he sent a very clear message to his students that “I can’t” would not be tolerated in his course. He set them up for success by demonstrating clearly that everyone can learn physics if they put their minds (or mindsets) to it.

Metacognition about mindsets offers instructors an opportunity to give students the gift of a lifetime – the belief that they can overcome any learning obstacles if they just persevere, that their intelligence is not fixed but actually malleable, that learning is sometimes hard but not impossible! When I reflect on why I am so deeply dedicated to education as a profession, it is my commitment to helping students see themselves using a growth mindset. Helping them to change their mindsets can change their future, and metacognition is the first step on that journey!

 

References:

“Changing the Mind.” (11/21/00). Scientific American Frontiers. Boston: Ched-Angier Production Co. Retrieved from http://chedd-angier.com/frontiers/season11.html

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Ballantine Books.

Noschese, F. (September 10, 2012). Metacognition curriculum (Lesson 1 of ?). Retrieved from https://fnoschese.wordpress.com/2012/09/10/metacognition-curriculum-lesson-1-of/

 


Self-Assessment, It’s A Good Thing To Do

by Stephen Fleisher, CSU Channel Islands

McMillan and Hearn (2008) stated persuasively that:

In the current era of standards-based education, student self-assessment stands alone in its promise of improved student motivation and engagement, and learning. Correctly implemented, student self-assessment can promote intrinsic motivation, internally controlled effort, a mastery goal orientation, and more meaningful learning (p. 40).

In her study of three meta-analyses of medical students’ self-assessments, Blanch-Hartigan (2011) reported that self-assessments proved fairly accurate and improved in students’ later years of study. She noted that if we want to increase our understanding of self-assessment and facilitate its improvement, we need to attend to a few matters. To understand the causes of over- and underestimation, we need to examine the direction of error in our analyses (using paired comparisons) along with our correlational studies. We also need to examine key moderators affecting self-assessment accuracy, for instance “how students are being inaccurate and who is inaccurate” (p. 8). Further, the wording and alignment of our self-assessment questions in relation to the criteria and nature of our performance questions are essential to accurately understanding these relationships.

When we establish strong and clear relationships between our self-assessment and performance questions for our students, we facilitate their use of metacognitive monitoring (self-assessment, and attunement to progress and achievement), metacognitive knowledge (understanding how their learning works and how to improve it), and metacognitive control (changing efforts, strategies or actions when required). As instructors, we can then also provide guidance when performance problems occur, reflecting on students’ applications and abilities with their metacognitive monitoring, knowledge, and control.

Self-Assessment and Self-Regulated Learning

For Pintrich (2000), self-regulating learners set goals and activate prior cognitive and metacognitive knowledge. These goals then serve as criteria against which students can self-assess, self-monitor, and self-adjust their learning and learning efforts. In monitoring their learning process, skillful learners make judgments about how well they are learning the material, and eventually they become better able to predict future performance. These students can attune to discrepancies between their goals and their progress, and can adjust their learning strategies for memory, problem solving, and reasoning. Additionally, skillful learners tend to attribute low performance to low effort or ineffective use of learning strategies, whereas less skillful learners tend to attribute it to an over-generalized lack of ability or to extrinsic factors such as teacher ability or unfair exams. These more adaptive attributions matter because they are associated with deeper rather than surface learning, positive affective experiences, improved self-efficacy, and greater persistence.

Regarding motivational and affective experiences, self-regulating learners adjust their motivational beliefs in relation to their values and interests. Engagement improves when students are interested in and value the course material. Importantly, student motivational beliefs are set in motion early in the learning process, and it is here that instructional skills are most valuable. Regarding self-regulation of behavior, skillful learners see themselves as in charge of their time, tasks, and attention. They know their choices, they self-initiate their actions and efforts, and they know how and when to delay gratification. As well, these learners are inclined to choose challenging tasks rather than avoid them, and they know how to persist (Pintrich, 2000).

McMillan and Hearn (2008) summarize the role and importance of self-assessment:

When students set goals that aid their improved understanding, and then identify criteria, self-evaluate their progress toward learning, reflect on their learning, and generate strategies for more learning, they will show improved performance with meaningful motivation. Surely, those steps will accomplish two important goals—improved student self-efficacy and confidence to learn—as well as high scores on accountability tests (p. 48). 

As a teacher, I see one of my objectives as discovering ways to encourage the development of these intellectual tools and methods of thinking in my own students. For example, in one of my most successful courses, a colleague and I worked at great length to create a full set of specific course learning outcomes (several per chapter, plus competencies we cared about personally, for instance, life-long learning). These course outcomes were aligned with the published student learning outcomes for the course. Lastly, homework, lectures, class activities, individual and group assignments, and formative and summative assessments were created and aligned with them. By the end of this course, students not only gain knowledge about psychology but also are often pleasantly surprised to have learned about their own learning.

 

References

Blanch-Hartigan, D. (2011). Medical students’ self-assessment of performance: Results from three meta-analyses. Patient Education and Counseling, 84, 3-9.

McMillan, J. H., & Hearn, J. (2008). Student self-assessment: The key to stronger student motivation and higher achievement. Educational Horizons, 87(1), 40-49. http://files.eric.ed.gov/fulltext/EJ815370.pdf

Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation. San Diego, CA: Academic Press.