Scratch and Win or Scratch and Lose? Immediate Feedback Assessment Technique

By Aaron S. Richmond, Ph.D., Metropolitan State University of Denver

When prepping my courses for this spring semester, I was thinking about how I often struggle with providing quick and easy feedback on quiz and exam performance to my students. I expressed this to my colleague, Dr. Anna Ropp (@AnnaRopp), and she quickly suggested that I check out the Immediate Feedback Assessment Technique (IF-AT) by Epstein Educational Enterprises. When she showed me the IF-ATs, I was intrigued and thought I might as well give it a try—so I ordered some. IF-AT provides instantaneous performance feedback to learners by allowing students to scratch off what they believe to be the correct answer on a multiple-choice exam, quiz, or test. See Figures 1a and 1b for student examples of a completed IF-AT. Students can find out whether their chosen answer is correct or incorrect simply by scratching it off (see question 1 in Figure 1a), and they can scratch more than one answer to find the correct one (see question 2 in Figure 1a). You may also use it as a way of providing partial credit for sequenced attempts (e.g., scratch one choice for full credit if correct, then scratch a second choice, and maybe a third, for decreasing amounts of partial credit); see question 6 in Figure 1b for an example. Epstein and colleagues suggest that IF-AT not only assesses student learning but also teaches at the same time. However, it occurred to me that this is not only an assessment and teaching tool; it is also a great opportunity to increase metacognition.

Figure 1. (a) Completed and unscored 10-question IF-AT; (b) completed 10-question IF-AT, student and teacher scored.
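
The partial-credit scheme is easy to automate when scoring a stack of IF-AT forms. Here is a minimal sketch of one way to do it in Python; the specific point values (1, 0.5, 0.25) are my own illustration, since the IF-AT leaves the credit schedule up to the instructor.

```python
def ifat_item_score(scratches_used, credit_schedule=(1.0, 0.5, 0.25)):
    """Score one IF-AT item from the number of scratches the student needed.

    scratches_used -- 1 if the first scratch uncovered the correct answer,
    2 if the second did, and so on. The credit schedule is illustrative:
    full credit for the first scratch, decreasing partial credit afterward.
    """
    if scratches_used < 1:
        raise ValueError("A student must scratch at least once.")
    index = scratches_used - 1
    return credit_schedule[index] if index < len(credit_schedule) else 0.0


# Example: question 6 in Figure 1b required two scratches, so it would earn
# half credit under this (hypothetical) schedule.
print(ifat_item_score(1), ifat_item_score(2), ifat_item_score(4))  # 1.0 0.5 0.0
```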

How to Use IF-AT
Epstein and colleagues suggest that IF-AT is fair, fast, active, fun, and respectful, and that it builds knowledge. The IF-AT scratch assessments come in 10-, 25-, or 50-item forms with 5 different orders of correct answers. The Epsteins suggest that IF-AT can be used in many ways. For example, it can be used for chapter tests; individual study (at home or in class); quizzes; pyramidal-sequential-process quizzing; exams; team-based and cooperative learning; study-buddy learning; and, most importantly, as a feedback mechanism (see http://www.epsteineducation.com/home/about/uses.aspx for further explanation). There are several metacognitive functions of the IF-AT (although the Epsteins do not couch their claims in this term). First, the Epsteins argue that you can arrange your IF-AT so that the first question (and the immediate feedback of its correct answer) can be used in a pyramidal sequential process. That is, the correct answer to the first question is needed to answer subsequent questions because it is foundational knowledge for the remaining questions. This sequential process allows the instructor and student to pinpoint where the student's knowledge of the integrated content broke down. This is implicit modeling of a student's metacognitive knowledge that should be made explicit: by explaining to your students how the exam is set up, you enable them to use cues and knowledge from previous questions and answers on the test to assist their understanding of subsequent questions and answers. This is a key step in the learning process. Second, the IF-AT may also be used in a team-based way (i.e., distributed cognition) by forming groups, problem solving, and having the team discover the correct answer. IF-AT may also be used in dyads to allow students to discuss correct and incorrect answers: students read a question, discuss the correct and incorrect answers, then cooperatively make a decision and receive immediate feedback. Third, IF-AT may be used to increase cognitive and metacognitive strategies. That is, by providing feedback immediately, students (if you explicitly instruct them to do so) may adjust their cognitive and metacognitive strategies for future study. For example, if a student used flashcards to study and did poorly, they may want to adjust how they construct and use flashcards (e.g., distributed practice). Finally, and most importantly, IF-AT may improve students' metacognitive regulation via calibration (i.e., the accuracy of knowing when you do and don't know the answer to a question). That is, by providing immediate feedback, students may become more accurate in their judgments of knowing, or even their feelings of knowing, based on the feedback.

Is it Scratch to Win or Scratch to Lose?
As described, by using the IF-AT, students get immediate feedback on whether they answered a question correctly or incorrectly and what the appropriate answer is. From a metacognitive perspective, this is outstanding. Students can calibrate (i.e., adjust their estimations and confidence in knowing an answer) in real time, engage in distributed cognition, get feedback on their choice of cognitive and metacognitive strategies, and increase their cognitive monitoring and regulatory control. These are all WIN, WIN, WIN byproducts. HOWEVER, is there a downside to instantaneously knowing you are wrong? That is, is there an emotional-regulation cost or reactivity to IF-AT? As I have been implementing the IF-AT, I have noticed (anecdotally) that about 1 in 10 students react negatively and that it seems to increase their test anxiety. Presumably, the other 90% of the students love it and appreciate the feedback. Yet, what about the 10%? Does IF-AT stunt or hinder their performance? Again, my esteemed colleague Dr. Anna Ropp and I engaged in some scholarly discourse to answer this question, and Anna suggested that I make the first 3-5 questions on each IF-AT "soft-ball" questions. That is, questions that 75% of students will answer correctly, so that students' fears and anxiety are remediated to some degree. Another alternative is to provide students with a copy of the test or exam and let them rank order or weight their answers (see Chris Was' IwM blog post, 2014, on how to do this). Despite these two sound suggestions, there still may be an affective reaction that could be detrimental to student learning. To date, there has been no research investigating this issue, and there are only a handful of well-designed studies of IF-AT (e.g., Brosvic et al., 2006; Dihoff et al., 2005; Epstein et al., 2002, 2003; Slepkov & Shiell, 2014). As such, more well-constructed and executed empirical research is needed to investigate this issue (Hint: all you scholars looking for a SoTL project…here's your sign).

Concluding Thoughts and Questions for You
After investigating, reflecting on, and using IF-AT in my classroom, I think that it is a valuable addition to your quiver of assessments for increasing metacognition—but of course it is not an educational panacea. Furthermore, in my investigation of this assessment technique, more questions (as usual) popped up about the use of IF-AT. So, I will leave you with a charge and a call to help me answer the questions below:

  1. Are there similar assessments that provide immediate feedback that you use? If so, are they less expensive or free?
  2. If you are using IF-AT, what is your favorite way to use it?
  3. Do you think IF-AT could cause substantial test anxiety? If so, to whom and to what level within your classes?
  4. How could IF-AT be used as a tool for calibration more efficiently? Or, in what other ways do you think IF-AT can be used to increase metacognition?
  5. I think there are enormous opportunities for SoTL on IF-AT (e.g., the effects on calibration, distributed cognition, cognitive monitoring, conditional knowledge of strategy use, etc.), which means we all have some more work to do. 🙂

References
Brosvic, G. M., Epstein, M. L., Dihoff, R. E., & Cook, M. J. (2006). Acquisition and retention of Esperanto: The case for error correction and immediate feedback. The Psychological Record, 56(2), 205.

Dihoff, R. E., Brosvic, G. M., Epstein, M. L., & Cook, M. J. (2005). Adjunctive role for immediate feedback in the acquisition and retention of mathematical fact series by elementary school students classified with mild mental retardation. The Psychological Record, 55(1), 39.

Epstein, M. L., Brosvic, G. M., Costner, K. L., Dihoff, R. E., & Lazarus, A. D. (2003). Effectiveness of feedback during the testing of preschool children, elementary school children, and adolescents with developmental delays. The Psychological Record, 53(2), 177.

Epstein, M. L., Lazarus, A. D., Calvano, T. B., & Matthews, K. A. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. The Psychological Record, 52(2), 187.

Slepkov, A. D., & Shiell, R. C. (2014). Comparison of integrated testlet and constructed-response question formats. Physical Review Special Topics - Physics Education Research, 10(2), 020120.

Was, C. (2014, August). Testing improves knowledge monitoring. Improve with Metacognition. Retrieved from https://www.improvewithmetacognition.com/testing-improves-knowledge-monitoring/


Collateral Metacognitive Damage

Why Seeing Others as “The Little Engines that Could” beats Seeing Them as “The Little Engines Who Were Unskilled and Unaware of It”

by Ed Nuhfer, Ph.D., Professor of Geology, Director of Faculty Development and Director of Educational Assessment, California State Universities (retired)

What is Self-Assessment?

At its root, self-assessment registers as an affective feeling of confidence in one’s ability to perform in the present. We can become consciously mindful of that feeling and begin to distinguish the feeling of being informed by expertise from the feeling of being uninformed. The feeling of ability to rise in the present to a challenge is generally captured by the phrase “I think I can….” Studies indicate that we can improve our metacognitive self-assessment skill with practice.

Quantifying Self-Assessment Skill

Measuring self-assessment accuracy lies in quantifying the difference between felt competence to perform and a measure of the actual competence demonstrated. However, what at first glance appears to be a simple subtraction has proven to be a nightmarish challenge to a researcher's efforts to present data clearly and interpret it accurately. I speak of this "nightmare" with personal familiarity. Some colleagues and I recently summarized different kinds of self-assessments, self-assessment's relationship to self-efficacy, the importance of self-assessment to achievement, and the complexity of interpreting self-assessment measurements (Nuhfer and others, 2016, 2017).

Can we or can’t we do it?

The children's story The Little Engine That Could is a well-known tale of the power of positive self-assessment. The throbbing "I think I can, I think I can…" and the success that follows offer an uplifting view of humanity's ability to succeed. That view is close to the traits of the "Growth Mindset" of Stanford psychologist Carol Dweck (2016). It is certainly more uplifting than an alternative title, The Little Engine That Was Unskilled and Unaware of It, which predicts a disappointing ending to "I think I can, I think I can…." The dismal idea that our possible competence is capped by what nature conferred at birth is a close analog to the traits of Dweck's "Fixed Mindset," which her research revealed as toxic to intellectual development.

As writers of several Improve with Metacognition blog entries have noted, “Unskilled and Unaware of It” are key words from the title of a seminal research paper (Kruger & Dunning, 1999) that offered one of the earliest credible attempts to quantify the accuracy of metacognitive self-assessment. That paper noted that some people were extremely unskilled and unaware of it. Less than a decade later, psychologists were claiming: “People are typically overly optimistic when evaluating the quality of their performance on social and intellectual tasks” (Ehrlinger and others, 2008). Today, laypersons cite the “Dunning-Kruger Effect” and often use it to label any individual or group that they dislike as “unskilled and unaware of it.” We saw the label being applied wholesale in the 2016 presidential election, not just to the candidates but also to the candidates’ supporters.

Self-assessment and vulnerability

Because self-assessment is about taking stock of ourselves rather than judging others, using the Dunning-Kruger Effect to label others is already on shaky ground. But what are the odds that those we are tempted to label as "unskilled and unaware of it" actually merit the label? While the consensus in the psychology literature seems to indicate that the odds are good, our investigation of the numeracy underlying that consensus indicates otherwise (Nuhfer and others, 2017).

We think that nearly two decades of studies concluding that people are "…typically overly optimistic…" replicated one another because they all relied on variants of a unique graphic introduced in the seminal 1999 paper. These graphs generate, from both actual data and random numbers, artifact patterns of the kind expected from a Dunning-Kruger Effect, and the artifacts are easily mistaken for expressions of actual human self-assessment traits.
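
The flavor of that artifact is easy to see for yourself. Below is a minimal sketch, in Python, of the kind of random-number demonstration the authors describe (my own illustration, not their code or data): feed purely random, uncorrelated "self-assessed" and "demonstrated" scores into the conventional quartile comparison, and the familiar pattern appears anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Purely random, uncorrelated scores on a 0-100 scale: there is no
# self-assessment "signal" in these data at all.
self_assessed = rng.uniform(0, 100, n)
demonstrated = rng.uniform(0, 100, n)

# The conventional Kruger-Dunning graphic: group people into quartiles by
# demonstrated (actual) score, then compare mean self-assessed score with
# mean actual score within each quartile.
order = np.argsort(demonstrated)
for i, quartile in enumerate(np.array_split(order, 4), start=1):
    print(f"Quartile {i}: mean actual = {demonstrated[quartile].mean():5.1f}, "
          f"mean self-assessed = {self_assessed[quartile].mean():5.1f}")

# Typical output: every quartile's mean self-assessment hovers near 50, while
# mean actual scores run roughly 12, 37, 62, 88. Read through the conventional
# graphic, the bottom quartile "overestimates" and the top quartile
# "underestimates" -- a Dunning-Kruger-like pattern produced by noise alone.
```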

After gaining some understanding of the hazards presented by the devilish nature of self-assessment measures, we found in our quantitative results that people, in general, have a surprisingly good awareness of their capabilities (Nuhfer and others, 2016, 2017). About half of our studied populace of over a thousand students and faculty accurately self-assessed their performance within ±10 percentage points (ppts), and about two-thirds proved accurate within ±15 ppts. About 25% might qualify as having inadequate self-assessment skills (errors greater than ±20 ppts), but only about 5% of our academic populace might merit the label "unskilled and unaware of it" (overestimating their abilities by 30 ppts or more). The odds seem high against a randomly selected person being seriously "unskilled and unaware of it" and are very high against this label being validly applicable to a group.
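
For readers who want to run this kind of accounting on their own class data, here is a minimal sketch using the error bands named above. The function name and the example scores are hypothetical, chosen only to illustrate the calculation of self-assessed minus demonstrated competence in percentage points.

```python
import numpy as np

def calibration_summary(self_assessed, demonstrated):
    """Summarize self-assessment accuracy as the share of people falling in
    each band of error (self-assessed minus demonstrated, in percentage points)."""
    error = np.asarray(self_assessed, float) - np.asarray(demonstrated, float)
    return {
        "within +/-10 ppts": float(np.mean(np.abs(error) <= 10)),
        "within +/-15 ppts": float(np.mean(np.abs(error) <= 15)),
        "inadequate (beyond +/-20 ppts)": float(np.mean(np.abs(error) > 20)),
        "overestimated by 30+ ppts": float(np.mean(error >= 30)),
    }

# Hypothetical percentage scores for four people (self-assessed, then actual).
print(calibration_summary([72, 55, 90, 40], [65, 70, 88, 85]))
```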

Others often rise to the expectations we have of them.

Consider the collective effects of people accepting beliefs that they themselves and others are "unskilled and unaware of it." This negative perspective can predispose an organization to accept, as a given, that people are less capable than they really are. Further, those of us with power, such as instructors over students or tenured peers over untenured instructors, should become aware of a term called "gaslighting." In gaslighting, our negatively biased actions or comments may take away the self-confidence of others who accept us as credible, trustworthy, and important to their lives. This type of influence can lead to lower performance, thus seeming to substantiate the initial negative perspective. When gaslighting is deliberate, it constitutes a form of emotional abuse.

Aren’t you CURIOUS yet?

Wondering about your self-assessment skills and how they compare with those of novices and experts? Give yourself about 45 minutes and try the self-assessment instrument used in our research at <http://tinyurl.com/metacogselfassess>. You will receive a confidential report if you furnish your email at the end of completing that self-assessment.

Several of us, including our blog founder Lauren Scharff, will be presenting the findings and implications of our recent numeracy studies in August, at the Annual Meeting of the American Psychological Association in Washington DC. We hope some of our fellow bloggers will be able to join us there.

References

Dweck, C. (2016). Mindset: The New Psychology of Success. New York: Ballantine.

Ehrlinger, J., Johnson, K., Banner, M., Dunning, D., and Kruger, J. (2008). Why the unskilled are unaware: Further explorations of absent self-insight among the incompetent. Organizational Behavior and Human Decision Processes 105: 98–121. http://dx.doi.org/10.1016/j.obhdp.2007.05.002

Kruger, J. and Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology 77: 1121–1134. http://dx.doi.org/10.1037/0022-3514.77.6.1121

Nuhfer, E. B., Cogan, C., Fleisher, S., Gaze, E., and Wirth, K. (2016). Random number simulations reveal how random noise affects the measurements and graphical portrayals of self-assessed competency. Numeracy 9(1): Article 4. http://dx.doi.org/10.5038/1936-4660.9.1.4

Nuhfer, E. B., Cogan, C., Fleisher, S., Wirth, K., and Gaze, E. (2017). How random noise and a graphical convention subverted behavioral scientists' explanations of self-assessment data: Numeracy underlies better alternatives. Numeracy 10(1): Article 4. http://dx.doi.org/10.5038/1936-4660.10.1.4


Promoting academic rigor with metacognition

By John Draeger (SUNY Buffalo State)

A few weeks ago, I visited Lenoir-Rhyne University to talk about promoting academic rigor, and I was reminded of the importance of metacognition. College faculty often worry that students are arriving in their courses increasingly underprepared, and they often find it difficult to maintain the appropriate level of academic rigor. Faced with this challenge, some colleagues and I developed a model for promoting academic rigor. According to this model, promoting academic rigor requires actively engaging students in meaningful content with higher-order thinking at the appropriate level of expectation for a given context (Draeger, del Prado Hill, Hunter, and Mahler, 2013). The model (see Figure 1) can be useful insofar as it can prompt reflection and frame conversation. In particular, faculty members can explore how to improve student engagement, how to uncover a course's most meaningful elements, how to determine the forms of higher-order thinking most appropriate for a course, and how to modulate expectations for different student groups (e.g., majors, non-majors, general education, honors). There is nothing particularly magical about this model. It is one of many ways that college instructors might become more intentional about various aspects of course design, instruction, and assessment. However, I argue that promoting academic rigor in these ways requires metacognition.

In a previous post, Lauren Scharff and I argued that metacognition can be used to select the appropriate teaching and learning strategy for a given context (Draeger & Scharff, 2016). More specifically, metacognition can help instructors “check in” with students and make meaningful “in the moment” adjustments. Similarly, engaging students in each of the components of the rigor model can take effort, especially because students often need explicit redirection. If instructors are monitoring student learning and using that awareness to make intentional adjustments, then they are more likely to encourage students to actively engage meaningful content with higher-order thinking at the appropriate level of expectation.

Consider, for example, a course in fashion merchandising. Students are often drawn to such a course because they like to shop for clothes. This may help with enrollment, but the goal of the course is to give students insight into industry thinking. In particular, students need to shift from a consumer mentality to the ability to apply consumer behavior theory in ways that sell merchandise. What would it mean to teach such a course with rigor? The model of academic rigor sketched above recognizes that each of the components can occur independently and not lead to academic rigor. For example, students can be actively engaged in content that is less than meaningful to the course (e.g., regaling others with shopping stories), and students can be learning meaningful content without being actively engaged (e.g., rote learning of consumer behavior theory). Likewise, students can be actively and meaningfully engaged with or without higher-order thinking. The goal, however, is to have multiple components of the model occur together, i.e., to actively engage students in meaningful content with higher-order thinking at the appropriate level of expectation. In the case of fashion merchandising, a professor might send students to the mall to have them use consumer behavior theory to justify why a particular rack of clothes occupies a particular place on the shop floor. If they can complete this assignment, then they are actively engaged (at the mall) in meaningful content (consumer behavior theory) with higher-order thinking (applying theory to a rack of clothes). Metacognition requires that instructors monitor student learning and use that awareness to make intentional adjustments. If a fashion merchandising instructor finds students lapsing into stories about their latest shopping adventures, then the instructor might redirect the discussion toward higher-order thinking with meaningful content by asking the students to use consumer behavior theory to question their assumptions about their shopping behaviors.

Or consider a course in introductory astronomy (Brogt & Draeger, 2015). Students often choose such a course to satisfy their general education requirements because they think it has something to do with star gazing and because it seems preferable to other courses, like physics. Much to their surprise, however, students quickly learn that astronomy is physics by another name. Astronomy instructors struggle because students in introductory astronomy often lack the necessary background in math and science. The trick, therefore, is to make the course rigorous when students lack the usual tools. One solution could be to use electromagnetic radiation (a.k.a. light) as the touchstone concept for the course. After all, light is the most salient evidence we have for occurrences far away. As such, it can figure into conversations about the scientific method, including scientific skepticism about various astronomical findings. Moreover, even if students cannot do precise calculations, it might be enough that they are able to estimate orders of magnitude for quantities involving distant stars. Astronomy instructors have lots of great tools for actively engaging students in order-of-magnitude guesstimates. These can be used to scaffold students into understanding how answers to order-of-magnitude estimates involving light can provide evidence about distant objects. If so, then students are actively engaging meaningful content with higher-order thinking at a level appropriate to an introductory course satisfying a general education requirement. Again, metacognition can help instructors make intentional adjustments based on "in the moment" observations about student performance. If, for example, an instructor finds that students "check out" once mathematical symbols go up on the board, the instructor can redouble efforts to highlight the importance of understanding order of magnitude and can make explicit the connection between previous guesstimate exercises and the symbols on the board.

If tools for reflection (e.g., a model of academic rigor) help instructors map out the most salient aspects of a course, then metacognition is the mechanism by which instructors navigate that map. If so, then I suggest that promoting academic rigor requires metacognition. It is important to understand how we can help students actively engage in meaningful course content with higher-order thinking at the appropriate level of expectation for a given course. However, consistently shepherding students to the intersection of those elements requires metacognitive awareness and self-regulation on the part of the instructor.

References

Brogt, E. & Draeger, J. (2015). “Academic Rigor in General Education, Introductory Astronomy Courses for Nonscience Majors.” The Journal of General Education, 64 (1), 14-29.

Draeger, J. (2015). “Exploring the relationship between awareness, self-regulation, and metacognition.”  Retrieved from https://www.improvewithmetacognition.com/exploring-the-relationship-between-awareness-self-regulation-and-metacognition/

Draeger, J., del Prado Hill, P., Hunter, L. R., Mahler, R. (2013). “The Anatomy of Academic Rigor: One Institutional Journey.” Innovative Higher Education 38 (4), 267-279.

Draeger, J. & Scharff, L. (2016). “Using Metacognition to select and apply appropriate teaching strategies.” Retrieved from https://www.improvewithmetacognition.com/using-metacognition-select-apply-appropriate-teaching-strategies/


Teacher, Know Thyself (Translation: Use Student Evaluations of Teaching!)

by Guy Boysen, Ph.D., McKendree University

I’ll call him Donald. I am willing to bet that you know a Donald too. Students fled from Donald’s classes in droves. His was the pedagogy of narcissism – “I am the Lord thy teacher and thou shall have no classroom activities unfocused on me!” Donald’s grading system was so subjective, vindictive, and Byzantine as to be barely defensible. Enrollment in his classes always followed the same pattern: intro course full of students who did not know any better at the start of the semester and then decimation by the end of the semester; advanced seminars empty except for a few adoring students with Stockholm syndrome. Asked about his student evaluations, Donald would say “My seminar evals are good, but I don’t even look at my intro evals anymore – they don’t know about teaching.”

Donald calls to mind the classic metacognitive phenomenon of being unskilled and unaware of it (Kruger & Dunning, 1999; Lodge, 2016; Schumacher, Akers, & Taraban, 2016). This is something teachers see in students all of the time; bad students overestimate their abilities and therefore don't work to improve. As illustrated by Donald, this phenomenon applies to teachers as well.

There are a number of wide-ranging characteristics that make someone a model teacher (Richmond, Boysen, & Gurung, 2016), but the use of student evaluations to improve teaching is one that has a strong metacognitive component. Student evaluations provide teachers with feedback so that they can engage in metacognitive analysis of their pedagogical skills and practices. Based on that analysis, goals for improvement can be set and pursued.

Recommendations for Using Student Evals

How should teachers use student evaluations to develop metacognitive awareness of their pedagogical strengths and weaknesses? Several suggestions can be found in An Evidence-Based Guide for College and University Teaching: Developing the Model Teacher (Richmond, Boysen, & Gurung, 2016).

Set goals for improvement and follow through with them.

Have you ever gotten on a scale and not liked the number staring back at you? Did you just get on and off the scale repeatedly expecting the number to change? No? Well, that trick doesn’t work in teaching either. Collecting student evaluations without using them to set and implement goals for improvement is like a diet that only consists of repeated weigh-ins – the numbers will not change without the application of direct effort. Use your student evaluations, preferably in collaboration with a mentor or teaching expert, to set manageable goals for change. 

Select the correct assessment tool.    

Wouldn’t it be great if we could select our own teaching evaluations? Mine might look something like this.

But wait! You can select your own teaching evaluations. Official, summative evaluations may be set at the institutional level, but teachers can implement any survey they want for professional development purposes. Choose wisely, however.

If you are a numbers person, select a well-researched measure that provides feedback across several dimensions of teaching that are relevant to you. Perhaps the best known of these is the Student Evaluation of Educational Quality (SEEQ), which measures teaching quality across nine different factors (Marsh, 1983). The advantages to this type of measure are that the results can be scientifically trusted and are detailed enough to inform goals for improvement.

Not a numbers person? You might ask for written comments from students. Whatever you want to know about your teaching, you can simply ask – believe me, students have opinions! Although analyzing student comments can be laborious (Lewis, 2001), they can offer unequaled richness and specificity. Beware of asking for general feedback, however. General questions tend to elicit general feedback (Hoon, Oliver, Szpakowska, & Newton, 2015). Rather, provide specific prompts such as the following.

  • What should I/we STOP doing in this class?
  • What should I/we START doing in this class?
  • What should I/we CONTINUE doing in this class? 

Don’t wait until the end of the semester.

Imagine if Donald could get feedback from the students who drop his classes. Perhaps he could make pedagogical changes to reach those students before they flee. Guess what, he can! Formative assessment is the key.

Teachers often allow official, end-of-semester student evaluations to serve as their only feedback from students. The problem with this approach is that the feedback comes too late to make midsemester course corrections. This is analogous to the metacognitive importance of providing students with early feedback on their performance. You wouldn’t expect students to succeed in your course if a final exam was the only grade, would you? Well, don’t put yourself in the same position. Model teachers ask for student feedback both at the end of the semester (i.e., summative) and early enough in the semester to make immediate improvements (i.e., formative).

Make changes large and small.

Student evaluations can be used to inform revisions to all levels of pedagogy. Imagine that students report being absolutely bewildered by a concept in your class. Potential responses to this feedback could be to change (a) the time spent on the concept in class, (b) scaffolding of knowledge needed to understand the concept, (c) the availability of study aids related to the concept, (d) the basic instructional technique used to teach the concept, or (e) the decision to even include the concept in the course. For model teachers, student feedback can inform changes large and small.

Conclusion

Every single semester students comment on my evaluations that they want the tests to be multiple choice rather than short answer/essay, and every semester I tell students that I will not be changing the test format because students do not study as hard for multiple-choice tests. Thus, my point is not that model teachers incorporate all student feedback into their courses. However, failure to respond should be a sound and intentional pedagogical choice rather than a Donald-like failure of metacognition – don’t be caught unskilled and unaware.

References

Hoon, A., Oliver, E., Szpakowska, K., & Newton, P. (2015). Use of the Stop, Start, Continue method is associated with the production of constructive qualitative feedback by students in higher education. Assessment & Evaluation in Higher Education, 40, 755-767. doi:10.1080/02602938.2014.956282

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77, 1121-1134.

Lewis, K. G. (2001). Making sense of student written comments. New Directions for Teaching and Learning, 87, 25-32.

Lodge, J. (2016). Hypercorrection: Overcoming overconfidence with metacognition. Retrieved from https://www.improvewithmetacognition.com/hypercorrection-overcoming-overconfidence-metacognition/

Marsh, H. W. (1983). Multidimensional ratings of teaching effectiveness by students from different academic settings and their relation to student/course/instructor characteristics. Journal of Educational Psychology, 75, 150-166. doi:10.1037/0022-0663.75.1.150

Richmond, A. S., Boysen, G. A., & Gurung, R. A. R. (2016). An evidence-based guide for college and university teaching: Developing the model teacher. Routledge.

Schumacher, J. R., Akers, E., & Taraban, R. (2016). Unskilled and unaware: A metacognitive bias. Retrieved from https://www.improvewithmetacognition.com/unskilled-unaware-metacognitive-bias/


New Year Metacognition

by Lauren Scharff, Ph.D., United States Air Force Academy *

Happy New Year to you! This seasonal greeting has many positive connotations, including new beginnings, hope, fresh starts, etc. But, it’s also strongly associated with the making of new-year resolutions, and that’s where the link to metacognition becomes relevant.

As we state on the Improve with Metacognition home page, “Metacognition refers to an intentional focusing of attention on the development of a process, so that one becomes aware of one’s current state of accomplishment, along with the situational influences and strategy choices that are currently, or have previously, influenced accomplishment of that process. Through metacognition, one should become better able to accurately judge one’s progress and select strategies that will lead to success.”

Although this site typically focuses on teaching and learning processes, we can be metacognitive about any process or behavior in which we might engage. A new year's resolution typically involves starting a new behavior that we deem to be healthier for us or stopping an already established behavior that we deem to be unhealthy for us. Either way, some effort is likely to be involved, because if it were going to be easy, we wouldn't need a resolution to make the change.

Effort alone, however, is unlikely to lead to success. Just like students who "study harder" without being metacognitive about it, people who simply "try hard" to make a change will often be unsuccessful. This is because most behaviors, including learning, are complex. There are a multitude of situational factors and personal predispositions that interact to influence our success in attaining our behavioral goals. Thus, it's unlikely that a single strategy will work at all times. In fact, persisting with an ineffective strategy will lead to frustration, cynicism, and giving up on one's resolution.

Now, typically, I am not the sort of person who actually makes new-year resolutions. But this new year presents a new situation for me. I will be on sabbatical and working from home. I have prepared a fairly ambitious list of professional development activities that I hope to accomplish. I know I am capable of each of them. But, I also know that I will be working in an environment with a different group of distractions and without many external deadlines. Instead of committee work, grading, short turn-around taskers, and meetings with students and colleagues preventing me from working on my publications and other professional development activities, I will have a dog with big brown eyes who would love to go for a walk, children who need attention when they’re home from school, and projects at home that I usually can put out of mind when I’m at the office.

My resolution to myself for the coming 6 months of my sabbatical is that I will create a positive work environment for myself and accomplish my list of professional development activities while maintaining a balance with my family and personal goals. I know that I will need a variety of strategies, and that I will need to take time to reflect on the state of my progress and show self-regulation in my choice of strategies at different times. I plan to use a journal to help me with my awareness of the alignment between my daily goals and the activities in which I choose to engage in order to accomplish those goals.[1] This awareness will guide my self-regulation when, inevitably, I get off track. I also plan to make some public commitments and provide updates to my friends and colleagues regarding specific goals I plan to accomplish at specific times, as public commitment provides motivation, often results in social support, and is another way to encourage self-awareness and self-regulation, i.e. metacognition.

I’ll let you know how it goes in 6 months. 🙂  Meanwhile, Happy New Year and all the best to you with your new-year resolutions. Try using the tools of metacognition to help you succeed!

[1] See our preliminary research summary about the effectiveness of instructors using journals to enhance their metacognitive instruction.

* Disclaimer: The views expressed in this document are those of the author and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.