Are Current Metacognition Measures Missing the Target?


by Chris Was, Kent State University

Clearly, there is some agreement as to what metacognition is and how to define it. In layman’s terms, metacognition is often described as “thinking about thinking.” More formally, it is often defined as knowledge of and control over one’s cognitive processes.

There is also agreement that metacognition is necessary for one to successfully learn from instruction. Models such as Nelson and Narens’ (1990) and the one presented by Tobias and Everson (2009) stress knowledge of one’s own state of knowledge as a key to learning.

In laboratory settings we have a number of “measures” of metacognition. Judgments of knowing, judgments of learning, feelings of knowing, and the like are all research paradigms used to understand individuals’ ability to assess and monitor their knowledge. These measures have been demonstrated to predict differences in study strategies, learning outcomes, and a host of other performance measures. However, individuals in a laboratory do not have the same pressures, needs, motivations, and desires as a student preparing for an exam.
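As a concrete (if oversimplified) illustration of how such a measure can be scored, here is a minimal sketch in Python with invented item data: the student first judges whether they know each item, is then tested on those items, and monitoring accuracy is simply the agreement between judgments and outcomes. The Hamann-style coefficient used here is only one of several scoring options, and not necessarily the one used in the studies cited in this post.

```python
# A minimal sketch of one common way to score knowledge monitoring: students
# first judge whether they know each item, then are tested on those items, and
# accuracy is the agreement between judgments and outcomes. The Hamann-style
# coefficient below is one scoring option; the item data are invented.

def monitoring_accuracy(judgments, outcomes):
    """Agreement between "I know it" judgments and actual test outcomes.

    judgments: list of bools, True = student claimed to know the item
    outcomes:  list of bools, True = student answered the item correctly
    Returns a value in [-1, 1]; 1 = perfect monitoring, 0 = chance-level.
    """
    hits = misses = false_alarms = correct_rejections = 0
    for judged_known, correct in zip(judgments, outcomes):
        if judged_known and correct:
            hits += 1                   # claimed to know it, and did
        elif judged_known and not correct:
            false_alarms += 1           # claimed to know it, but did not
        elif correct:
            misses += 1                 # claimed not to know it, but did
        else:
            correct_rejections += 1     # claimed not to know it, and did not
    n = len(judgments)
    return ((hits + correct_rejections) - (false_alarms + misses)) / n

# A hypothetical student with fairly good, but imperfect, monitoring:
judgments = [True, True, True, False, False, True, False, True, False, True]
outcomes  = [True, True, False, False, False, True, True, True, False, True]
print(monitoring_accuracy(judgments, outcomes))  # 0.6
```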

How do we measure differences in students’ ability to monitor their knowledge, not in the lab but in the classroom, so that we can help those who need to improve their metacognition? Although much of the research I have conducted with colleagues in metacognition has included attempts to both measure and increase metacognition in the college classroom (e.g., Isaacson & Was, 2010; Beziat, Was, & Isaacson, 2014), I am not convinced that we have always successfully measured these differences.

Simple measures of metacognitive knowledge monitoring administered at the beginning of a semester-long course account for significant amounts of variance in end-of-semester cumulative final exams (e.g., Hartwig, Was, Dunlosky, & Isaacson, 2013). However, the amount of variance for which metacognitive knowledge monitoring accounts in these models is typically less than 15%, and often much less. If knowledge monitoring is key to learning, why does it account for so little variance in measures of academic performance? Are the measures of knowledge monitoring inaccurate? Do scores on a final exam depend upon the life circumstances of the student during the semester? The answer to both of the latter questions is likely yes. But even more important, it could be that students are aware that their metacognitive monitoring is inaccurate and therefore use other criteria to predict their academic performance.
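To make concrete what it means for a predictor to account for less than 15% of the variance, here is a small sketch with invented numbers: with a single predictor, the squared correlation between a monitoring score and a final-exam score is the R-squared of the simple regression.

```python
# A toy illustration of what "accounts for less than 15% of the variance"
# means: with a single predictor, the squared correlation between a
# knowledge-monitoring score and a final-exam score is the R^2 of the
# simple regression. All numbers below are invented.
import numpy as np

rng = np.random.default_rng(0)
n_students = 120

monitoring = rng.normal(0.5, 0.2, n_students)   # monitoring-accuracy scores
noise = rng.normal(0, 12, n_students)           # everything else: effort, life events, etc.
final_exam = 60 + 25 * monitoring + noise       # a deliberately weak relationship

r = np.corrcoef(monitoring, final_exam)[0, 1]
print(f"r = {r:.2f}, R^2 = {r ** 2:.2f}")       # R^2 lands in the neighborhood of 0.15
```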

The debate over whether the unskilled are unaware continues (cf. Kruger & Dunning, 1999; Miller & Geraci, 2011). Kruger and Dunning have provided evidence that poor academic performers carry a double burden. First, they are unskilled; that is, they lack the knowledge or skill to perform well. Second, they are unaware; that is, they do not know they lack the knowledge or skill, and they therefore tend to be overconfident when predicting future performance.

There is, however, a good deal of evidence that low-performing students are aware that their predictions of exam performance are overconfident. When asked to predict how well they will do on a test, the lowest-performing students often predict scores well above what they eventually earn, but when asked how confident they are in those predictions, these students often report little confidence.

So why does a poor-performing student predict that they will perform well on an exam when they are not confident in that prediction? Interestingly, my colleagues and I (as have others) have collected data demonstrating that many students scoring near or above the class average under-predict their scores and are just as uncertain about what their actual scores will be.
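The calibration measure behind these observations is straightforward: bias is simply a student’s predicted score minus their actual score, with confidence in the prediction collected as a separate rating. The sketch below, using entirely invented numbers, illustrates the kind of pattern described above.

```python
# A toy illustration of the calibration measure described above: bias is a
# student's predicted exam score minus their actual score, and confidence in
# that prediction is collected as a separate rating. All numbers are invented.

students = [
    # (predicted score, actual score, confidence in the prediction 0-100)
    (85, 62, 40),   # low performer: over-predicts, but reports low confidence
    (80, 58, 35),
    (78, 65, 45),
    (75, 83, 70),   # near/above average: under-predicts
    (82, 90, 75),
    (88, 94, 80),
]

for predicted, actual, confidence in students:
    bias = predicted - actual   # positive = overconfident prediction
    print(f"predicted {predicted}, scored {actual}, bias {bias:+d}, confidence {confidence}")
```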

An area we are beginning to explore is the relationship between ego-protection mechanisms and metacognition. As I stated earlier, students in a course, be it K-12, post-secondary, or even adult education, are dealing with the demands of the course, their own goals and the instructor’s goals, their attributions of success and failure in the course, and a multitude of other personal issues that may influence their performance predictions. The following is an anecdotal example from a student of mine. After several exams (in one of my undergraduate courses I administer 12 exams a semester plus a final exam) for which students were required to predict their test scores, I asked a student why she consistently predicted her score to be 5-10 points lower than the grade she would receive. “Because when I do better than I predict, I feel good about my grade,” was her response.

My argument is that if we examine our students’ metacognition, or try to improve it, in isolation, without attempting to understand the other factors (e.g., motivation) that influence students’ perceptions of their knowledge and future performance, we are not likely to succeed.

Isaacson, R., & Was, C. A. (2010). Believing you’re correct vs. knowing you’re correct: A significant difference? The Researcher, 23(1), 1-12.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

Miller, T. M., & Geraci, L. (2011). Unskilled but aware: Reinterpreting overconfidence in low-performing students. Journal of Experimental Psychology: Learning, Memory, and Cognition. doi:10.1037/a0021802

Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and some new findings. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 26, pp. 125-173). New York: Academic Press.

Tobias, S., & Everson, H. (2009). The importance of knowing what you know: A knowledge monitoring framework for studying metacognition in education. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education (pp. 107-128). New York, NY: Routledge.

Beziat, T. R. L., Was, C. A., & Isaacson, R. M. (2014). Knowledge monitoring accuracy and college success of underprepared students. The Researcher, 26(1), 8-13.