Two forms of ‘thinking about thinking’: metacognition and critical thinking

by John Draeger (SUNY Buffalo State)

In previous posts, I have explored the conceptual nature of metacognition and shared my attempts to integrate metacognitive practices into my philosophy courses. I am also involved in a campuswide initiative that seeks to infuse critical thinking throughout undergraduate curricula. In my work on both metacognition and critical thinking, I often find myself using ‘thinking about thinking’ as a quick shorthand for both. And yet, I believe metacognition and critical thinking are distinct notions. This post will begin to sort out some differences.

My general view is that the phrase ‘thinking about thinking’ can be the opening move in a conversation about either metacognition or critical thinking. Lauren Scharff and I, for example, took this tack when we explored ways of unpacking what we mean by ‘metacognition’ (Scharff & Draeger, 2014). We considered forms of awareness, intentionality, and the importance of understanding various processes. More specifically, metacognition encourages us to monitor the efficacy of our learning strategies (e.g., self-monitoring) and prompts us to use that understanding to guide our subsequent practice (e.g., self-regulation). It is a form of thinking about thinking. We need to think about how we think about our learning strategies and how to use our thinking about their efficacy to think through how we should proceed. In later posts, we have continued to refine a more robust conception of metacognition (e.g., Scharff 2015, Draeger 2015), but ‘thinking about thinking’ was a good place to start.

Likewise, the phrase ‘thinking about thinking’ can be the opening move in conversations about critical thinking. Given the wide range of program offerings on my campus, defining ‘critical thinking’ has been a challenge. Critical thinking is a collection of skills that can vary across academic settings and how these skills are utilized often requires disciplinary knowledge. For example, students capable of analyzing how factors such as gender, race, and sexuality influence governmental policy may have difficulty analyzing a theatrical performance or understanding the appropriateness of a statistical sampling method. Moreover, it isn’t obvious how the skills learned in one course will translate to the course down the hall. Consequently, students need to develop a variety of critical thinking skills in a variety of learning environments. As we began to consider how to infuse critical thinking across the curriculum, the phrase ‘thinking about thinking’ was something that most everyone on my campus could agree upon. It has been a place to start as we move on to discuss what critical thinking looks like in various domains of inquiry (e.g., what it means to think like an artist, biologist, chemist, dancer, engineer, historian, or psychologist).

‘Thinking about thinking’ captures the idea that students need to think about the kinds of thinking skills they are trying to master, and teachers need to be explicit about those skills if their students are to have any hope of learning them. This applies to both metacognition and critical thinking. For example, many students are able to solve complex problems, craft meaningful prose, and create beautiful works of art without understanding precisely how they did it. Such students might be excellent thinkers, but unless they are aware of how they did what they did, it is also possible that they just got lucky. Both critical thinking and metacognition help ensure that students can reliably achieve desired learning outcomes. Both require practice, and both require explicit awareness of the relevant processes. More specifically, however, critical thinkers are aware of what they are trying to do (e.g., what it means to think like an artist, biologist, chemist, dancer, engineer, historian, or psychologist), while metacognitive thinkers are aware of whether their particular strategies are effective (e.g., whether someone is thinking effectively as an artist, biologist, chemist, dancer, engineer, historian, or psychologist). Critical thinking and metacognition, therefore, differ in the object of awareness. Critical thinking involves an awareness of the mode of thinking within a domain (e.g., questioning assumptions about gender, determining the appropriateness of a statistical method), while metacognition involves an awareness of the efficacy of particular strategies for completing that task.

‘Thinking about thinking’ is a good way to spark conversation with our colleagues and our students about a number of important skills, including metacognition and critical thinking. In particular, it is worth asking ourselves (and relaying to our students) what it might mean for someone to think like an artist or a zoologist (critical thinking) and how we would know whether that artist or zoologist was thinking effectively (metacognition). As these conversations move forward, we should also think through the implications for our courses and programs of study. How might this ongoing conversation change course design or methods of instruction? What might it tell us about the connections between courses across our campuses? ‘Thinking about thinking’ is a great place to start such conversations, but we must remember that it is only the beginning.

References

Draeger, John (2015). “Exploring the relationship between awareness, self-regulation, and metacognition.” Retrieved from https://www.improvewithmetacognition.com/exploring-the-relationship-between-awareness-self-regulation-and-metacognition/

Scharff, Lauren & Draeger, John (2014). “What do we mean when we say ‘Improve with metacognition’? (Part One).” Retrieved from https://www.improvewithmetacognition.com/what-do-mean-when-we-say-improve-with-metacognition/

Scharff, Lauren (2015). “What do we mean by ‘metacognitive instruction’?” Retrieved from https://www.improvewithmetacognition.com/what-do-we-mean-by-metacognitive-instruction/


Metacognitive Judgments of Knowing

Roman Taraban, Ph.D., Dmitrii Paniukov, John Schumacher, and Michelle Kiser, Texas Tech University

“The more you know, the more you know you don’t know.” Aristotle

Students often make judgments of learning (JOLs) when studying. Essentially, they make a judgment about future performance (e.g., on a test) based on a self-assessment of their knowledge of the studied material. JOLs are therefore considered metacognitive judgments: judgments about what a person knows, often related to some future purpose. Students’ accuracy in making these metacognitive judgments is academically important. If students make accurate JOLs, they will devote just the right amount of time to mastering academic materials. If they do not devote enough time to study, they will underperform on course assessments; if they spend more time than necessary, they are being inefficient.

As instructors, it would be helpful to know how accurate students are in making these judgments. There are several ways to measure the accuracy of JOLs; here we focus on one of them, termed calibration. Calibration is the difference between a student’s JOL for some future assessment and their actual performance on that assessment. In the study described here, college students made JOLs (“On a scale of 0 to 100, what percent of the material do you think you can recall?”) after they read a brief expository text. Actual recall was measured in idea units (IUs), the chunks of meaningful information in the text (Roediger & Karpicke, 2006). Calibration is here defined as JOL – recalled IUs, or simply, predicted recall minus actual recall. A positive calibration score indicates some degree of overconfidence; a negative score indicates some degree of underconfidence; a score of zero indicates a perfectly calibrated judgment.
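As a concrete sketch, the calibration score described above can be computed in a few lines. The function and the numbers below are hypothetical illustrations of the sign convention, not code or data from the study:

```python
def calibration(jol_percent, recalled_ius, total_ius=30):
    """Predicted recall minus actual recall, both as proportions.

    Positive -> overconfident; negative -> underconfident; zero -> perfectly calibrated.
    """
    predicted = jol_percent / 100.0    # JOL given on a 0-100 scale
    actual = recalled_ius / total_ius  # proportion of idea units actually recalled
    return predicted - actual

# A reader who predicts 70% recall but recalls 12 of 30 idea units (40%)
# is overconfident by 0.30.
print(round(calibration(70, 12), 2))
```

With a JOL of 50 and 15 of 30 idea units recalled, the score is exactly zero, the perfectly calibrated case described in the text.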

The suggestion from Aristotle (see quote above) is that gains in how much we know lead us to underestimate how much we know; that is, we will be underconfident. Conversely, when we know little, we may overestimate how much we know; that is, we will be overconfident. Studies using JOLs have found that children are overconfident (predicted recall minus actual recall is positive) (Lipko, Dunlosky, & Merriman, 2009; Was, 2015). Children think they know more than they do, even after several learning trials with the material. Studies with adults have found an underconfidence-with-practice (UWP) effect (Koriat et al., 2002): the more individuals learn, the more they underestimate their knowledge. The UWP effect is consistent with Aristotle’s suggestion. The question we ask here is: which is it? If you lack knowledge, do your metacognitive judgments reflect overconfidence or underconfidence, and vice versa? Practically, as instructors, if students are poorly calibrated, what can we do to improve their calibration, that is, to recalibrate this metacognitive judgment?

We addressed this question with two groups of undergraduate students. Forty-three developmental-reading participants were recruited from developmental integrated reading and writing courses offered by the university, including Basic Literacy (n = 3), Developmental Literacy II (n = 29), and Developmental Literacy for Second Language Learners (n = 11). Fifty-two non-developmental participants were recruited from the Psychology Department subject pool. The non-developmental and developmental readers were comparable in mean age (18.3 and 19.8 years, respectively) and number of completed college credits (11.8 and 16.7, respectively), and each sample represented roughly fifteen academic majors. All participants received course credit. The students were asked to read one of two expository passages and then to recall as much as they could immediately afterward. The two texts were each about 250 words long, with an average Flesch-Kincaid grade level of 8.2, and each contained 30 idea units.

To answer our question, we first calculated calibration (predicted recall – actual recall) for each participant. We then divided the total sample of 95 participants into quartiles based on the number of idea units each participant recalled. The mean proportion of correctly recalled idea units (out of 30 possible), with standard deviations, in each quartile of the total sample was as follows:

Q1: .13 (.07); Q2: .33 (.05); Q3: .51 (.06); Q4: .73 (.09). Using quartile as the independent variable and calibration as the dependent variable, we found that participants were overconfident (predicted recall > actual recall) in all four quartiles. However, there was also a significant decline in overconfidence from Quartile 1 to Quartile 4: Q1: .51; Q2: .39; Q3: .29; Q4: .08. The participants in the highest quartile were nearly perfectly calibrated, over-predicting their actual performance by only about 8%, compared to the lowest quartile, who over-predicted by about 51%. This monotonic trend of decreasing overconfidence and improving calibration also held when we analyzed the two samples separately:

NON-DEVELOPMENTAL: Q1: .46; Q2: .39; Q3: .16; Q4: .10;

DEVELOPMENTAL: Q1: .57; Q2: .43; Q3: .39; Q4: .13.
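The quartile analysis above can be sketched in a few lines. The participant data below are fabricated stand-ins (the study’s raw data are not reproduced here), so the printed means will not match the reported values; the sketch only shows the procedure of ranking readers by recall, cutting the sample into quartiles, and averaging calibration within each:

```python
import random

random.seed(1)
TOTAL_IUS = 30  # idea units per passage, as in the study

# Hypothetical (jol_percent, recalled_ius) pairs for 95 participants
participants = [(random.randint(30, 90), random.randint(2, 25)) for _ in range(95)]

# Rank participants by actual recall and cut into four contiguous quartiles
ranked = sorted(participants, key=lambda p: p[1])
bounds = [round(len(ranked) * k / 4) for k in range(5)]
quartiles = [ranked[bounds[k]:bounds[k + 1]] for k in range(4)]

# Mean calibration (predicted minus actual recall proportion) per quartile
for k, group in enumerate(quartiles, start=1):
    mean_cal = sum(j / 100 - r / TOTAL_IUS for j, r in group) / len(group)
    print(f"Q{k}: mean calibration = {mean_cal:+.2f}")
```

Overconfidence appears as a positive mean calibration; in the study, that value shrank monotonically from Quartile 1 to Quartile 4.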

The findings here suggest that Aristotle may have been wrong when he stated that “The more you know, the more you know you don’t know.” Our findings suggest instead that the more you know, the more you know you know; that is, calibration improves as knowledge increases. What is striking here is the vulnerability of weaker learners to overconfidence. It is the learners who have not encoded much information from reading who have an inflated notion of how much they can recall. This is not unlike the children in the Lipko et al. (2009) research mentioned earlier. It is also clear in our analyses that typical college students, as well as developmental college students, are susceptible to overestimating how much they know.

It is not clear from this study what variables underlie low recall performance. Low background knowledge, limited vocabulary, and difficulty with syntax could all contribute to poor encoding of the information in the text and low subsequent recall. Nonetheless, our data do indicate that care should be taken in assisting students who fall into the lower performance quartiles to make better-calibrated metacognitive judgments. One way to do this might be to ask students to explicitly make judgments about future performance and then encourage them to reflect on the accuracy of those judgments after they complete the target task (e.g., a class test). Koriat et al. (1980) asked participants to give reasons for and against choosing responses to questions before the participants predicted the probability that they had chosen the correct answer. Prompting students to consider the amount and strength of the evidence for their responses reduced overconfidence. Metacognitive exercises like these may lead to better calibration.

References

Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6(2), 107-118.

Koriat, A., Sheffer, L., & Ma’ayan, H. (2002). Comparing objective and subjective learning curves: Judgments of learning exhibit increased underconfidence with practice. Journal of Experimental Psychology: General, 131, 147–162.

Lipko, A. R., Dunlosky, J., & Merriman, W. E. (2009). Persistent overconfidence despite practice: The role of task experience in preschoolers’ recall predictions. Journal of Experimental Child Psychology, 102(2), 152-166.

Roediger, H., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255.

Was, C. (2015). Some developmental trends in metacognition. Retrieved from https://www.improvewithmetacognition.com/some-developmental-trends-in-metacognition/