Metacognitive Judgments of Knowing


Roman Taraban, Ph.D., Dmitrii Paniukov, John Schumacher, and Michelle Kiser, Texas Tech University

“The more you know, the more you know you don’t know.” Aristotle

Students often make judgments of learning (JOLs) when studying. Essentially, they make a judgment about future performance (e.g., a test) based on a self-assessment of their knowledge of studied items. Therefore, JOLs are considered metacognitive judgments. They are judgments about what the person knows, often related to some future purpose. Students’ accuracy in making these metacognitive judgments is academically important. If students make accurate JOLs, they will apply just the right amount of time to mastering academic materials. If students do not devote enough time to study, they will underperform on course assessments. If students spend more time than necessary, they are being inefficient.

As instructors, it would be helpful to know how accurate students are in making these judgments. There are several ways to measure the accuracy of JOLs. Here we will focus on one of these measures, termed calibration. Calibration is the difference between a student's JOL for some future assessment and his or her actual performance on that assessment. In the study we describe here, college students made JOLs ("On a scale of 0 to 100, what percent of the material do you think you can recall?") after they read a brief expository text. Actual recall was measured in idea units (IUs), the chunks of meaningful information in the text (Roediger & Karpicke, 2006). Calibration is defined here as JOL – recalled IUs, or simply, predicted recall minus actual recall. A positive result indicates some degree of overconfidence, a negative result indicates some degree of underconfidence, and a result of zero indicates perfect calibration.

The suggestion from Aristotle (see quote above) is that gains in how much we know lead us to underestimate how much we know, that is, we will be underconfident. Conversely, when we know little, we may overestimate how much we know, that is, we will be overconfident. Studies using JOLs have found that children are overconfident (predicted recall minus actual recall is positive) (Lipko, Dunlosky, & Merriman, 2009; Was, 2015). Children think they know more than they know, even after several learning trials with the material. Studies with adults have found an underconfidence-with-practice (UWP) effect (Koriat et al., 2002): the more individuals learn, the more they underestimate their knowledge. The UWP effect is consistent with Aristotle's suggestion. The question we ask here is this: if you lack knowledge, do your metacognitive judgments reflect overconfidence or underconfidence, and vice versa? Practically, as instructors, if students are poorly calibrated, what can we do to improve their calibration, that is, to recalibrate this metacognitive judgment?

We addressed this question with two groups of undergraduate students. Forty-three developmental-reading participants were recruited from developmental integrated reading and writing courses offered by the university, including Basic Literacy (n = 3), Developmental Literacy II (n = 29), and Developmental Literacy for Second Language Learners (n = 11). Fifty-two non-developmental participants were recruited from the Psychology Department subject pool. The non-developmental and developmental readers were comparable in mean age (18.3 and 19.8 years, respectively) and number of completed college credits (11.8 and 16.7, respectively), and each sample represented roughly fifteen academic majors. All participants took part for course credit. The students were asked to read one of two expository passages and then immediately recall as much as they could. The two texts were each about 250 words long, had an average Flesch-Kincaid readability of grade level 8.2, and contained 30 idea units each.

To answer our question, we first calculated calibration (predicted recall – actual recall) for each participant. Then we divided the total sample of 95 participants into quartiles, based on the number of idea units each participant recalled. The mean proportion of correctly recalled idea units, out of 30 possible, and standard deviation in each quartile for the total sample were as follows:

Q1: .13 (.07); Q2: .33 (.05); Q3: .51 (.06); Q4: .73 (.09). Using quartile as the independent variable and calibration as the dependent variable, we found that participants were overconfident (predicted recall > actual recall) in all four quartiles. However, there was also a significant decline in overconfidence from Quartile 1 to Quartile 4 as follows: Q1: .51; Q2: .39; Q3: .29; Q4: .08. Very clearly, the participants in the highest quartile were nearly perfectly calibrated, that is, they were over-predicting their actual performance by only about 8%, compared to the lowest quartile, who were over-predicting by about 51%. This monotonic trend of reducing overconfidence and improving calibration was also true when we analyzed the two samples separately:

NON-DEVELOPMENTAL: Q1: .46; Q2: .39; Q3: .16; Q4: .10;

DEVELOPMENTAL: Q1: .57; Q2: .43; Q3: .39; Q4: .13.
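The quartile analysis above can be sketched as follows: rank participants by recall, cut the ranked list into four equal groups, and average calibration within each group. The participant data below are fabricated purely for illustration; only the 30-idea-unit scoring and the calibration formula come from the study.

```python
# Sketch of the quartile analysis: split participants by recall, then
# average calibration (predicted - actual) within each quartile.
# The participant data below are hypothetical, not the study's data.
import statistics

# (recalled idea units out of 30, JOL as a proportion) per participant
participants = [(3, 0.60), (5, 0.55), (9, 0.70), (11, 0.65),
                (15, 0.75), (16, 0.80), (22, 0.85), (24, 0.82)]

# Sort by recall and cut into four equal-sized groups (quartiles)
ranked = sorted(participants, key=lambda p: p[0])
n = len(ranked) // 4
quartiles = [ranked[i * n:(i + 1) * n] for i in range(4)]

for q, group in enumerate(quartiles, start=1):
    mean_calibration = statistics.mean(jol - ius / 30 for ius, jol in group)
    print(f"Q{q}: mean calibration = {mean_calibration:+.2f}")
```

With data like these, mean calibration is positive in every quartile but shrinks from Q1 to Q4, mirroring the monotonic decline in overconfidence reported above.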

The findings here suggest that Aristotle may have been wrong when he stated that “The more you know, the more you know you don’t know.” Our findings would suggest that the more you know, the more you know you know. That is, calibration gets better the more you know. What is striking here is the vulnerability of weaker learners to overconfidence. It is the learners who have not encoded a lot of information from reading that have an inflated notion of how much they can recall. This is not unlike the children in the Lipko et al. (2009) research mentioned earlier. It is also clear in our analyses that typical college students as well as developmental college students are susceptible to overestimating how much they know.

It is not clear from this study what variables underlie low recall performance. Low background knowledge, limited vocabulary, and difficulty with syntax could all contribute to poor encoding of the information in the text and low subsequent recall. Nonetheless, our data do indicate that care should be taken in assisting students who fall into the lower performance quartiles to make better-calibrated metacognitive judgments. One way to do this might be to ask students to explicitly make judgments about future performance and then encourage them to reflect on the accuracy of those judgments after they complete the target task (e.g., a class test). Koriat et al. (1980) asked participants to give reasons for and against choosing responses to questions before the participants predicted the probability that they had chosen the correct answer. Prompting students to consider the amount and strength of the evidence for their responses reduced overconfidence. Metacognitive exercises like these may lead to better calibration.

References

Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6(2), 107-118.

Koriat, A., Sheffer, L., & Ma'ayan, H. (2002). Comparing objective and subjective learning curves: Judgments of learning exhibit increased underconfidence with practice. Journal of Experimental Psychology: General, 131, 147-162.

Lipko, A. R., Dunlosky, J., & Merriman, W. E. (2009). Persistent overconfidence despite practice: The role of task experience in preschoolers’ recall predictions. Journal of Experimental Child Psychology, 102(2), 152-166.

Roediger, H., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255.

Was, C. (2015). Some developmental trends in metacognition. Retrieved from https://www.improvewithmetacognition.com/some-developmental-trends-in-metacognition/