Comprehension Monitoring: The Role of Conditional Knowledge

By Antonio Gutierrez, Georgia Southern University

In my previous post, Metacognitive Strategies: Are They Trainable?, I explored the extent to which metacognitive strategies are teachable. In my own research on how well students monitor their comprehension during learning episodes, I discovered that students reported already having a repertoire of metacognitive strategies. Yet I have often found, in my own teaching and interaction with undergraduate and even graduate students, that having metacognitive declarative knowledge of strategies is often not sufficient to promote students' comprehension monitoring. For instance, students may already know to draw a diagram when they are attempting to learn new concepts. However, they may not know under which circumstances it is best to apply such a strategy. When students do not know when, where, and why to apply a strategy, they may in fact be needlessly expending cognitive resources for little to no learning benefit.

Schraw and Dennison (1994) argued that metacognition is divided into knowledge and regulation components. Knowledge comprises declarative knowledge about strategies, procedural knowledge of how to apply them, and conditional knowledge about when, where, and why to apply strategies given task demands. The more I engage students, inside and beyond my classes, the more convinced I become that the greatest gap in metacognitive knowledge lies not in declarative or procedural knowledge, but in conditional knowledge. Students clearly have a repository of strategies and procedures for applying them. However, they seem unable to apply those strategies effectively given the demands of the learning tasks in which they engage.

So how can we enhance students' conditional knowledge? Let's assume that Sally is attempting to learn the concept of natural selection in her biology lesson. As Sally attempts to connect what she is learning with prior knowledge in long-term memory, she realizes she may have misconceptions regarding natural selection. She also understands that she has a variety of strategies to assist her in navigating this difficult concept. However, she does not know or understand which strategy will optimize her learning of the concept. Thus, she resorts to a trial-and-error utilization of the strategies she thinks are "best" to help her. Here we see a clear example of inadequate conditional knowledge. Much time and cognitive effort can be saved if we enhance students' conditional knowledge. Calibration, the relationship between task performance and a judgment about that performance (Boekaerts & Rozendaal, 2010; Keren, 1991), is a related but distinct metacognitive process that involves the comprehension-monitoring element of metacognitive regulation.
As I continue my scholarship to deepen my understanding of calibration, I wonder whether conditional knowledge and calibration are more closely associated than researchers assume.
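To make the idea of calibration concrete, here is a minimal illustrative sketch (my own, not drawn from any of the cited studies): one common way to quantify calibration is to compare a learner's item-by-item confidence judgments with their actual performance, yielding an accuracy score (how far off the judgments are on average) and a signed bias score (whether the learner tends toward over- or underconfidence). The data below are hypothetical.

```python
def calibration_accuracy(confidence, performance):
    """Mean |confidence - performance| across items; 0 means perfect calibration."""
    assert len(confidence) == len(performance)
    return sum(abs(c - p) for c, p in zip(confidence, performance)) / len(confidence)

def calibration_bias(confidence, performance):
    """Mean (confidence - performance); positive values indicate overconfidence."""
    assert len(confidence) == len(performance)
    return sum(c - p for c, p in zip(confidence, performance)) / len(confidence)

# Hypothetical data: per-item confidence judgments (0-1 scale)
# and correctness on the same items (1 = correct, 0 = incorrect).
confidence = [0.9, 0.8, 0.6, 0.7, 0.5]
performance = [1, 1, 1, 0, 1]

print(calibration_accuracy(confidence, performance))  # ~0.38
print(calibration_bias(confidence, performance))      # ~-0.10 (slight underconfidence)
```

A well-calibrated learner, in this simple framing, is one whose accuracy score approaches zero; the bias score separately reveals the direction of any miscalibration.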

In my recent research on calibration, I have often asked why the literature on the effects of metacognitive strategy training on calibration is inconclusive. For instance, some studies have found positive effects on calibration (e.g., Gutierrez & Schraw, in press; Nietfeld & Schraw, 2002) while others have demonstrated no effect of strategy training on calibration (e.g., Bol et al., 2005; Hacker et al., 2008). This inconclusive evidence has frustrated me not only as a scholar but also as a teacher. I suspect that these mixed findings may arise in part because calibration researchers have neglected to address participants' metacognitive conditional knowledge. How can we as instructors hope to improve students' comprehension monitoring when the findings on the role metacognitive strategy instruction plays in calibration are inconclusive? Perhaps, as researchers and scholars of metacognition, we are asking the wrong questions. I argue that by improving students' metacognitive conditional knowledge, we can improve their ability to determine more effectively what they know and what they do not know about their learning (i.e., better calibrate their performance judgments to their actual performance). If students cannot effectively apply strategies given the demands of the learning episode (a conditional knowledge issue), how can we expect them to adequately monitor their comprehension (a regulation-of-learning issue)? Perhaps the next line of inquiry should focus squarely on enhancing students' conditional knowledge.


References

Boekaerts, M., & Rozendaal, J. S. (2010). Using multiple calibration measures in order to capture the complex picture of what affects students’ accuracy of feeling of confidence. Learning and Instruction, 20(4), 372-382. doi:10.1016/j.learninstruc.2009.03.002

Bol, L., Hacker, D. J., O’Shea, P., & Allen, D. (2005). The influence of overt practice, achievement level, and explanatory style on calibration accuracy and performance. The Journal of Experimental Education, 73, 269-290.

Gutierrez, A. P., & Schraw, G. (in press). Effects of strategy training and incentives on students’ performance, confidence, and calibration. The Journal of Experimental Education: Learning, Instruction, and Cognition.

Hacker, D. J., Bol, L., & Bahbahani, K. (2008). Explaining calibration accuracy in classroom contexts: The effects of incentives, reflection, and explanatory style. Metacognition and Learning, 3, 101-121.

Keren, G. (1991). Calibration and probability judgments: Conceptual and methodological issues. Acta Psychologica, 77(2), 217-273. http://dx.doi.org/10.1016/0001-6918(91)90036-Y

Nietfeld, J. L., & Schraw, G. (2002). The effect of knowledge and strategy explanation on monitoring accuracy. Journal of Educational Research, 95, 131-142.