To Test or Not to Test: That is the Metacognitive Question

by John Schumacher & Roman Taraban at Texas Tech University

In prepping for upcoming classes, we are typically interested in how best to structure the class to promote the most effective learning. Applying best-practices recommendations from the literature, we try to implement active learning strategies that go beyond simple lecturing. One such strategy that research has found effective is testing. The inference to draw from the research literature is quite simple: test students frequently, informally, and creatively, over and above standard course tests such as a midterm and final. Testing is a useful assessment tool, but research has shown that it is also a learning tool that promotes learning above and beyond simply rereading material (Roediger & Karpicke, 2006a). This is called the testing effect. In controlled studies, researchers have shown testing effects with a variety of materials, including expository texts and multimedia presentations (e.g., Carrier & Pashler, 1992; Huff, Davis, & Meade, 2013; Johnson & Mayer, 2009; Roediger & Karpicke, 2006b). Testing has been found to increase learning when implemented in a classroom setting (McDaniel, Anderson, Derbish, & Morrisette, 2007) and is a useful learning tool for people of all ages (Meyer & Logan, 2013). The theoretical explanation for the benefits of testing is that testing strengthens retrieval paths to the stored information in memory more than simply rereading the material does. As a result, a person can later recover the information from memory more effectively.

Although implementing testing and other active learning strategies in the classroom is useful in guiding and scaffolding student learning, it is important that we develop an understanding of when and for whom these strategies are most helpful. Specifically, regarding testing, research from our lab and others is starting to show that testing may not always be as beneficial as past research suggests. Characteristics of the students themselves may nullify or even reverse the benefits of testing. Thus, the first question we address is whether frequent classroom testing will benefit all students. A more pertinent question, which is our second question, is whether frequent testing develops metacognitive practices in students. We will discuss these in turn.

In a formal study of the testing effect, or in an informal test in any classroom, one needs two conditions: a control condition in which participants study the material on their own for a fixed amount of time, and an experimental condition in which participants study and are tested over the material, for instance, in a Study-Test-Study-Test format. Both groups spend an equal amount of time either simply studying or studying and testing. All participants take a final recall test over the material. Through a series of testing-effect studies incorporating expository texts as the learning material, we have produced a consistent grade-point average (GPA) by testing-effect interaction. This means that the benefits of testing (i.e., better later retrieval of information) depend on students’ GPAs! A closer look at this interaction showed us that students with low GPAs benefited most from the implementation of testing, whereas mid to high GPA students benefited just as much by simply studying the material.

While at this preliminary stage it is difficult to ascertain why exactly low GPA students benefit from testing in our experiments while others do not, a few observations can be put forth. First, at the end of the experiments, we asked participants to report any strategies they used on their own to help them learn the materials. Metacognitive reading strategies that the participants reported included focusing on specific aspects of the material, segmenting the material into chunks, elaborating on the material, and testing themselves. Second, looking further into the students’ self-reports of metacognitive strategy use, we found that participants in the medium to high GPA range used these strategies often, while low GPA students used them less often. Simply, the self-regulated use of metacognitive strategies was associated with higher GPAs and better recall of the information in the texts that the participants studied. Lower GPA students benefited when the instructor deliberately imposed self-testing.

These results are interesting because they indicate that the classroom implementation of testing may be beneficial only to low-achieving students, because they either do not have metacognitive strategies at their disposal or are not applying those strategies. High-achieving students may already have metacognitive strategies at their disposal and may not need the extra guidance set in place by the instructor.

Another explanation for the GPA and testing-effect interaction may simply be motivation. Researchers have found that GPA correlates with motivation (Mitchell, 1992). It is possible that implementing a learning strategy benefits low GPA students because it forces them to work with the material. Motivation may also explain why GPA correlated with metacognitive strategy use. Specifically, if lower GPA students are less motivated to work with the material, it stands to reason that they would be less likely to employ learning strategies that take time and effort.

This leads to our second question: Does frequent testing develop metacognitive skills in students, particularly self-regulated self-testing? This is a puzzle that we cannot answer from the current studies. Higher-GPA students appear to understand the benefits of applying metacognitive strategies and do not appear to need additional coaxing from the experimenter/teacher to apply them. Will imposing self-testing, or any other strategy, on lower-GPA students lead them to eventually adopt these strategies on their own? This is an important question and one that deserves future attention.

While testing may be useful for bolstering learning, we suggest that it should not be blindly utilized in the classroom as a learning tool. A consideration of what is being taught and to whom will dictate the effectiveness of testing as a learning tool. As we have suggested, more research also needs to be done to figure out how to bring metacognitive strategies into students’ study behaviors, particularly low-GPA students.

References

Carrier, M., & Pashler, H. (1992). The influence of retrieval on retention. Memory & Cognition, 20(6), 633-642.

Huff, M. J., Davis, S. D., & Meade, M. L. (2013). The effects of initial testing on false recall and false recognition in the social contagion of memory paradigm. Memory & Cognition, 41(6), 820-831.

Johnson, C. I., & Mayer, R. E. (2009). A testing effect with multimedia learning. Journal of Educational Psychology, 101(3), 621-629.

McDaniel, M. A., Anderson, J. L., Derbish, M. H., & Morrisette, N. (2007). Testing the testing effect in the classroom. European Journal of Cognitive Psychology, 19(4-5), 494-513.

Meyer, A. D., & Logan, J. M. (2013). Taking the testing effect beyond the college freshman: Benefits for lifelong learning. Psychology and Aging, 28(1), 142-147.

Mitchell, J. V., Jr. (1992). Interrelationships and predictive efficacy for indices of intrinsic, extrinsic, and self-assessed motivation for learning. Journal of Research and Development in Education, 25(3), 149-155.

Roediger, H., & Karpicke, J. D. (2006a). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1(3), 181-210.

Roediger, H., & Karpicke, J. D. (2006b). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255.


Positive Affective Environments and Self-Regulation

by Steven Fleisher at CSU Channel Islands

Although challenging at times, providing positive affective experiences is necessary for student metacognitive and self-regulated learning. In a classroom, caring environments are established when teachers not only provide structure but also attune to the needs and concerns of their students. As such, effective teachers establish environments of safety and trust, as opposed to merely environments of compliance (Fleisher, 2006). When trust is experienced, learning is facilitated through the mechanisms of autonomy support as students discover how the academic information best suits their needs. In other words, students are supported in moving toward greater intrinsic (as opposed to extrinsic) motivation and self-regulation, and ultimately enhanced learning and success (Deci & Ryan, 2002; Pintrich, 2003).

Autonomy and Self-Regulation

In an academic context, autonomy refers to students knowing their choices and alternatives and self-initiating their efforts in relation to those alternatives. For Deci and Ryan (1985, 2002), a strong sense of autonomy within a particular academic task would be synonymous with being intrinsically motivated and, thus, intrinsically self-regulated. On the other hand, a low sense of autonomy within a particular academic context would be synonymous with being extrinsically motivated and self-regulated. Students with a low sense of autonomy might say, “You just want us to do what you want, it’s never about us,” while students with a strong sense of autonomy might say, “We can see how this information may be useful someday.” The non-autonomous students feel controlled, whereas the autonomous students know they are in charge of their choices and efforts.

Even more relevant to the classroom, Pintrich (2003) reported that the more intrinsically motivated students have mastery goal-orientations (a focus on effort and effective learning strategy use) as opposed to primarily performance goal-orientations (actually a focus on defending one’s ability). These two positions are best understood under conditions of failure. Performance-oriented students see failure as pointing out their innate inabilities, whereas mastery-oriented students see failure as an opportunity to reevaluate and reapply their efforts and strategies as they build their abilities. Thus, in the long run, mastery-oriented students end up “performing” the best academically.

The extrinsically motivated students perceive that the teacher, not they themselves, is in charge of whether or not they are rewarded for their work. This extrinsic orientation may facilitate performance; however, it can backfire. These students can become unwilling to put forth a full effort for fear of failure or judgment. They feel a compulsion for performance, which can result in a refusal to try to meet goals. They may come to prefer unchallenging courses, fail, or drop out entirely. On the other hand, students with intrinsic goal-orientations realize that they are in charge of their reasons for acting. Metacognitively, they are aware of their alternatives and strategies and self-regulate accordingly as they apply the necessary effort toward their learning tasks. These students would sense that the classroom provided an environment for exploring the subject matter in relevant and meaningful ways, and they would identify how and where to best apply their learning efforts.

Strategies for the Classroom

As with autonomy (minimum to maximum), motivation and self-regulation exist on a continuum (extrinsic to intrinsic), as opposed to existing at one end or the other. Here are a couple of instructional strategies that I have found support students in their movement toward greater autonomy and intrinsic motivation and self-regulation.

Knowledge surveys, for example, offer a course tool for organizing content learning and assessing student intellectual development (Nuhfer & Knipp, 2003). These surveys consist of questions that represent the breadth and depth of the course, including the main concepts, the related content information, and the different levels of reasoning to be practiced and assessed. I have found that using knowledge surveys to disclose to students where a course is going and why helps them take charge of their learning. This type of transparency helps students discover ways in which their learning efforts are effective.

Cooperative learning strategies (Millis & Cottell, 1998) provide an ideal counterpart to knowledge surveys. Cooperative learning (for instance, working in groups or teaching your neighbor) offers both positive learning and positive affective experiences. These learning experiences, between students and between teachers and students, support the development of autonomy, as well as intrinsic motivation and self-regulation. For example, when students work together effectively in applications of course content, they come to see through one another’s perspectives the relevance of the material, while gaining competency as well as insights into how to gain that competency. When students are aware, by way of the knowledge surveys, of the course content and levels of reasoning required, and when these competencies and related learning strategies are practiced, reflected upon, and attained, learning and metacognitive learning are engaged.

References

Deci, E. L. & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. New York: Plenum Press.

Deci, E. L. & Ryan, R. M. (2002). Handbook of self-determination research. Rochester, NY: The University of Rochester Press.

Fleisher, S. C. (2006). Intrinsic self-regulation in the classroom. Academic Exchange Quarterly, 10(4), 199-204.

Millis, B. J. & Cottell, P. G. (1998). Cooperative learning for higher education faculty. American Council on Education: Oryx Press.

Nuhfer, E. & Knipp, D. (2003). The knowledge survey: A tool for all reasons. To Improve the Academy, 21, 59-78.

Pintrich, P. R. (2003). Motivation and classroom learning. In W. M. Reynolds & G. E. Miller (Eds.), Handbook of psychology: Educational psychology, Volume 7. Hoboken, NJ: John Wiley & Sons.


Using Just-in-Time Assignments to Promote Metacognition

by John Draeger (SUNY Buffalo State)

In a previous post entitled “Just-in-time for metacognition,” I argued that Just-in-Time teaching techniques could be used to promote both higher-order thinking and metacognition. Just-in-Time teaching techniques require that students submit short assignments prior to class for review by the instructor before class begins (Novak et al., 1999; Simkins & Maier, 2009; Scharff et al., 2011). In my philosophy courses, students send their answers to me electronically the night before class, and I spend the morning of class using their answers to shape my pre-class planning. I’ve had success with higher-order-thinking questions, but I tended to ask students questions about their learning process only when the class had clearly gone off track. Since I’ve become convinced that developing good metacognitive habits requires practice, I’ve made metacognitive questions a regular component of my Just-in-Time assignments. In this post, I thought I would let you know how things are going.

Research shows that students learn more effectively when they are aware of their own learning process (I encourage you to surf around this site for examples). Borrowing from Tanner (2012) and Scharff (2014), I have asked students to think about why and how they engage in various learning strategies (e.g., reading, writing, reflecting). More specifically, I have asked: What was the most challenging part of the reading? Was the current reading more challenging than the last? What was the most useful part of the reading? What was your reading strategy this week? How might you approach the reading differently next time? What was the most challenging part of the last writing assignment? How might you approach your next writing assignment differently? What are your learning goals for the week?

Responses from students at all levels have been remarkably similar. In particular, student responses fall into three broad categories: general student commentary (e.g., comments about the course, the reading, or a particular assignment); content (e.g., students reframe the metacognition question and answer it using course content); and reflective practice (e.g., students actually reflect on their learning process).

First Type of Response: General Commentary

  • When asked to describe the most challenging part of the reading, students took the opportunity to observe that the reading was too long, too boring, or interesting but confusing.
  • When asked to describe the most useful part of the reading, students often said that the question was difficult to answer because the reading was too long, too boring, or it was interesting but confusing.
  • When asked about their reading strategy, students observed that they did their best but the reading was too long, too boring, or interesting but confusing.
  • When asked about their learning goals for the week, students said that the question was strange or off the wall and that they had never been asked such a thing before.

Second Type of Response: Content

  • When asked to describe the most challenging part of the reading, students identified particular examples that were hard to follow and claims that seemed dubious.
  • When asked to describe the most useful part of the reading, students often restated the central question of the week (e.g., is prostitution morally permissible? should hate speech be restricted?) or summarized big issues (e.g., liberty argument for the permissibility of prostitution or hate speech).
  • When asked about their reading strategy, students often said that they wanted to understand a particular argument for that day (e.g., abortion, euthanasia, prostitution).
  • When asked their learning goal for the week, students said that they wanted to explore a big question (e.g., the nature of liberty or equality) and put philosophers into conversation (this is a major goal in all my courses).

Third Type of Response: Reflective practice

  • When asked to describe the most challenging part of the reading, students said that they didn’t give themselves enough time, they stretched it over multiple days, or they didn’t do it at all.
  • When asked about the most useful part of the reading, some students said that the reading forced them to challenge their own assumptions (e.g., “I always figured prostitution was disgusting, but maybe not”).
  • When asked about their reading strategies, some said that they had to read the material several times. Some said they skimmed the reading and hoped they could piece it together in class. Others found writing short summaries to be essential.
  • When asked about their learning goals for the week, some students reported wanting to become more open-minded and more tolerant of people with differing points of view.

Responses to the metacognitive prompts have been remarkably similar from students in my freshman to senior level courses. In contrast, I can say that there’s a marked difference by class year in responses to higher-order thinking prompts, possibly because I regularly use student responses to higher-order thinking prompts to structure class discussion. While I gave students some feedback on their metacognitive prompt responses, in the future I could be more intentional about using their responses to structure discussions of the student learning process.

I also need to refine my metacognition-related pre-class questions. For example, asking students to discuss the most challenging part of a reading assignment encourages students to reflect on roadblocks to understanding. The question is open-ended in a way that allows students to locate the difficulty in a particular bit of content, a lack of motivation, or a deficiency in reading strategy. However, if I want them to focus on their learning strategies, then I need to focus the question in ways that prompt that sort of reflection. For example, I could reword the prompt as follows: Identify one challenging passage in the reading this week. Explain why you believe it was difficult to understand. Discuss what learning strategy you used, how you know whether the strategy worked, and what you might do differently next time. Revising the questions so that they have a more explicitly metacognitive focus is especially important given that students are often unfamiliar with metacognitive reflection. If I can be more intentional about how I promote metacognition in my courses, then perhaps there can be gains in the metacognitive awareness demonstrated by my students. I’ll keep you posted.

References

Novak, G., Patterson, E., Gavrin, A., & Christian, W. (1999). Just-in-time teaching: Blending active learning with web technology. Upper Saddle River, NJ: Prentice Hall.

Scharff, L. (2014). Incorporating metacognitive leadership development in class. Retrieved from https://www.improvewithmetacognition.com/incorporating-metacognitive-leadership-development-in-class/

Scharff, L., Rolf, J., Novotny, S., & Lee, R. (2011). Factors impacting completion of pre-class assignments (JiTT) in physics, math, and behavioral sciences. In C. Rust (Ed.), Improving student learning: Global theories and local practices: Institutional, disciplinary and cultural variations. Oxford, UK: Oxford Brookes University.

Simkins, S., & Maier, M. (2009). Just-in-time teaching: Across the disciplines, across the academy. Stylus Publishing, LLC.

Tanner, K. D. (2012). Promoting student metacognition. CBE-Life Sciences Education, 11(2), 113-120.


Interest Beyond the Ivory Tower

by David Westmoreland, U.S. Air Force Academy*

I recently had a speaking engagement at a local pub which took an interesting turn related to metacognition. “Science on Tap” is a loosely organized program that takes place in cities around the United States, in which people with an interest in science can meet with a scientist to learn about current developments. The topic that I chose to cover was not current at all – in fact, it was a historical narrative set in England during the 1870s. The story revolved around a public challenge, published in the journal Scientific Opinion, to prove that the earth is a sphere, as opposed to being flat. The author of the challenge, John Hampden, was a biblical literalist who offered to match a wager up to £500 (over $30,000 in today’s dollars) that would be held by an independent party until that person determined whether the burden of evidence had been met. The bet was picked up by Alfred Russel Wallace, who used a simple demonstration involving nothing more than wooden stakes with flags, a telescope, and a surveyor’s level to win. Those interested in the details of the story can find them in Schadewald (1978).

My intent in presenting the story was to engage the audience in the process of scientific reasoning – rather than presenting Wallace’s solution, I challenged those sharing a table to come up with a convincing demonstration using the same tools that Wallace employed. They succeeded, converging on a common theme similar to the one that Wallace used. In the Q & A that followed, the audience was clearly more interested in questions about the nature of thinking than in historical details. What is happening when two people view the same evidence and come to opposite conclusions? How often does a person make a deliberate attempt to view evidence through the lens of another? When someone rejects data in order to retain a belief, has rationalism been abandoned? The discussion was lively, engaging, and ultimately had to be cut off as we ran out of time. For me, it drove home the point that metacognitive thinking is of broad interest, not relegated to the halls of the academy.

Reference

Schadewald, R. (1978). He knew the earth is round, but his proof fell flat. Smithsonian Magazine, 9 (April), 101-113.

* Disclaimer: The views expressed in this document are those of the author and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.


Parallels: Instructors’ Metacognition Practices and their Mindsets

by Lauren Scharff, U. S. Air Force Academy

A week ago my institution held our annual Outstanding Educator’s Award Ceremony, at which the Chancellor of the University of Colorado – Colorado Springs, Dr. Pamela S. Shockley-Zalabak, gave the keynote address. Her presentation was engaging and right on target for the event. It helped honor the winners of the awards, and at the same time it was applicable to all the other faculty members who were in attendance. But what is prompting this blog piece is the parallel I found between her points about instructor mindset and our efforts to explore and develop metacognitive instruction (check out our Phase I research project summary).

As many of you are likely aware, and as cited by Dr. Shockley-Zalabak, Dr. Carol Dweck has led the research on the concept of mindset with respect to how it might impact educational (and many other) behaviors. She found evidence for two types of mindset: fixed and growth. Individuals who show a fixed mindset believe that characteristics such as scholastic ability, leadership potential, and speaking skills are innate rather than developable. In contrast, individuals with a growth mindset believe that such characteristics can be developed, and they seek opportunities to do so. These mindsets impact the likelihood that individuals will seek out challenging experiences (growth mindset) as opposed to seeking out experiences that will reinforce their current level of skill and avoid failure (fixed mindset). (Here is a nice review of Carol Dweck’s book, Mindset: The New Psychology of Success.)

Typically, research on mindset and efforts to shift mindsets from fixed to growth are focused on students. For example, see the nice review and developmental activity shared in Charity Peak’s blog post from February of this year. This emphasis on student mindset is similar to the emphasis on student rather than instructor metacognitive practices. While a student focus is extremely important (ultimately students are the target population of our educational efforts), we shouldn’t neglect to acknowledge, study, and develop instructors with respect to their mindset (and their metacognitive practices).

Dr. Shockley-Zalabak’s keynote presentation pointed out that attending to instructor mindset, and not just student mindset, is key to creating great educational climates. Once you think about instructor mindset, the implications become obvious. Instructor mindset can have large influences on student learning because mindset can impact instructor expectations about their students’ abilities and about their own teaching ability.

Instructors with fixed mindsets tend to believe that students either are or are not capable within their discipline. It’s not hard to imagine that students in such a teacher’s course might not thrive unless they showed early promise and were tagged as “talented” by their instructor. For a teacher with a fixed mindset, it might be difficult to understand why effort should be put into those students who don’t seem capable. Such instructors prefer to work only with the top students, not realizing how much potential they might be overlooking and inadvertently not developing.

Instructor mindset can also apply to instructors’ views about teaching ability. If they believe that great instructors are born, not made, then they will likely resist opportunities for professional development. New approaches and pedagogies are threatening because they present the possibility of a decreased sense of efficacy as instructors move out of their comfortable routine. Growth mindset teachers, on the other hand, will continuously seek out new approaches and, when those approaches don’t work well, view the experiences as learning opportunities.

Take a moment to evaluate yourself and your mindset tendencies. Where do you think you fall in the fixed-growth spectrum? (As with many evaluation tasks, it’s probably easier to roughly categorize some of your colleagues as having more fixed or growth mindsets about their students and their teaching than it is to accurately examine yourself.) Although it’s not always easy to attain, self-awareness is foundational for effective and intentional self-development.

Self-awareness is also one of the key components of metacognition, which leads me to wonder: Will metacognition lead someone to recognize that she needs to develop a growth mindset? Will a growth mindset lead her to become a more metacognitive instructor? These questions lead me to believe that future phases of our metacognitive instruction research project should include explicit efforts to develop awareness of one’s mindset in addition to awareness and self-regulation of the teaching strategies that one chooses.*

————–

* If you are interested in participating in a future phase of the metacognitive instruction study, please contact one of the two lead investigators: Dr. Lauren Scharff (laurenscharff@gmail.com) or Dr. John Draeger (draegejd@buffalostate.edu).



Developing Metacognitive Literacy through Role Play: Edward De Bono’s Six Thinking Hats

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

In recent posts on the “Improve with Metacognition” blog, we gained some rich contributions that are relevant to teaching metacognition across all disciplines. Scharff (2015) offered a worthy definition of metacognition as “the intentional and ongoing interaction between awareness and self-regulation.” Lauren Scharff’s definition references intentionality, which John Flavell, the founding architect of metacognitive theory, perceived as essential to doing metacognition. Actions that arise from intentional thinking are deliberate, informed, and goal-directed (see http://www.lifecircles-inc.com/Learningtheories/constructivism/flavell.html).

Dr. Edward de Bono created Six Thinking Hats as a training framework for thinking. De Bono’s hats assign six distinct modes of thinking. Each role is so simple and clear that the thinker can easily monitor whether she or he is engaged in the mode of thinking assigned by the role. Further, communicating that thinking through expressions and arguments to others familiar with the roles allows a listener to correctly assess the mode of thinking of the speaker or writer. Successful training eventually vests participants with the ability to shift comfortably between all six modes as a way to understand an open-ended problem from a well-rounded perspective before committing to a decision. Both training and application constitute role-play, in which each participant must, for a time, assume and adhere to the mode of thinking represented by each particular hat. During training, the participant experiences playing all six roles.

Six Thinking Hats: Summary of the Roles

The White Hat role offers the facts. It is neutral, objective and practical. It provides an inventory of the best information known without advocating for solutions or positions.

The Yellow Hat employs a sunny, positive affect to advocate for a particular position/action but always justifies the proposed action with supporting evidence. In short, this hat advocates for taking informed action.

The Black Hat employs a cautious and at times negative role in order to challenge proposed positions and actions, but this role also requires the challenging argument to be supported by evidence. This hat seeks to generate evidence-based explanations for why certain proposals may not work or may prove counter-productive.

The Red Hat’s role promotes expression of felt emotion—positive, negative, or apathetic—without any need to justify the expressed position with evidence. Red Hat thinking runs counter to the critical thinking promoted in higher education. However, refusing to allow voice to Red Hat thinking translates into losing awareness that such thinking exists and may ultimately undermine an evidence-based decision. De Bono recognized a sobering reality: citizens often make choices and take actions based upon affective feelings rather than upon good use of evidence.

The Green Hat role is provocative in that it questions assumptions and strives to promote creative thinking that leads to unprecedented ideas or possibly redefines the challenge in a new way. Because participants recognize that each presenter is playing a role, the structure encourages creativity in the roles of all hats. It enables a presenter to stretch and present an idea or perspective that he or she might feel too inhibited to offer if trepidation exists about being judged personally for doing so.

The Blue Hat is the control hat. It is reflective and introspective as it looks to ensure that the energy and contributions of all of the other hats are indeed enlisted in addressing a challenge. It synthesizes the awareness that grows from discussions and, in a group, is the hat that is charged with summarizing progress for other participants. When used by an individual working alone, assuming the role of the Blue Hat offers a check on whether the individual has actually employed the modes of all the hats in order to understand a challenge well.

Six Thinking Hats exercises are overtly metacognitive—intentional, deliberate, and goal-directed. One must remain focused on the role and the objective of contributing to meeting the challenge by advocating from thinking in that role. Those who have experienced training know how difficult it can be to thoughtfully argue for a position that one dislikes and how easily one can slip out of playing the assigned role into either one’s usual style of thinking and communicating or toward starting to advocate for one’s favored position.

Classroom use can take varied forms, and the most useful props are a one-page handout that concisely explains each role, and six hats in the appropriate colors. Two of the several formats that I have used follow.

(1) The class can observe a panel of six engage a particular challenge, with each member on the panel wearing the assigned hat role and contributing to discussing the challenge in that role. After the six participants have each contributed, they pass the hat one person clockwise and repeat the process until every person has assumed all six roles. Instructors, from time to time, pause the discussion and invite the observers to assume each of the roles in sequence.

(2) One can arrange the class into a large circle and toss a single hat in the center of the circle. Every person must mindfully assume the role of that hat and contribute. The instructor can serve the blue-hat role as a recorder at the whiteboard and keep a log of poignant contributions that emerge during the role-plays. The process continues until all in the class have experienced the six roles. In follow-up assignments, self-reflection exercises should require students to analyze a particular part of the assignment regarding the dominant kind of “colored hat” thinking that they are engaged in.

I first learned about Six Thinking Hats from a geology professor at Colorado School of Mines, who had learned it from Ruth Streveler, CSM’s faculty developer. The professor used it to good advantage to address open-ended case studies, such as deciding whether to permit a mine in a pristine mountain area or to develop a needed landfill near Boulder, Colorado. Subsequently, I have used it to good advantage in many classes and faculty development workshops.

When one develops the ability to use all six hats well, one actually enters the higher stages of adult developmental thinking models. All involve obtaining relevant evidence, weighing contradictory evidence, addressing affective influences, developing empathy with others’ oppositional viewpoints, and understanding the influences of one’s own biases and feelings on a decision (Nuhfer and Pavelich, 2001, mapped Six Thinking Hats onto several developmental models).

We can become aware of metacognition by reading about it, but we only become literate about metacognition through experiences gained by consciously applying it. Draeger (2015) offered thoughts that expanded our thinking about Scharff’s definition by suggesting that advantages can come from embracing metacognition as vague. The flexibility gained by practicing applications on diverse cases allows us to appreciate the plastic, complex nature of metacognition as we stretch to think well while engaging challenges.

Six Thinking Hats offers an amorphous approach to engaging nearly any kind of open-ended real-life challenge while mindfully developing metacognitive awareness and skill. After experiencing such an exercise, one can return to a definition of metacognition, like Lauren Scharff’s, and find deeper meanings within the definition that were likely not apparent from initial exposure to it.

Reference

Nuhfer, E., & Pavelich, M. (2001). Levels of thinking and educational outcomes. National Teaching and Learning Forum, 11(1), 9-11.


So Your Students Think They Are Left-Brained Thinkers or Kinesthetic Learners: Please God, No! How Metacognition Can Explain Students’ Misconceptions

By Aaron S. Richmond, Hannah M. Rauer, and Eric Klein

Metropolitan State University of Denver

Have you heard students say, “We only use 10% of our brain!” or “MMR shots cause Autism” or “My cousin has ESP…no seriously!” or “I am really good at multi-tasking.” or “I have high bodily-kinesthetic intelligence!”? Sadly, the list can go on, and on, and on. Our students, and the general public for that matter, hold many misconceptions and preformed, inaccurate naïve theories of the world, which often impair learning in the classroom (Dochy et al., 1999). These misconceptions are pervasive and extremely hard to change (Lilienfeld et al., 2009). Our research suggests that metacognition may be the key to understanding misconceptions.

My undergraduate students and I sought to understand the role metacognition could play in susceptibility to common psychology and education misconceptions. Prior to our study, most research in this area focused on the persistence of misconceptions (e.g., Kowalski & Taylor, 2009), how they relate to critical thinking skills (Taylor & Kowalski, 2004), or how to reduce misconceptions through direct instruction (e.g., Glass et al., 2008). However, our study was the first to investigate how metacognitive beliefs (e.g., metacognitive awareness, need for cognition, cognitive and learning strategy use, or actual metacognitive performance) may predict the prevalence of psychological and educational misconceptions.

We gave over 300 undergraduate freshmen a 65-item psychological and educational misconceptions inventory whose items were pooled from several studies (e.g., Amsel et al., 2009; Standing & Huber, 2003). We assessed metacognitive beliefs using the Need for Cognition Scale (NCS; Cacioppo, Petty, Feinstein, & Jarvis, 1996), the Memory Self-Efficacy Questionnaire (MSEQ; Berry, West, & Dennehey, 1989), the Metacognitive Awareness Inventory (MAI; Schraw & Dennison, 1994), the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich, Smith, Garcia, & McKeachie, 1991), and one direct measure of metacognition, calibration. Calibration is the degree to which learners understand what they know and what they do not know.
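In this literature, calibration is commonly quantified with the Goodman–Kruskal gamma correlation between item-by-item confidence judgments and actual accuracy: gamma compares all pairs of items and asks whether higher confidence goes with correct answers. The sketch below is a generic illustration of that computation, not the authors’ actual analysis code; the function name and sample data are invented.

```python
# Goodman-Kruskal gamma as a calibration index: compares every pair of test
# items and counts pairs where confidence and correctness agree in ordering
# (concordant) versus disagree (discordant). Tied pairs are ignored.
def goodman_kruskal_gamma(confidence, correct):
    """confidence: per-item confidence ratings; correct: per-item 0/1 accuracy.
    Returns gamma in [-1, 1]: +1 = confidence perfectly tracks accuracy,
    -1 = confidence is inversely related to accuracy, 0 = no association."""
    concordant = discordant = 0
    n = len(confidence)
    for i in range(n):
        for j in range(i + 1, n):
            s = (confidence[i] - confidence[j]) * (correct[i] - correct[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    if concordant + discordant == 0:
        return 0.0  # all pairs tied: no ordering information
    return (concordant - discordant) / (concordant + discordant)

# A well-calibrated student: more confident exactly on the items answered correctly.
print(goodman_kruskal_gamma([1, 2, 3, 4], [0, 0, 1, 1]))  # 1.0
# A miscalibrated student: most confident on the items answered incorrectly.
print(goodman_kruskal_gamma([4, 3, 2, 1], [0, 0, 1, 1]))  # -1.0
```

On this reading, a gamma near +1 describes a student who knows when she knows, and a gamma near -1 a student whose confidence is systematically misplaced.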

We found that metacognitive variables were highly predictive of students’ susceptibility to believing in educational and psychological misconceptions. Interestingly, the most powerful predictor was the direct measure of metacognition (i.e., calibration as measured through gamma). That is, the more accurate students were at knowing when they knew or did not know something (i.e., calibration), the less they believed in misconceptions. Also, the higher students scored on need for cognition, the more advanced their beliefs about how to regulate cognition, and the stronger their self-efficacy for learning preferences and control of learning beliefs, the less susceptible they were to believing in misconceptions.

What does this research tell us? We think that this is the first step in understanding the role metacognition plays in conceptual development (both inaccurate and accurate). Second, if teachers stress the importance of metacognitive development and teach students how to improve their metacognition, then one of the added benefits may be that students will have more accurate conceptual development. The natural progression in this research is to experimentally manipulate metacognitive instruction and see if it reduces educational and psychological misconceptions.

References

Amsel, E., Johnston, A., Alvarado, E., Kettering, J., Rankin, L., & Ward, M. (2009). The effect of perspective on misconceptions in psychology: A test of conceptual change theory. Journal of Instructional Psychology, 36(4), 289-295.

Berry, J. M., West, R. L. & Dennehey, D. M. (1989). Reliability and validity of the Memory Self-Efficacy Questionnaire. Developmental Psychology, 25(5), 701-713. doi:10.1037/0012-1649.25.5.701

Kowalski, P., & Taylor, A. K. (2009). The effect of refuting misconceptions in the introductory psychology class. Teaching of Psychology, 36(3), 153-159. doi:10.1080/00986280902959986

Pintrich, P. R., Smith, D. A., Garcia, T., & McKeachie, W. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire. Ann Arbor, MI: University of Michigan.

Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460-475. doi:10.1006/ceps.1994.1033

Standing, L. G., & Huber, H. (2003). Do psychology courses reduce beliefs in psychological myths? Social Behavior & Personality: An International Journal, 31(6), 585-585. doi:10.2224/sbp.2003.31.6.585

Taylor, A. K., & Kowalski, P. (2004). Naïve psychological science: The prevalence, strength, and sources of misconceptions. The Psychological Record, 54(1), 15-25.


Exploring the relationship between awareness, self-regulation, and metacognition

by John Draeger (SUNY Buffalo State)

Recent blog posts have considered the nature of metacognition and metacognitive instruction. Lauren Scharff, for example, defines metacognition as “the intentional and ongoing interaction between awareness and self-regulation” (Scharff, 2015). This post explores the relationship between the elements of this definition.

Scharff observes that a person can recognize that a pedagogical strategy isn’t working without changing her behavior (e.g., someone doesn’t change because she is unaware of alternative strategies), and a person can change her behavior without monitoring its efficacy (e.g., someone tries a technique that she heard about in a workshop without thinking through whether the technique makes sense within a particular learning environment). Scharff argues that a person engaging in metacognition will change her behavior when she recognizes that a change is needed. She will be intentional about when and how to make that change. And she will continue the new behavior only if there’s reason to believe that it is achieving the desired result. Metacognition, therefore, can be found in the interaction between awareness and self-regulated action. Moreover, because learning environments are fluid, the interaction between awareness and self-regulation must be ongoing. This suggests that awareness and self-regulation are necessary for metacognition.

In response, I offered what might seem to be a contrary view (Draeger, 2015). I argued that the term ‘metacognition’ is vague in two ways. First, it is composed of overlapping sub-elements. Second, each of these sub-elements falls along a continuum. For example, metacognitive instructors can be more (or less) intentional, more (or less) informed about evidence-based practice, more (or less) likely to have alternative strategies ready to hand, and more (or less) nimble with regards to when and how to shift strategies based on their “in the moment” awareness of student need. Sub-elements are neither individually necessary nor jointly sufficient for a full characterization of metacognition. Rather, a practice is metacognitive if it has “enough” of the sub-elements and they are far “enough” along the various continua.

Scharff helpfully suggests that metacognition must involve both awareness and action. I would add that awareness can be divided into sub-elements (e.g., reflection, mindfulness, self-monitoring, self-knowledge) and behavior can be divided into sub-elements (e.g., self-regulation, collective actions, institutional mandates). While I suspect that no one of the sub-elements is individually necessary for metacognition, Scharff has correctly identified two broad clusters of elements that are required for metacognition.

As I continue to think through the relationship between awareness and self-regulation, I am reminded of an analogy between physical exercise and intellectual growth. As I have said in a previous post, I am a gym rat. Among other things, I swim several times a week. A few years ago, however, I noticed that my stroke needed refinement. So, I contacted a swimming instructor. She found a handful of areas where I could improve, including my kick and the angle of my arms. As I worked on these items, it was often helpful to focus on my kick without worrying about the angle of my arms and vice versa. With time and effort, I got gradually better. Because my kick had been atrocious, focusing on that one area resulted in dramatic improvement. Because my arm angle hadn’t been all that bad, improvements were far less dramatic. Working on my kick and my arm angle combined to make me a better swimmer. Separating the various elements of my stroke allowed me to identify areas for improvement and allowed me to tackle my problem areas without feeling overwhelmed. However, even after working on the parts, I found that I still needed to put it together. Eventually, I found a swim rhythm that brought elements into alignment.

Likewise, it is often useful to separate elements of our pedagogical practice (e.g., awareness, self-regulation) because separation allows us to identify and target areas in need of improvement. If a person knows what she is doing isn’t working but doesn’t know what else to do, then she might focus on identifying alternative strategies. If a person knows of alternative strategies but does not know when or how to use them, then she might focus on her “in the moment” awareness and her ability to shift to new strategies as needed during class. Focusing on one element can give a person something concrete to work on without feeling overwhelmed by all the other moving parts. The separation is useful, but it is also somewhat artificial. By analogy, my kick and my arm angle are elements of my swim stroke, but they are also parts of an interrelated process. While it is important to improve the parts, the ultimate goal is finding a way to integrate the changes into an effective whole. Metacognitive instructors seek to become more explicit, more intentional, more informed about evidence-based practice, and better able to make “in the moment” adjustments. Focusing on each of these elements can improve practice, but the separation is somewhat artificial because the ultimate goal is to integrate these elements into an effective whole.

References

Draeger, John (2015). “So what if ‘metacognition’ is vague!” Retrieved from https://www.improvewithmetacognition.com/so-what-if-metacognition-is-vague/

Scharff, Lauren (2015). “What do we mean by ‘metacognitive instruction’?” Retrieved from https://www.improvewithmetacognition.com/what-do-we-mean-by-metacognitive-instruction/

 


Metacognition and Specifications Grading: The Odd Couple?

By Linda B. Nilson, Clemson University

More than anything else, metacognition is awareness of what’s going on in one’s mind. This means, first, that a person sizes up a task before beginning it and figures out what kind of a task it is and what strategies to use. Then she monitors her thinking as she progresses through the task, assessing the soundness of her strategies and her success at the end.

So what does this have to do with specs grading?

In specs grading, all assignments and tests are graded pass/fail, credit/no credit, where “pass” means at least B or better work. A student product passes if it conforms to the specifications (specs) that an instructor described in the assignment or test directions. So either the students follow the directions and “get it right,” or the work doesn’t count. Partial credit doesn’t exist.

For the instructor, the main task is laying out the specs. A short reading compliance assignment may have specs as simple as: “You must answer all the study questions, and each answer must be at least 100 words long.” For more substantial assignments, the instructor can detail the “formula” or template of the assignment – that is, the elements and organization of a good literature review, research proposal, press release, or lab report – or provide a list of the questions that she wants students to answer, as for a reflection on a service-learning or group project experience. Especially for formulaic assignments, which so many undergraduate-level assignments are, models and examples bring the specs to life.

The stakes are higher for students than they are in our traditional grading system. With specs grading, it’s all or nothing: there is no sliding by with a careless, eleventh-hour product, because partial credit is no longer a given.

To be successful in a specs-graded course, students have to be aware of their thinking as they complete their assignments and tests. This means that students first have to pay attention to the directions, and the directions are themselves a learning experience when they explicitly lay out the formula for different types of work. Especially when enhanced with models, the specs supply the crucial information that we so often gloss over: exactly what the task involves. Otherwise, how would our students know? With clear specs, they learn what reflection involves, how a literature review is organized, and what a research proposal must contain. Then, during the task, students need to monitor and assess their work to determine whether it is indeed meeting the specs. “Does the depth of my response match the length requirement?” “Am I answering all the reflection questions?” “Am I following the proper organization?” “Have I written all the sections?”

Another distinguishing characteristic of specs grading is the replacement of the point system with “bundles” of assignments and tests. For successfully completing a bundle, students obtain final course grades. And they select the bundle and the grade they are willing to work for. To get a D, the bundle involves relatively little, unchallenging work. For higher grades, the bundles require progressively more work, more challenging work, or both. In addition, each bundle is associated with a set of learning outcomes, so a given grade indicates the outcomes a student has achieved.

If students fail to self-monitor and self-assess, they risk receiving no credit for their work and, given that it is part of a bundle, getting a lower grade in the course. And their grade is important for a whole new reason: because they chose the grade they wanted/needed and its accompanying workload. This element of choice and volition increases students’ sense of responsibility for their performance.

With specs grading, students do get limited opportunities to revise an unacceptable piece of work or to get a 24-hour extension on an assignment. These opportunities are represented by virtual tokens that students receive at the beginning of the course. Three is a reasonable number. This way, the instructor doesn’t have to screen excuses, requests for exceptions, and the like. She also has the option of giving students chances to earn tokens and rewarding those with the most tokens at the end of the course.

Specs grading solves many of the problems that our traditional grading system has bred while strengthening students’ metacognition and sense of ownership of their grades. Details on using and transitioning to this grading system are in my 2015 book, Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time (Sterling, VA: Stylus).


So what if ‘metacognition’ is vague!

by John Draeger (SUNY Buffalo State)

When Lauren Scharff invited me to join Improve with Metacognition last year, I was only vaguely aware of what ‘metacognition’ meant. As a philosopher, I knew about various models of critical thinking and I had some inkling that metacognition was something more than critical thought, but I could not have characterized the extra bit. In her post last week, Scharff shared a working definition of ‘metacognitive instruction’ developed by a group of us involved as co-investigators on a project (Scharff, 2015). She suggested that it is the “intentional and ongoing interaction between awareness and self-regulation.” This is better than anything I had a year ago, but I want to push the dialogue further.

I’d like to take a step back to consider the conceptual nature of metacognition by applying an approach in legal philosophy used to analyze terms with conceptual vagueness. While clarity is desirable, Jeremy Waldron argues that there are limits to the level of precision that legal discourse can achieve (Waldron, 1994). This is not an invitation to be sloppy, but rather an acknowledgement that certain legal concepts are inescapably vague. According to Waldron, a concept can be vague in at least two ways. First, particular instantiations can fall along a continuum (e.g., actions can be more or less reckless, negligent, excessive, unreasonable). Second, some concepts can be understood in terms of overlapping features. Democracies, for example, can be characterized by some combination of formal laws, informal patterns of participation, shared history, common values, and collective purpose. These features are neither individually necessary nor jointly sufficient for a full characterization of the concept. Rather, a system of government counts as democratic if it has “enough” of the features. A particular democratic system may look very different from its democratic neighbor. This is in part because particular systems will instantiate the features differently and in part because particular systems might be missing some feature altogether. Moreover, democratic systems can share features with other forms of government (e.g., formal laws, common values, and collective purpose) without there being a clear boundary between democratic and non-democratic forms of government. According to Waldron, there can be vagueness within the concept of democracy itself and in the boundaries between it and related concepts.

While some might worry that the vagueness of legal concepts is a problem for legal discourse, Waldron argues that the lack of precision is desirable because it promotes dialogue. For instance, when considering whether some particular instance of forceful policing should be considered ‘excessive,’ we must consider the conditions under which force is justified and the limits of acceptability. Answering these questions will require exploring the nature of justice, civil rights, and public safety. Dialogue is valuable, in Waldron’s view, because it brings clarity to a broad constellation of legal issues even though clarity about any one of the constituents requires thinking carefully about the other elements in the constellation.

Is ‘metacognition’ vague in the ways that legal concepts can be vague? To answer this question, consider some elements in the metacognitive constellation as described by our regular Improve with Metacognition blog contributors. Self-assessment, for example, is a feature of metacognition (Fleisher, 2014; Nuhfer, 2014). Note, however, that it is vague. First, self-assessments may fall along a continuum (e.g., students and instructors can be more or less accurate in their self-assessments). Second, self-assessment is composed of a variety of activities (e.g., predicting exam scores, tracking gains in performance, understanding personal weak spots, and understanding one’s own level of confidence, motivation, and interest). These activities are neither individually necessary nor jointly sufficient for a full characterization of self-assessment. Rather, students or instructors are engaged in self-assessment if they engage in “enough” of these activities. Combining these two forms of vagueness, each of the overlapping features can itself fall along a continuum (e.g., more or less accurate at tracking performance or understanding motivations). Moreover, self-assessment shares features with other related concepts such as self-testing (Taraban, Paniukov, and Kiser, 2014), mindfulness (Was, 2014), calibration (Gutierrez, 2014), and growth mindsets (Peak, 2015). All are part of the metacognitive constellation of concepts. Each of these concepts is individually vague in both senses described above, and the boundaries between them are inescapably fuzzy. Turning to Scharff’s description of metacognitive instruction, all four constituent elements (i.e., ‘intentional,’ ‘ongoing interaction,’ ‘awareness,’ and ‘self-regulation’) are also vague in both senses described above. Thus, I believe that ‘metacognition’ is vague in the ways legal concepts are vague.

However, if Waldron is right about the benefits of discussing and grappling with vague legal concepts (and I think he is), and if the analogy between vague concepts and the term ‘metacognition’ holds (and I think it does), then vagueness in this case should be perceived as desirable because it facilitates broad dialogue about teaching and learning.

As Improve with Metacognition celebrates its first birthday, I want to thank all those who have contributed to the conversation so far. Despite the variety of perspectives, each contribution helps us think more carefully about what we are doing and why. The ongoing dialogue can improve our metacognitive skills and enhance our teaching and learning. As we move into our second year, I hope we can continue exploring the rich nature of the metacognitive constellation of ideas.

References

Fleisher, Steven (2014). “Self-assessment, it’s a good thing to do.” Retrieved from https://www.improvewithmetacognition.com/self-assessment-its-a-good-thing-to-do/

Gutierrez, Antonio (2014). “Comprehension monitoring: the role of conditional knowledge.” Retrieved from https://www.improvewithmetacognition.com/comprehension-monitoring-the-role-of-conditional-knowledge/

Nuhfer, Ed (2014). “Self-assessment and the affective quality of metacognition: Part 1 of 2.” Retrieved from https://www.improvewithmetacognition.com/self-assessment-and-the-affective-quality-of-metacognition-part-1-of-2/

Peak, Charity (2015). “Linking mindset to metacognition.” Retrieved from https://www.improvewithmetacognition.com/linking-mindset-metacognition/

Scharff, Lauren (2015). “What do we mean by ‘metacognitive instruction’?” Retrieved from https://www.improvewithmetacognition.com/what-do-we-mean-by-metacognitive-instruction/

Taraban, Roman, Paniukov, Dmitrii, and Kiser, Michelle (2014). “What metacognitive skills do developmental college readers need?” Retrieved from https://www.improvewithmetacognition.com/what-metacognitive-skills-do-developmental-college-readers-need/

Waldron, Jeremy (1994). “Vagueness in Law and Language: Some Philosophical Issues.” California Law Review 83(2): 509-540.

Was, Chris (2014). “A mindfulness perspective on metacognition.” Retrieved from https://www.improvewithmetacognition.com/a-mindfulness-perspective-on-metacognition/

 


What Do We Mean by “Metacognitive Instruction”?

by Lauren Scharff (U.S. Air Force Academy*) 

Many of you are probably aware of the collaborative, multi-institutional metacognitive instruction research project that we initiated through the Improve with Metacognition site.  This project has been invigorating for me on many levels. First, through the process of developing the proposal, I was mentally energized. Several of us had long, thoughtful conversations about what we meant when we used the term “metacognitive instruction” and how these ideas about instruction “mapped” to the concept of “metacognitive learning.”  These discussions were extensions of some early blog post explorations, What do we mean when we say “Improve with metacognition”? (Part 1 and Part 2). Second, my involvement in the project led me to (once again) examine my own instruction. Part of this self-examination happened as a natural consequence of the discussions, but also it’s happening in an ongoing manner as I participate in the study as an intervention participant. Good stuff!

For this post, I’d like to share a bit more about our wrangling with what we meant by metacognitive instruction as we developed the project, and I invite you to respond and share your thoughts too.

Through our discussions, we ultimately settled on the following description of metacognitive instruction:

Metacognitive instructors are aware of what they are doing and why. Before each lesson, they have explicitly considered student learning goals and multiple strategies for achieving those goals. During the lesson, they actively monitor the effectiveness of those strategies and student progress towards learning goals. Through this pre-lesson strategizing and during-lesson monitoring, awareness, a key component of metacognition, is developed; however, awareness is not sufficient for metacognition. Metacognitive instructors also engage in self-regulation. They have the ability to make “in-the-moment,” intentional changes to their instruction during the lesson based on a situational awareness of student engagement and achievement of the learning objectives — this creates a responsive and customized learning experience for the student.

One of the questions we pondered (and we’d love to hear your thoughts on this point), is how these different constructs were related and / or were distinct. We came to the conclusion that there is a difference between reflective teaching, self-regulated teaching, and metacognitive instruction/teaching.

More specifically, a person can reflect and become aware of their actions and their consequences, but at the same time not self-regulate to modify behaviors and change consequences, especially in the moment. A person can also self-regulate / try a new approach / be intentional in one’s choice of actions, but not be tuned in / aware of how it’s going at the moment with respect to the success of the effort. (For example, an instructor might commit to a new pedagogical approach because she learned about it from a colleague. She can implement that new approach despite some personal discomfort due to changing pedagogical strategies, but without conscious and intentional awareness of how well it fits her lesson objectives or how well it’s working in the moment to facilitate her students’ learning.) Metacognition combines the awareness and self-regulation pieces and increases the likelihood of successfully accomplishing the process (teaching, learning, or other process).

Thus, compared to other writings we’ve seen, we are more explicitly proposing that metacognition is the intentional and ongoing interaction between awareness and self-regulation. Others have generally made this claim about metacognitive learning without using the terms as explicitly. For example, “Simply possessing knowledge about one’s cognitive strengths or weaknesses and the nature of the task without actively utilizing this information to oversee learning is not metacognitive.” (Livingston, 1997). But, in other articles on metacognition and on self-regulated learning, it seems like perhaps the metacognitive part is the “thinking or awareness” part and the self-regulation is separate.

What do you think?

——————
Livingston, J. A. (1997). Metacognition: An Overview. Unpublished manuscript, State University of New York at Buffalo. http://gse.buffalo.edu/fas/shuell/cep564/metacog.htm

* Disclaimer: The views expressed in this document are those of the authors and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.


Executive Function: Can Metacognitive Awareness Training Improve Performance?

by Antonio Gutierrez, Georgia Southern University

In a recent meta-analysis of 67 research studies that utilized an intervention targeted at enhancing metacognitive awareness, Jacob and Parkinson (in press) argue that metacognitive interventions aimed at improving executive function processes are not as effective at improving student achievement as once believed by scholars and practitioners alike. In essence, the evidence in support of robust effects of these types of interventions in improving achievement is inconclusive. While descriptive research studies continue to report high associations between metacognitive awareness and performance or achievement measures, Jacob and Parkinson argue that the experimental evidence supporting a strong role of metacognitive training in improving student performance is scant. I have recently pondered a similar dilemma in research on the effect of metacognitive monitoring training on students’ performance, confidence judgments, and especially calibration. The literature on these topics converges on the finding that metacognitive monitoring training improves performance and confidence in performance judgments but not necessarily calibration (see, e.g., Bol et al., 2005; Gutierrez & Schraw, 2015; Hacker et al., 2008).

While Jacob and Parkinson’s meta-analysis is illuminating, I wonder whether, as in the calibration literature, the conclusion that executive function interventions are not effective at improving achievement may be due to very different conceptualizations of the constructs under investigation. In the case of calibration, the mixed findings may stem from the fact that the metacognitive monitoring interventions were likely not targeting the same thing. For instance, some interventions may have targeted a reduction in calibration errors (overconfidence and underconfidence), others may have targeted improvement in calibration accuracy, and yet others may have targeted both, whether intentionally or unintentionally. Because these interventions were targeting different aspects of calibration, the inconclusive findings could be due to a confounding of these various dimensions … comparing apples to oranges, if you will. Could the lack of robust effects of executive function interventions on achievement be due to a similar phenomenon? What if these studies were not targeting the same executive function processes, in which case they would not be as directly comparable as they appear at first glance? Jacob and Parkinson’s (in press) study may lead some to believe that there is little to be gained from investing time and effort in executive function interventions. Before we abandon these interventions, however, perhaps we should develop finer-grained interventions that target specific executive function processes rather than taking a more general approach.
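To make the apples-to-oranges point concrete, here is a minimal Python sketch (the data are hypothetical; the two measures follow the standard signed-bias and absolute-accuracy definitions of calibration) showing how an intervention can eliminate over-/underconfidence entirely without improving calibration accuracy at all:

```python
# Two common calibration measures over paired confidence judgments and
# item scores, both on a 0-1 scale. Hypothetical illustration only.

def calibration_bias(confidence, performance):
    """Mean signed difference: > 0 means overconfidence, < 0 underconfidence."""
    return sum(c - p for c, p in zip(confidence, performance)) / len(confidence)

def calibration_error(confidence, performance):
    """Mean absolute difference: 0 means perfect calibration accuracy."""
    return sum(abs(c - p) for c, p in zip(confidence, performance)) / len(confidence)

performance = [1.0, 0.0, 1.0, 0.0]    # a student who is right on half the items

overconfident = [0.9, 0.9, 0.9, 0.9]  # high confidence on every item
print(calibration_bias(overconfident, performance))   # ≈ 0.4 (overconfident)
print(calibration_error(overconfident, performance))  # ≈ 0.5

hedged = [0.5, 0.5, 0.5, 0.5]         # "debiased" to midpoint confidence
print(calibration_bias(hedged, performance))          # ≈ 0.0 (bias gone)
print(calibration_error(hedged, performance))         # ≈ 0.5 (accuracy unchanged)
```

An intervention that merely shifts students toward hedged midpoint confidence removes the signed bias but leaves the absolute error untouched, which is one way studies that all claim to “target calibration” could reach contradictory conclusions.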

References
Bol, L., Hacker, D. J., O’Shea, P., & Allen, D. (2005). The influence of overt practice, achievement level, and explanatory style on calibration accuracy and performance. The Journal of Experimental Education, 73, 269-290.

Gutierrez, A. P., & Schraw, G. (2015). Effects of strategy training and incentives on students’ performance, confidence, and calibration. The Journal of Experimental Education: Learning, Instruction, and Cognition. Advance online publication. doi: 10.1080/00220973.2014.907230

Hacker, D. J., Bol, L., & Bahbahani, K. (2008). Explaining calibration accuracy in classroom contexts: The effects of incentives, reflection, and explanatory style. Metacognition Learning, 3, 101-121.

Jacob, R., & Parkinson, J. (in press). The potential for school-based interventions that target executive function to improve academic achievement: A review. Review of Educational Research. Advance online publication. doi: 10.3102/0034654314561338


The Metacognitive Syllabus!

By Aaron S. Richmond, Ph.D.
Metropolitan State University of Denver

This blog may be like no other in Improve with Metacognition (IwM). I am asking you, the readers, to actively participate. Yes, I mean YOU, YOU, and YOU☺. But let me clarify—I do not ask rhetorical questions. As such, please respond using the comment function in IwM or Tweet your answers to the three questions in this blog.

Question #1: How can we use the syllabus as a metacognitive tool?
As delineated by scores of researchers and teachers, the syllabus can be many things. The syllabus can be a contract (Slattery & Carlson, 2005); contract elements typically include policies on attendance, late work, ethics, grading, and the like. The syllabus can also be a permanent record (Parkes & Harris, 2002); permanent record elements include course objectives, assessment procedures, the course description, and course content. The syllabus is also a communication device that can set the tone for your class and is an opportunity to gain your students’ trust and respect by modeling your pedagogical beliefs (Bain, 2004).

Because the syllabus can be many things, it is very possible that it can also serve as a metacognitive tool. Several researchers suggest that the syllabus is a cognitive map (Parkes & Harris, 2002) and a learning tool (Matejka & Kurke, 1994). These elements typically include a description of how to succeed in the course, common pitfalls and misconceptions that occur in the course, campus resources that can assist students in learning (e.g., the writing center), a teaching philosophy, and embedded explanations of class assignments, structure, and student learning. If we consider the syllabus in this context, I believe that we can easily incorporate metacognitive elements. For instance, in my personal teaching philosophy, I specifically mention my focus on improving metacognition. Another example is that I include at least one student learning objective that is metacognitively based, with assignments designed to assess this objective. For example, “Students will understand what metacognition is and how it improves their own learning” (assessed by experiential learning assignment 1 and the comprehensive exam), or “Students will understand what it means to develop a culture of metacognition in the classroom” (assessed by classroom observation and the mid-term exam). Finally, I actively incorporate course content on learning strategies and the metacognitive explanations for those strategies, which sets the tone for the importance of metacognition in the class.

Question #2: How are you using the syllabus as a metacognitive tool?
I really want to hear from you on how you may be using the syllabus as a metacognitive tool. For example, what specific statements do you include related to metacognition goals? What assignments do you mention that link to metacognitive development?

Question #3: If the syllabus can be used as a metacognitive tool, how do we know it is effective?
What is your answer to this question? My answer centers on the Scholarship of Teaching and Learning. That is, we don’t yet have empirical evidence to say that the syllabus is a metacognitive tool. That doesn’t mean that it can’t be, or isn’t already, in practice. But I think you (we) should take up this challenge and investigate this issue. The syllabus can have a profound impact on student learning, instruction, and student ratings of instruction (Richmond, Becknell, Slattery, Morgan, & Mitchell, 2015; Saville, Zinn, Brown, & Marchuk, 2010), so let’s investigate how to improve the syllabus through metacognition.


 

References
Bain, K. (2004). What the best college teachers do. Cambridge, MA: Harvard University Press.
Matejka, K., & Kurke, L. B. (1994). Designing a great syllabus. College Teaching, 42(3), 115-117. doi:10.1080/87567555.1994.9926838
Parkes, J., & Harris, M. B. (2002). The purposes of a syllabus. College Teaching, 50(2), 55-61. doi:10.1080/87567550209595875
Richmond, A. S., Becknell, J., Slattery, J., Morgan, R., & Mitchell, N. (2015, August). Students’ perceptions of a student-centered syllabus: An experimental analysis. Poster presented at the annual meeting of the American Psychological Association, Toronto, Canada.
Saville, B. K., Zinn, T. E., Brown, A. R., & Marchuk, K. A. (2010). Syllabus detail and students’ perceptions of teacher effectiveness. Teaching of Psychology, 37, 186-189. doi:10.1080/00986283.2010.488523
Slattery, J. M., & Carlson, J. F. (2005). Preparing an effective syllabus: Current best practices. College Teaching, 53, 159-164. doi:10.3200/CTCH.53.4.159-164


Exploring the Developmental Progression of Metacognition

by Sarah L. Bunnell at Ohio Wesleyan University (slbunnel@owu.edu)

As a developmental psychologist, it is difficult to consider student learning (and my own learning as well) without a strong nod to developmental process. Metacognition, as has been described by many others on this blog and in other venues (e.g., Baxter-Magolda, 1992; Flavell, 1979; Kuhn, 1999; Perry, 1970), requires the cognitive skills of reflection, connection, evaluation, and revision. Metacognitive acts are initially quite cognitively demanding, and like most conscious cognitive processes, require practice to become more automatic or at least less consuming of cognitive resources. In addition to examining how students acquire the tools required for the hard work of metacognition, I have also been interested in whether there are developmental differences in students’ ability to make connections and reflections across the college years.

I recently conducted two examinations of metacognitive development; the first project involved my Introductory Psychology course, which enrolls primarily first year students, and the second project involved my Adolescent Psychology course, which enrolls primarily sophomore-level students. Below, I will provide a brief summary of each study and then discuss what I see as some take-home points and next-steps for inquiry.

In the Introductory Psychology course (n = 45), each student completed a metacognitive portfolio (hosted through the MERLOT website; http://eportfolio.merlot.org/) throughout the semester. In this portfolio, students responded to a series of prompts to reflect on their current thinking about course concepts and the ways in which these concepts play out in their own lives. At the end of the semester, students were asked to review their responses, identify any responses that they would now change, and explain why they would now alter their responses. They were also asked to describe how they thought their thinking had changed over the course of the semester.

Given the large body of work on the learning benefits associated with metacognition, I was not surprised that students who wanted to change a greater number of their responses performed significantly better on the final exam than did students who identified fewer points of change. More surprising, however, was the finding that students who did well on the final exam were significantly more likely to have endorsed changes in their thinking about themselves as opposed to changes in their thinking about others. A year after this class ended, I contacted these same students again, and I asked them to reflect on their thinking at the end of the course relative to their thinking about Psychology and themselves now. Of note, an analysis of these responses indicated that the students who were high performers on the final exam and in the course overall were no longer reporting many self-related metacognitive links. Instead, these students were significantly more likely to say that they now had a greater understanding of others than they did before. Thus, there was a powerful shift over time in the focus of metacognition from self to other.

In my Adolescent Psychology course (n = 35), students conduct a semi-structured interview of an adolescent, transcribe the interview, and then analyze the interview according to developmental theory. This assignment is designed to foster connection and application, and I have compelling evidence indicating that this experience enhances learning. What was less clear to me, however, was whether participating in this course and in the interview paper activity contributes to students’ metacognitive awareness of self. To address this question, I implemented a pre-post test design. On the first day of class, students were asked, “Are you currently an adolescent? Please explain your answer.” To answer this question, one must consider the multiple ways in which we may conceptualize adolescence (i.e., age, legal responsibility, physical maturity, financial responsibility); as you can see, the lens we apply to ourselves and others leads to quite varied views of when adolescence ends and adulthood begins! At the end of the term, students were again asked the same question, plus an additional prompt that asked them to reflect on how their thinking about themselves had changed across the semester.

On Day 1, 17 students endorsed currently being an adolescent, 16 students reported no longer being an adolescent, and 2 students said they did not feel that they had enough information to respond. It is important to note that all students in the course were between the ages of 18 and 21 years and as such, all were technically late adolescents. On the last day of class, 21 class members labeled themselves as adolescents, 4 students said that they did not consider themselves to be adolescents, and 5 said that they were an adolescent in some contexts of their life and not others. As an example of a contextual way of thinking, one student said: “I believe that neurologically I am still an adolescent because I am below the age of 25 and therefore do not have a fully developed frontal lobe, which can alter decision making, and from a Piagetian standpoint I believe I am out of adolescence because I have reached the formal operational stage of development and possibly even beyond that. Overall though, I believe that I can’t fully define myself as an adolescent or not because there are so many factors in play.”

I examined these group-level differences in terms of course performance from a number of angles, and two interesting patterns emerged. First, students who adopted a more context-dependent view of self did significantly better on the application-based, cumulative final exam than did students who held an absolute view of self. This first finding is consistent with the work of Marcia Baxter-Magolda (1992), William Perry (1970), and others, which views contextual knowing as a complicated and mature form of meta-knowing. Second, students who changed their view of themselves across the semester conducted significantly more advanced analyses of the interview content relative to those whose view of self did not change. Thus, the students who displayed greater advances in metacognition were better able to apply these reflections and connections to themselves and, in turn, to the lives of others.

Taken together, this work suggests to me that the ability to engage in metacognitive reflection and connection may initially develop in a self-focused way and then, following additional experience and metacognitive skill attainment, extend beyond the self. Note that I am careful to suggest that the capacity for other-related connection emerges from experience and the acquisition of lower-level preparatory skills, rather than from the mere passage of time, even though there is clearly a temporal dimension at play. As Donald Baer warned us, age is at best a proxy for development; at the most extreme, development is “age-irrelevant” (Baer, 1970). Why do students demonstrate improved metacognition across the college years? It is certainly not merely because the days have ticked by. Instead, these advances in thinking, as well as students’ willingness to refine their thinking about the self, are supported and constructed by the range of experiences and challenges that their college experience affords. To understand age- or college-level changes in thinking, therefore, we should focus on the developmental tasks and experiences that support this development. I hope that my lines of inquiry contribute in some small part to this process.

References

Baer, D. M. (1970). An age-irrelevant concept of development. Merrill-Palmer Quarterly, 16, 238-245.

Baxter Magolda, M. B. (1992). Students’ epistemologies and academic experiences: Implications for pedagogy. Review of Higher Education, 15, 265-287.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906 – 911.

Kuhn, D. (1999). A developmental model of critical thinking. Educational Researcher, 28, 16-25.

Perry, W. G., Jr. (1970). Forms of intellectual and ethical development in the college years: A scheme. New York: Holt, Rinehart, & Winston.


Who says Metacognition isn’t Sexy?

By Michael J. Serra at Texas Tech University

This past Sunday, you might have watched “The 87th Academy Awards” (i.e., “The Oscars”) on television. Amongst the nominees for the major awards were several films based on true events and real-life people, including two films depicting key events in the lives of scientists Stephen Hawking (The Theory of Everything) and Alan Turing (The Imitation Game).

There are few things in life that I am sure of, but one thing I can personally guarantee is this: No film studio will ever make a motion picture about the life of your favorite metacognition researcher. Believe it or not, the newest issue of Entertainment Weekly does not feature leaked script details about an upcoming film chronicling how J. T. Hart came up with the idea to study people’s feelings of knowing (Hart, 1967), and British actors are not lining up to depict John Flavell laying down the foundational components for future theory and research on metacognition (Flavell, 1979). Much to my personal dismay, David Fincher hasn’t returned my calls regarding the screenplay I wrote about that time Thomas Nelson examined people’s judgments of learning at extreme altitudes on Mt. Everest (Nelson et al., 1990).

Just as film studios seem to lack interest in portraying metacognition research on the big screen, our own students sometimes seem uninterested in anything we might tell them about metacognition. Even the promise of improving their grades sometimes doesn’t seem to interest them! Why not?

One possibility, as I learned from a recent blog post by organic-chemistry professor and tutor “O-Chem Prof,” is that the term “metacognition” might simply not be sexy to our students (O-Chem Prof, 2015). He suggests that we instead refer to the concept as “sexing up your noodle”.

Although the idea of changing the name of my graduate course on the topic to “PSY 6969: Graduate Seminar in Sexing-up your Noodle” is highly tempting, I do not think that the problem is completely one of branding or advertising. Rather, regardless of what we call metacognition (or whether or not we even put a specific label on it for our students), there are other factors that we know play a crucial role in whether or not students will actually engage in self-regulated learning behaviors such as the metacognitive monitoring and control of their learning. Specifically, Pintrich and De Groot (1990; see Miltiadou & Savenye, 2003 for a review) identified three major factors that determine students’ motivation to learn that I suggest will also predict their willingness to engage in metacognition: value, expectancy, and affect.

The value component predicts that students will be more interested and motivated to learn about topics that they see value in learning. If they are struggling to learn a valued topic, they should be motivated to engage in metacognition to help improve their learning about it. A wealth of research demonstrates that students’ values and interest predict their motivation, learning, and self-regulation behaviors (e.g., Pintrich & De Groot, 1990; Pintrich et al., 1994; Wolters & Pintrich, 1998; for a review, see Schiefele, 1991). Therefore, when students do not seem to care about engaging in metacognition to improve their learning, it might not be that metacognition is not “sexy” to them; it might be that the topic itself (e.g., organic chemistry) is not sexy to them (sorry, O-Chem Prof!).

The expectancy component predicts that students will be more motivated to engage in self-regulated learning behaviors (e.g., metacognitive control) if they believe that their efforts will have positive outcomes (and won’t be motivated to do so if they believe their efforts will not have an effect). Some students (entity theorists) believe that they cannot change their intelligence through studying or practice, whereas other students (incremental theorists) believe that they can improve their intelligence (Dweck et al., 1995; see also Wolters & Pintrich, 1998). Further, entity theorists tend to rely on extrinsic motivation and to set performance-based goals, whereas incremental theorists tend to rely on intrinsic motivation and to set mastery-based goals. Compared to entity theorists, students who are incremental theorists earn higher grades and are more likely to persevere in the face of failure or underperformance (Duckworth & Eskreis-Winkler, 2013; Dweck & Leggett, 1988; Romero et al., 2014; see also Pintrich, 1999; Sungur, 2007). Fortunately, interventions have been successful at changing students to an incremental mindset, which in turn improves their learning outcomes (Aronson et al., 2002; Blackwell et al., 2007; Good et al., 2003; Hong et al., 1999).

The affective component predicts that students will be hampered by negative thoughts about learning or anxiety about exams (e.g., stereotype threat; test anxiety). Unfortunately, past research indicates that students who experience test anxiety will struggle to regulate their learning and ultimately end up performing poorly despite their efforts to study or to improve their learning (e.g., Bandura, 1986; Pintrich & De Groot, 1990; Pintrich & Schunk, 1996; Wolters & Pintrich, 1998). These students in particular might benefit from instruction on self-regulation or metacognition, as they seem to be motivated and interested to learn the topic at hand, but are too focused on their eventual test performance to study efficiently. At least some of this issue might be improved if students adopt a mastery mindset over a performance mindset, as increased learning (rather than high grades) becomes the ultimate goal. Further, adopting an incremental mindset over an entity mindset should reduce the influence of beliefs about lack of raw ability to learn a given topic.

In summary, although I acknowledge that metacognition might not be particularly “sexy” to our students, I do not think that is the reason our students often seem uninterested in engaging in metacognition to help them understand the topics in our courses or to perform better on our exams. If we want our students to care about their learning in our courses, we need to make sure that they feel the topic is important (i.e., that the topic itself is sexy), we need to provide them with effective self-regulation strategies or opportunities (e.g., elaborative interrogation, self-explanation, or interleaved practice questions; see Dunlosky et al., 2013) and help them feel confident enough to employ them, we need to work to reduce test anxiety at the individual and group/situation level, and we need to convince our students to adopt a mastery (incremental) mindset about learning. Then, perhaps, our students will find metacognition to be just as sexy as we think it is.


References

Aronson, J., Fried, C. B., & Good, C. (2002). Reducing the effects of stereotype threat on African American college students by shaping theories of intelligence. Journal of Experimental Social Psychology, 38, 113-125. doi:10.1006/jesp.2001.1491

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Blackwell, L. S., Trzesniewski, K. H., & Dweck, C. S. (2007). Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention. Child Development, 78, 246-263. doi: 10.1111/j.1467-8624.2007.00995.x

Duckworth, A., & Eskreis-Winkler, L. (2013). True Grit. Observer, 26. http://www.psychologicalscience.org/index.php/publications/observer/2013/april-13/true-grit.html

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14, 4-58. doi: 10.1177/1529100612453266

Dweck, C. S., Chiu, C. Y., & Hong, Y. Y. (1995). Implicit theories and their role in judgments and reactions: A world from two perspectives. Psychological Inquiry, 6, 267-285. doi: 10.1207/s15327965pli0604_1

Dweck, C. S., & Leggett, E. L. (1988). A social-cognitive approach to motivation and personality. Psychological Review, 95, 256-273. doi: 10.1037/0033-295X.95.2.256

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911. doi: 10.1037/0003-066X.34.10.906

Good, C., Aronson, J., & Inzlicht, M. (2003). Improving adolescents’ standardized test performance: An intervention to reduce the effect of stereotype threat. Applied Developmental Psychology, 24, 645-662. doi: 10.1016/j.appdev.2003.09.002

Hart, J. T. (1967). Memory and the memory-monitoring process. Journal of Verbal Learning and Verbal Behavior, 6, 685-691. doi: 10.1016/S0022-5371(67)80072-0

Hong, Y., Chiu, C., Dweck, C. S., Lin, D., & Wan, W. (1999). Implicit theories, attributions, and coping: A meaning system approach. Journal of Personality and Social Psychology, 77, 588-599. doi: 10.1037/0022-3514.77.3.588

Miltiadou, M., & Savenye, W. C. (2003). Applying social cognitive constructs of motivation to enhance student success in online distance education. AACE Journal, 11, 78-95. http://www.editlib.org/p/17795/

Nelson, T. O., Dunlosky, J., White, D. M., Steinberg, J., Townes, B. D., & Anderson, D. (1990). Cognition and metacognition at extreme altitudes on Mount Everest. Journal of Experimental Psychology: General, 119, 367-374.

O-Chem Prof. (2015, Jan 7). Our Problem with Metacognition is Not Enough Sex. [Web log]. Retrieved from http://phd-organic-chemistry-tutor.com/our-problem-with-metacognition-not-enough-sex/

Pintrich, P. R. (1999). The role of motivation in promoting and sustaining self-regulated learning. International Journal of Educational Research, 31, 459-470. doi: 10.1016/S0883-0355(99)00015-4

Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33-40. doi: 10.1037/0022-0663.82.1.33

Pintrich, P. R., Roeser, R., & De Groot, E. V. (1994). Classroom and individual differences in early adolescents’ motivation and self-regulated learning. Journal of Early Adolescence, 14, 139-161. doi: 10.1177/027243169401400204

Pintrich, P. R., & Schunk D. H. (1996). Motivation in education: Theory, research, and applications. Englewood Cliffs, NJ: Merrill/Prentice Hall.

Romero, C., Master, A., Paunesku, D., Dweck, C. S., & Gross, J. J. (2014). Academic and emotional functioning in middle school: The role of implicit theories. Emotion, 14, 227-234. doi: 10.1037/a0035490

Schiefele, U. (1991). Interest, learning, and motivation. Educational Psychologist, 26, 299-323. doi: 10.1080/00461520.1991.9653136

Sungur, S. (2007). Modeling the relationships among students’ motivational beliefs, metacognitive strategy use, and effort regulation. Scandinavian Journal of Educational Research, 51, 315-326. doi: 10.1080/00313830701356166

Wolters, C. A., & Pintrich, P. R. (1998). Contextual differences in student motivation and self-regulated learning in mathematics, English, and social studies classrooms. Instructional Science, 26, 27-47. doi: 10.1023/A:1003035929216


Linking Mindset to Metacognition

By Charity Peak, Ph.D. (U. S. Air Force Academy)

As part of our institution’s faculty development program, we are currently reading Carol Dweck’s Mindset: The New Psychology of Success. Even though the title and cover allude to a pop-psychology book, Dweck’s done a fabulous job of pulling together decades of her scholarly research on mindsets into a layperson’s text.

After announcing the book as our faculty read for the semester, one instructor lamented that she wished we had selected a book on the topic of metacognition. We have been exploring metacognition as a theme this year through our SoTL Circles and our participation in the multi-institutional Metacognitive Instruction Project. My gut reaction was, “But Mindset is about metacognition!” Knowing your own mindset requires significant metacognition about your own thinking and attitudes about learning. And better yet, understanding and recognizing mindsets in your students helps you to identify and support their development of mindsets that will help them to be successful in school and life.

If you haven’t read the book, below are some very basic distinctions between the fixed and growth mindsets that Dweck (2006) discovered in her research and outlines eloquently in her book:

Fixed Mindset
Intelligence is static. Leads to a desire to look smart and therefore a tendency to:

  • avoid challenges
  • give up easily due to obstacles
  • see effort as fruitless
  • ignore useful feedback
  • be threatened by others’ success

Growth Mindset
Intelligence can be developed. Leads to a desire to learn and therefore a tendency to:

  • embrace challenges
  • persist despite obstacles
  • see effort as a path to mastery
  • learn from criticism
  • be inspired by others’ success

 

What does this mean for metacognition? Dweck points out that people go through life with fixed mindsets without even realizing they are limiting their own potential. For example, students will claim they are “not good at art,” “can’t do math,” “don’t have a science brain.” These mindsets restrict their ability to see themselves as successful in these areas. In fact, even when instructors attempt to refute these statements, the mindsets are so ingrained that they are extremely difficult to overcome.

What’s an instructor to do? Help students develop metacognition about their self-limiting beliefs! Dweck offers a very simple online assessment on her website that takes about 5 minutes to complete. Instructors can easily suggest that students take the assessment, particularly in subjects where these fallacious self-limiting attitudes abound, as a pre-emptive way to begin a course. The results would help instructors identify who might need the most assistance in overcoming mental barriers throughout the course. Instructors can also make a strong statement to the class early in the semester that students should fight the urge to succumb to these limiting beliefs about a particular subject area (such as art or math). As Dweck has shown through her research, people can actually become artistic if taught the skills through learnable components (pp. 68-69). Preconceived notions of talent in a wide variety of areas have been refuted time and again through research. Instead, talent is likely a cover for hard work, perseverance, and overcoming obstacles. But if we don’t share those insights with students, they will never develop metacognition about their own self-limiting – and frankly mythical – belief systems.

Inspired but wish you knew how to apply it to your own classes? A mere Google search on metacognition and mindset will yield a wealth of resources, but I particularly appreciate Frank Noschese’s blog on creating a metacognition curriculum. He started his physics course by having students take a very simple survey regarding their attitudes toward science. He then shared a short video segment called “Grow Your Brain” from the episode Changing Your Mind (jump to 13:20) in the Scientific American Frontiers series from PBS. Together, he and his students began a journey of moving toward a growth mindset in science. Through an intentional metacognition lesson, he sent a very clear message to his students that “I can’t” would not be tolerated in his course. He set them up for success by demonstrating clearly that everyone can learn physics if they put their minds (or mindsets) to it.

Metacognition about mindsets offers instructors an opportunity to give students the gift of a lifetime – the belief that they can overcome any learning obstacles if they just persevere, that their intelligence is not fixed but actually malleable, that learning is sometimes hard but not impossible! When I reflect on why I am so deeply dedicated to education as a profession, it is my commitment to helping students see themselves using a growth mindset. Helping them to change their mindsets can change their future, and metacognition is the first step on that journey!

 

References:

“Changing Your Mind.” (11/21/00). Scientific American Frontiers. Boston: Chedd-Angier Production Co. Retrieved from http://chedd-angier.com/frontiers/season11.html

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Ballantine Books.

Noschese, F. (September 10, 2012). Metacognition curriculum (Lesson 1 of ?). Retrieved from https://fnoschese.wordpress.com/2012/09/10/metacognition-curriculum-lesson-1-of/

 


Fostering Metacognition: Right-Answer Focused versus Epistemologically Transgressive

by Craig E. Nelson at Indiana University (Contact: nelson1@indiana.edu)

I want to enrich some of the ideas posted here by Ed Nuhfer (2014 a, b, c and d) and Lauren Scharff (2014). I will start by emphasizing some key points made by Nuhfer (2014 a):

  • Instead of focusing on more powerful ways of thinking, most college instruction has thus far focused on information, concepts, and discipline-specific skills. I will add that even when concepts and skills are addressed, they, too, are often treated as memorizable information by both students and faculty. Often little emphasis is placed on demonstrating real understanding, let alone on application and other higher-level skills.
  • “Adding metacognitive components to our assignments and lessons can provide the explicit guidance that students need. However, authoring these components will take many of us into new territory…” This is tough because such assignments require much more support for students, and many faculty members have had little or no practice in designing such support.
  • The basic framework for understanding higher-level metacognition was developed by Perry in the late 1960s, and his core ideas have since been deeply validated, as well as expanded and enriched, by many other workers (e.g., Journal of Adult Development, 2004; Hoare, 2011).
  • “Enhanced capacity to think develops over spans of several years. Small but important changes produced at the scale of single quarter or semester-long courses are normally imperceptible to students and instructors alike.”

It is helpful (e.g. Nelson, 2012, among many) to see most of college-level thinking as spanning four major levels, a truncated condensation of Perry’s 9 stages as summarized in Table 1 of Nuhfer (2014 a). Each level encompasses a different set of metacognitive skills and challenges. Individual students’ thinking is often a mix or mosaic where they approach some topics on one level and others at the next.

In this post I am going to treat only the first major level, Just tell me what I need to know (Stages 1 & 2 of Table 1 in Nuhfer, 2014 a). In this first level, students view knowledge fundamentally as Truth. Such knowledge is eternal (not just some current best model), discovered (not constructed), and objective (not temporally or socially situated). In contrast, some (but certainly not all) faculty members view what they are teaching as a constructed best current model or models and as temporally and socially situated, with the subjectivity that implies.

The major cognitive challenges within this first level are usefully seen as moving toward a more complete mastery of right-answer reasoning processes (Nelson, 2012), sometimes referred to as a move from concrete to formal reasoning (although the extent to which Piaget’s stages actually apply is debated). A substantial majority of entering students at most post-secondary institutions have not yet mastered formal reasoning. However, many (probably most) faculty members tacitly assume that all reasonable students will quickly understand anything that is asked of them in terms of right-answer reasoning. As a consequence, student achievement is often seriously compromised.

Lawson et al. (2007) showed that a simple test of formal reasoning explained about 32% of the variance in final grades in an introductory biology course and was the strongest such predictor among several options. This is quite remarkable considering that the reasoning test had no biological content and provided no measure of student effort. Although some reasoning tasks could be done by most students, an understanding of experimental designs was demonstrated largely by students who scored as having mastered formal reasoning. Similar differences in achievement have been documented for some other courses (Nelson, 2012).

Nuhfer (2014 b) and Scharff (2014) discuss studies of the associations among various measures of student thinking. From my viewpoint, their lists start too high up the thinking scale. I think that we need to start with the transitions between concrete and formal reasoning. I have provided a partial review of key aspects of this progression and of the teaching moves that have been shown to help students master more formal reasoning, as well as sources for key instruments (Nelson, 2012). I think that such mastery will turn out to be especially helpful, and perhaps essential, to more rapid development of higher-level reasoning skills.

This insight may also help to resolve a contrast between the experience of Scharff and her colleagues (Scharff, 2014) and Nuhfer’s perspective (2014 b). Scharff reports: “At my institution we have some evidence that such an approach does make a very measurable difference in aspects of critical thinking as measured by the CAT (Critical Thinking Assessment, a nationally normed, standardized test …).” In his responses, Nuhfer (2014 b) emphasizes that, given how we teach, there is, not surprisingly, very little change in higher-order thinking over the course of an undergraduate degree. (“… the typical high school graduate is at about [Perry] level 3 2/3 and the typical college graduate is a level 4. That is only one-third of a Perry stage gain made across 4-5 years of college.”)

It is my impression that the “Critical Thinking Assessment” discussed by Scharff deals primarily with right-answer reasoning. The mastery of the skills underlying right-answer reasoning questions is largely a matter of mastering formal reasoning processes. Indeed, tests of concrete versus formal reasoning usually consist exclusively of questions that have very clear right answers. I think that several of the other thinking assessments that Nuhfer and Scharff discuss also have exclusively or primarily clear right answers. This approach contrasts markedly with the various instruments for assessing intellectual development in the sense of Perry and related authors, none of which focuses on right-answer questions. An easily accessible instrument is given in the appendices of King and Kitchener (1994).

This leads to three potentially helpful suggestions for fostering metacognition.

  • Use one of the instruments for assessing concrete versus formal reasoning as a background test for all of your metacognitive interventions. This will allow you to ask whether students who perform differently on such an assessment also perform differently on your pre- or post-assessment, or even in the course as a whole (as in Lawson et al. 2007).
  • Include interventions in your courses that are designed to help students succeed with formal, right-answer reasoning tasks. In STEM courses, teaching with a “learning cycle” approach that starts with the examination or even the generation of data is one important, generally applicable such approach.
  • Carefully distinguish between the ways that you are helping students master right-answer reasoning and the ways you are trying to foster more complex forms of reasoning. Fostering right-answer reasoning will include problem-focused reasoning, self-monitoring and generalizing right-answer reasoning processes (e.g. “Would using a matrix help me solve this problem?”).

Helping students move to deeper sophistication requires epistemologically transgressive challenges. Those who wish to pursue such approaches seriously should examine first, perhaps, Nuhfer’s (2014d) “Module 12 – Events a Learner Can Expect to Experience” and ask how one could foster each successive step.

Unfortunately, the first key step to helping students move beyond right-answer thinking requires helping them understand the ways in which black-and-white reasoning fails in one’s discipline. For this first epistemologically transgressive challenge, understanding that knowledge is irredeemably uncertain, one might want to provide enough scaffolding to allow students to make sense of readings such as: Mathematics: The Loss of Certainty (Kline, 1980); Be Forewarned: Your Knowledge is Decaying (Arbesman, 2012); Why Most Published Research Findings Are False (Ioannidis, 2005); and Lies, Damned Lies, and Medical Science (Freedman, 2010).

As an overview for students of the journey in which everything becomes a matter of better and worse ideas and divergent standards for judging better, I have had some success using a heavily scaffolded approach (detailed study guides, including exam-ready essay questions, and much group work) to helping students understand Reality Isn’t What It Used to Be: Theatrical Politics, Ready-to-Wear Religion, Global Myths, Primitive Chic, and Other Wonders of the Postmodern World (Anderson, 1990).

We have used various heavily scaffolded, epistemologically transgressive challenges to produce an average gain of one-third Perry stage over the course of a single semester (Ingram and Nelson, 2009). As Nuhfer (2014b) noted, this is about the gain usually produced by an entire undergraduate degree of normal instruction.

And for the bravest, most heavily motivated faculty, I would suggest In Over Our Heads: The Mental Demands of Modern Life (Kegan, 1994). Kegan attempts to make clear that each of us is limited in our ability to think in more complex ways by epistemological assumptions of which we are unaware. This is definitely not a book for undergraduates, nor is it one that is easily embraced by most faculty members.

REFERENCES CITED

  • Hoare, Carol. Editor (2011). The Oxford Handbook of Reciprocal Adult Development and Learning. 2nd Edition. Oxford University Press.
  • Ingram, Ella L. and Craig E. Nelson (2009). Applications of Intellectual Development Theory to Science and Engineering Education. Pp. 1-30 in Gerald F. Ollington (Ed.), Teachers and Teaching: Strategies, Innovations and Problem Solving. Nova Science Publishers.
  • Ioannidis, John (2005). “Why Most Published Research Findings Are False.” PLoS Medicine August; 2(8): e124. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/ [The most downloaded article in the history of PLoS Medicine. Too technical for many first-year students even with heavy scaffolding?]
  • Journal of Adult Development (2004). [Special volume of nine papers on the Perry legacy of cognitive development.] Journal of Adult Development 11(2):59-161.
  • King, Patricia M. and Karen Strohm Kitchener (1994). Developing Reflective Judgment: Understanding and Promoting Intellectual Growth and Critical Thinking in Adolescents and Adults. Jossey-Bass.
  • Kline, Morris (1980). Mathematics: The Loss of Certainty. Oxford University Press. [I used the summary (the Preface) in a variety of courses.]
  • Nelson, Craig E. (2012). “Why Don’t Undergraduates Really ‘Get’ Evolution? What Can Faculty Do?” Chapter 14 (p 311-347) in Karl S. Rosengren, E. Margaret Evans, Sarah K. Brem, and Gale M. Sinatra (Editors.) Evolution Challenges: Integrating Research and Practice in Teaching and Learning about Evolution. Oxford University Press. [Literature review applies broadly, not just to evolution]

Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments

This sometimes humorous article by Justin Kruger and David Dunning describes a series of four experiments showing “that incompetent individuals have more difficulty recognizing their true level of ability than do more competent individuals and that a lack of metacognitive skills may underlie this deficiency.” It also includes a nice review of the literature and several examples to support their findings.

Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments. Journal of Personality and Social Psychology, 1999, Vol. 77, No. 6, 1121-1134.