Distributed Metacognition: Are Two Heads Better Than One, or Does It Even Exist?

by Aaron S. Richmond

Metropolitan State University of Denver

In many of the wonderful blog posts on Improve with Metacognition, scholars from around the globe have described teaching techniques and strategies for improving our students' metacognition. Many of these techniques require students to openly describe their learning behaviors in the hope that they will become more metacognitively aware: for example, asking students how, where, and when they study, to reflect on their use of strategies, and to consider how they can improve those strategies. Many of these activities take place in a class setting, and students are sometimes even asked to share their strategies with one another and discuss how the strategies work, when they work, and when they don't. We know that students benefit from sharing their beliefs about metacognition (e.g., learning strategies) by improving their own metacognition, but is it possible that they are also raising the overall metacognition of the group or class? That is, is it possible that more than an individual metacognitive process is occurring, that some form of shared, distributed metacognition is operating across the students?

What is Distributed Metacognition?

Little is known about the concept of distributed metacognition (Chiu & Kuo, 2009; Wecker & Fischer, 2007). In fact, a Google Scholar search returns only 38 results for the exact phrase "distributed metacognition," and this limited research offers no clear operational definition. Therefore, I would like to think through the concept of distributed metacognition with you. From what I gather, it is not the spreading of metacognition over time (akin to distributed practice, spacing, or studying). Nor am I referring to what Philip Beaman (2016) described in the context of machine learning and human distraction in his IwM blog. Could it be that distributed metacognition is the ability of two or more individuals to first discuss their personal metacognition (the good, the bad, and the ugly) and then to use these metacognitive strategies in a distributed manner (i.e., the group discusses and uses a strategy as a group)? Furthermore, Chiu and Kuo's (2009) notion of social metacognition may be akin to distributed metacognition. Although they offer no empirical evidence, they suggest that metacognitive tasks can be distributed across group members, who thereby engage in "social metacognition." For instance, in task management, students can simultaneously evaluate and monitor, regulate others to reduce mistakes and distraction, and divide and conquer to focus on subsets of the problem. Finally, in a discussion with John Draeger (an IwM co-creator), he asked whether distributed metacognition was "…something over and above collaborative learning experiences that involve collective learning about metacognition and collectively practicing the skill?" After giving it some thought, my answer is, "I think so." So let me offer an example to illustrate whether distributed metacognition exists and how we might define it.

Using IF-ATs Collaboratively

I have written on the utility of the immediate feedback assessment technique (IF-AT) as a metacognitive assessment tool (Richmond, 2017). I often use the IF-AT in a team-based, collaborative way. Students form groups or dyads to discuss and debate one or two questions on the assessment, then scratch off their answer and see how they did. Whether or not they were correct, I have students discuss, debate, and even argue about why they were so adamant about their answer, both as individuals and as a group. They then discuss, debate, and answer two more questions together, and, as a group, they have to come up with strategies for monitoring their performance, steps for solving the problem, and so on. They repeat this process until the quiz is finished. During this IF-AT process, I find (I know introspection is not the best science) that students become so intrigued by other students' metacognitive processes that they often slightly modify their own metacognitive processes and strategies AND collectively come up with strategies to solve the problems.

So, what is going on here? Are students just listening to other students' metacognitive and epistemological beliefs and choosing either to internalize or to ignore them? Or, when there is a group task at hand, do students share (i.e., distribute) the metacognitive strategies they learned through the group process and then use them collectively? For example, when students divide tasks and assign them to others (i.e., resource demand and monitoring), or regulate others' errors and recognize correct answers (i.e., monitoring) within the group, would these behaviors count as distributed metacognition? Is it possible that in these more collaborative situations, students are not only engaging in their own internal metacognition but also in a collective, distributed metacognition used by the group as a whole? That is, in the IF-AT activity, students may be becoming more metacognitively aware, changing their metacognitive beliefs, and experimenting with different strategies on an individual level, AND they may also have a meta-strategy that exists among the group members (distributed metacognition) that they then use to answer the quiz questions and complete the task more effectively and successfully.

Currently (haha), I am leaning toward the latter. I think students might be engaging in both individual and distributed metacognition, in part because of an article in the Proceedings of the Annual Meeting of the Cognitive Science Society by Christopher Andersen (2003). Andersen found that when students worked in pairs on two science tasks, over time the pairs made more valid inferences (correct judgments and conclusions about the task) than students working alone. Specifically, on the first trial of solving a problem, the dyads used relatively ineffective strategies; on the second trial, they expanded and adapted their use of effective strategies; and by the third trial, they had expanded their use of effective strategies even further. Andersen (2003) concluded that the students were collectively consolidating their metacognitive strategies. That is, when working collaboratively, students employed more effective metacognitive strategies that led to solving the problem correctly. Although this is only one study, it hints that distributed metacognition may exist.

Tentative Conclusions

So, where does this leave us? As almost always, I have more questions than answers. What do you think? As defined here, does distributed metacognition exist? If not, how would you describe what is going on when students share their metacognitive strategies and then employ them in a group setting? Is this simply a product of collaborative or cooperative learning?

If you do believe distributed metacognition exists, how do we measure it? How do we create instructional methods that may increase it? Again, my mind is reeling with questions about this topic, and I would love to hear your thoughts and opinions.

References

Andersen, C. (2003, January). Distributed metacognition during peer collaboration. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 25, No. 25).

Beaman, P. (2016, May 14). Distributed metacognition: Insights from machine learning and human distraction. Retrieved from https://www.improvewithmetacognition.com/distributed-metacognition-insights-machine-learning-human-distraction/

Chiu, M. M., & Kuo, W. S. (2009). From metacognition to social metacognition: Similarities, differences, and learning. Journal of Educational Research, 3(4), 1-19. Retrieved from https://www.researchgate.net/profile/Ming_Chiu/publication/288305672_Social_metacognition_in_groups_Benefits_difficulties_learning_and_teaching/links/58436dba08ae2d217563816b/Social-metacognition-in-groups-Benefits-difficulties-learning-and-teaching.pdf

Richmond, A. S. (2017, February 22). Scratch and win or scratch and lose? Immediate feedback assessment technique. Retrieved from https://www.improvewithmetacognition.com/scratch-win-scratch-lose-immediate-feedback-assessment-technique/

Wecker, C., & Fischer, F. (2007, July). Fading scripts in computer-supported collaborative learning: The role of distributed monitoring. In Proceedings of the 8th international conference on Computer supported collaborative learning (pp. 764-772).


Can Metacognition Instruction be Unethical?

By Aaron S. Richmond, Ph.D., Metropolitan State University of Denver

Many of us college and university teachers incorporate metacognitive activities into our coursework with the lofty goal of improving our students' metacognition. These activities include various assessment techniques (see Richmond, 2017, March; Was, 2014, August), instructional interventions (see Draeger, 2016, November), and course designs (see McCabe, 2018, March). But have we ever questioned whether these Improve with Metacognition (IwM) educational practices are ethical? In other words, when we implement these great activities, assessments, and techniques, are we doing so ethically?

I embarked down this road because I was having a conversation with one of my all-time favorite teachers (Doug Woody from the University of Northern Colorado) about using active learning and metacognitive strategies in classroom instruction. He leaned over, mid-sentence, and said, "You know that sometimes, when done improperly, using those [metacognitive instruction] strategies may cause students to feel disrespected, out of control, cause feelings of distrust, and in some rare occasions cause harm." I just looked back at him with shock, incredulity, and a creeping sense of horror. Incredulity, because I felt I was trying to do the best thing for my students, so how could that be bad? Shock, because I had never thought of it that way. And a creeping sense of horror because just maybe, maybe he could be right.

Ethical Teaching

But first, let me explain the nature of ethical teaching. Eric Landrum and Maureen McCarthy recently published the book Teaching ethically: Challenges and opportunities (2012). In it, they explain that ethical teachers focus on respect and autonomy, nonmaleficence, beneficence, justice, and fidelity. Specifically, as teachers we should allow students the right to make their own decisions (respect and autonomy), above all do no harm (nonmaleficence), promote our students' welfare (beneficence), be fair, unbiased, and equal (justice), and be trustworthy and honest (fidelity). Ethical teachers set out to improve their students' learning by these guiding principles.

So how can IwM practices be potentially unethical? Again, when discussing this with Doug, my initial reaction was, "Of course not! I'm trying to IMPROVE my students' metacognition, which I KNOW will help them not only in my class but throughout their college careers." However, upon reflecting on what it means to be an ethical teacher, it may be that implementing such IwM techniques improperly is in fact unethical.

Let me illustrate. I've touted the use of the Immediate Feedback Assessment Technique (IF-AT) as a metacognitive assessment tool (Richmond, 2017, February). The IF-AT provides instantaneous performance feedback by allowing students to scratch off what they believe to be the correct answer on a multiple-choice exam, quiz, or test. However, if implemented incorrectly, the IF-AT may cause students to feel coerced (the opposite of autonomy) into taking an assessment in a format they do not want, or, more importantly, one in which they perform worse than they would in other formats. Because the IF-AT is so unusual and takes some time to get used to, students may feel undue pressure to use this format when no other options are offered; a tenet of learner-centered pedagogy and ethical teaching is to provide options for students to choose from. Additionally, as I argued in the previous blog, using the IF-AT may in some cases do more harm than good (the opposite of nonmaleficence) if the format causes anxiety and stress. Most assessments cause some anxiety and stress (which at low to moderate levels can be good for learning), but the IF-AT may cause some students to experience exceptionally high levels of both and consequently decrease their performance. Finally, the question becomes whether the IF-AT promotes student welfare (beneficence). Of course, we metacognitive teachers believe that this is why we employ such strategies, but if harm is done, then the practice is certainly not beneficial to students.

There are other examples of metacognitive activities that may be unethical (e.g., requiring students to do activities without forewarning them or giving them the option not to participate). However, I think the silver lining is that it may not be the activity itself, but rather how instructors implement these IwM activities.

How Can I Be an Ethical Teacher AND Improve with Metacognition?

Recently, my colleagues Regan Gurung and Guy Boysen and I (2016) tackled this very issue in our book on model college and university teaching. We suggested several steps that teachers can take to be both ethical and metacognitive/model teachers. First, to provide respect and autonomy, let students opt out of certain activities or give them alternatives (Richmond et al., 2016); for example, give students the option to take either the IF-AT or a traditionally formatted quiz. Second, to increase fidelity, give forewarning about the adverse or negative feelings that may result from participating in an IwM activity. For example, with the IF-AT, let your students know that they may get anxious when they realize they missed the first two questions; or, if a metacognitive activity puts certain students at a disadvantage (e.g., an experiment comparing elaboration with flash cards), let them know that it was intentionally designed that way and is not a reflection of their skills or abilities. Third, to promote nonmaleficence, always discuss the purpose of your IwM activities; for example, discuss why you want to teach various learning or memory strategies and why they should be beneficial. Doing so makes you a more transparent teacher, which leads to what I believe being a metacognitive teacher embodies: promoting beneficence by using effective IwM strategies that are known to work in many contexts (Richmond et al., 2016).

Concluding Thoughts and Questions for You

As illustrated, when we use IwM activities, we may at times be engaging in unethical teaching practices. However, I think there are a few things we can do to avoid this dilemma, and much of it comes down to how IwM activities are implemented. Thus, I would like to conclude with a few questions that I hope you will take the time to answer, starting a conversation on this important but often overlooked issue within IwM:

  1. Do you believe that some IwM practices have the potential to be unethical?
  2. If so, how do you ameliorate this issue?
  3. How do you become both an ethical and a metacognitive teacher?

References

Draeger, J. (2016, November). Promoting metacognitive reading through Just-in-Time Teaching. Retrieved from https://www.improvewithmetacognition.com/promoting-metacognitive-reading-just-time-teaching/

Landrum, R., & McCarthy, M. A. (Eds.) (2012). Teaching ethically: Challenges and opportunities. Washington, DC: American Psychological Association.

McCabe, J. (2018, March). Small metacognition—Part 1. Retrieved from https://www.improvewithmetacognition.com/small-metacognition-part-1/

Richmond, A. S. (2017, March). Joining forces: The potential effects of team-based learning and immediate feedback assessment technique on metacognition. Retrieved from https://www.improvewithmetacognition.com/joining-forces-the-potential-effects-of-team-based-learning-and-immediate-feedback-assessment-technique-on-metacognition/

Richmond, A. S. (2017, February). Scratch and win or scratch and lose? Immediate feedback assessment technique. Retrieved from https://www.improvewithmetacognition.com/scratch-win-scratch-lose-immediate-feedback-assessment-technique/

Richmond, A. S., Gurung, R. A. R., & Boysen, G. (2016).  An evidence-based guide to college and university teaching: Developing the model teacher. New York, NY: Routledge.

Was, C. (2014, August). Testing improves knowledge monitoring. Improve with Metacognition. Retrieved from https://www.improvewithmetacognition.com/testing-improves-knowledge-monitoring/


The First Instinct Fallacy: Metacognition Helps You Decide to Stick With It or Revise Your Answer

By Aaron S. Richmond, Ph.D., Metropolitan State University of Denver

When giving your students guidance on how to take tests, do you tell them always to go with their first answer (go with their gut), always to revise their answers, or that it depends on the question? Because many of you are fans of metacognition, you likely chose the latter ("it depends"), and you would be correct. However, most students and many teachers would choose "go with your gut instinct," a tendency known as the First Instinct Fallacy (Kruger, Wirtz, & Miller, 2005). In this well-known article, Kruger and colleagues found, across four experiments, that when students change their answers they typically change from incorrect to correct, that they underestimate the number of incorrect-to-correct changes, and that they overestimate the number of correct-to-incorrect changes. Ironically, but not surprisingly, because students like to go with their gut, they also tend to be very hesitant to switch their answers and regret doing so even when the change yields the correct answer. However, what Kruger and colleagues did not investigate was the role that metacognition may play in the First Instinct Fallacy.

The [First] Instinct Fallacy: The Metacognition of Answering and Revising During College Exams

In two recent studies, Couchman et al. (2016) investigated the mediating effect that metacognition may have on the First Instinct Fallacy. In both studies, students completed a normal multiple-choice exam, indicated their confidence in each answer (whether they knew it or guessed), and indicated whether they changed their initial answer. Consistent with Kruger et al.'s (2005) results, Couchman and colleagues found that students more often changed their initial response from incorrect to correct than the reverse. What was interesting is that when students thought they knew the answer and did not change it, they were significantly more likely to be correct (indicating higher metacognition), whereas when they guessed and did not change their answer, they were significantly more likely to be incorrect (indicating low metacognition). Moreover, when students revised answers they had guessed on, they chose the correct answer significantly more often than when they left those answers alone. In other words, students did better when they guessed and changed their answer than when they thought they knew the answer and changed it. These results suggest that students were using metacognitive monitoring to deliberately choose, on a question-by-question basis, when to revise their answers and when to stick with their gut.
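To make the design concrete, here is a minimal sketch of the kind of per-question record such a study collects and how accuracy falls out in each confidence-by-revision cell. The data and field names are invented for illustration; this is not Couchman et al.'s materials or analysis.

    from collections import defaultdict

    # Hypothetical per-question records: "confidence" is the student's own
    # judgment (knew vs. guessed), "changed" marks whether the initial answer
    # was revised, and "correct" is the final outcome. Values are invented.
    records = [
        {"confidence": "knew",    "changed": False, "correct": True},
        {"confidence": "knew",    "changed": True,  "correct": False},
        {"confidence": "guessed", "changed": True,  "correct": True},
        {"confidence": "guessed", "changed": False, "correct": False},
        # ... one record per exam question
    ]

    # Tally accuracy within each confidence-by-revision cell.
    cells = defaultdict(lambda: [0, 0])  # (confidence, action) -> [correct, total]
    for r in records:
        key = (r["confidence"], "changed" if r["changed"] else "kept")
        cells[key][0] += int(r["correct"])
        cells[key][1] += 1

    for (conf_level, action), (n_correct, n_total) in sorted(cells.items()):
        print(f"{conf_level:8s} {action:8s} accuracy = {n_correct}/{n_total}")

With real exam data in place of the invented records, the four cells (knew/kept, knew/changed, guessed/kept, guessed/changed) are exactly the comparisons the studies report.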

Moral of the Story: Real-Time Metacognitive Monitoring Is Key to Avoiding the First-Instinct Fallacy

As Couchman and colleagues' results demonstrate, when students metacognitively monitor their knowledge and performance on a question-by-question basis, they perform better. Metcalfe (2002) called this adaptive control: focusing on processes you can control in order to improve performance. Similarly, Koriat et al. (2004) suggest that in-the-moment, item-by-item assessment of performance may be more productive and effective than reflecting on performance in general.

So, you were correct to tell your students "it depends," but as a practitioner, what do you do to help students build the metacognitive skills of adaptive control and monitoring? Couchman and colleagues suggested that teachers instruct students to simply indicate a judgment of confidence for each question on the test (either a categorical judgment such as low vs. medium vs. high, or a 0-100 confidence scale). Then, if students' confidence is low, encourage them to revise their answer; if their confidence is high, they should consider leaving it alone. Interestingly, this must be done in real time, because if students make these confidence judgments after the assessment (i.e., at a later time), they tend to be overconfident and inaccurate. Thus, the answer to the First Instinct Fallacy is, like most things, complicated. But don't just respond with a simple "it depends," even though that advice is correct. Go a step further: explain and demonstrate how to improve adaptive control and cognitive monitoring.
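As a minimal sketch of that decision rule on the 0-100 scale (the 50-point threshold is my own placeholder, not a cutoff recommended by Couchman and colleagues):

    def revision_advice(confidence: int, threshold: int = 50) -> str:
        """Advise on an answer given a real-time 0-100 confidence judgment.

        The 50-point cutoff is an arbitrary placeholder; an instructor would
        tune it to their own class and assessment.
        """
        if not 0 <= confidence <= 100:
            raise ValueError("confidence must be on a 0-100 scale")
        return "consider revising" if confidence < threshold else "consider keeping"

    # A student rates one question at 20/100 confidence and another at 85/100.
    print(revision_advice(20))  # -> consider revising
    print(revision_advice(85))  # -> consider keeping

The important part, per the research above, is that the confidence judgment is made while answering, not reconstructed after the exam.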

References

Couchman, J. J., Miller, N. E., Zmuda, S. J., Feather, K., & Schwartzmeyer, T. (2016). The instinct fallacy: The metacognition of answering and revising during college exams. Metacognition and Learning, 11(2), 171-185. doi:10.1007/s11409-015-9140-8

Koriat, A., Bjork, R. A., Sheffer, L., & Bar, S. K. (2004). Predicting one's own forgetting: The role of experience-based and theory-based processes. Journal of Experimental Psychology: General, 133, 643–656.

Kruger, J., Wirtz, D., & Miller, D. T. (2005). Counterfactual thinking and the first instinct fallacy. Journal of Personality and Social Psychology, 88(5), 725–735.

Metcalfe, J. (2002). Is study time allocated selectively to a region of proximal learning? Journal of Experimental Psychology: General, 131, 349–363.


The Great, The Good, The Not-So-Good of Improve with Metacognition: An Exercise in Self-Reflection

By Aaron S. Richmond, Ph.D., Metropolitan State University of Denver

Recently, Lauren, John, and I reflected on and discussed our experiences with Improve with Metacognition (IwM). We laughed (no crying) and found (at least I did) that our experiences were rich and rewarding. As such, we decided that each of us would write a blog on our experience and self-reflection with IwM. So, I'm up. Thinking about IwM, the theme that kept surfacing in my mind is that we are Great, Good, and, on a few things, Not-So-Good.

The Great

Oh, how can I count the ways in which IwM is great? Well, by counting. Reflecting on what we have accomplished, it became apparent that IwM has been highly productive in its short existence. Specifically, we have published over 200 blog posts, along with resources on metacognition measures, videos, instruction, curated research articles, and teaching metacognition (see our new call for Teaching with Metacognition). We have created a space for collaborators to gather and connect. We have engaged in our own research projects. We have had over 35 contributors from across North America and a few from beyond, ranging from preeminent scholars in metacognition and SoTL to graduate students writing their first blog. Speaking for Lauren and John, I can only hope that this explosion of productivity and high-quality research and writing continues at IwM.

The Good

Ok, it is not just good; this is just another thing that is great. IwM has produced some amazing blogs. I can't review them all (this time I will keep to my word count), but I would like to highlight a few insightful posts that resonated with me. First, Ed Nuhfer recently wrote a blog titled Collateral Metacognitive Damage (2017, February). The title is amazing in itself, but Ed extolls the use of self-assessments, explains why one's approach to and perspective on self-assessment matters most (be the little engine that could, not the little engine that couldn't), and provides a marvelous self-assessment tool (http://tinyurl.com/metacogselfassess). I have already shared it with my students and colleagues. Second, on a topic I would never have thought of, Stephen Chew wrote about metacognition and scaffolding learning (2015, July). I have used scaffolding (and still do) throughout all of my courses; however, I never considered that by over-scaffolding I could reduce my students' ability to calibrate (to know when they know or don't know something). That is, providing too much scaffolding may cause students to become highly overconfident and overestimate their knowledge and skill. Third, Chris Was wrote about a mindfulness perspective on metacognition (2015, October). I have been begrudgingly, maybe curmudgeonly, resistant to mindfulness, so I was skeptical even though I know how great Chris' research is. Well, Chris convinced me of the value of mindfulness and its connection to metacognition. He said it best: "It seems to me that if metacognition is knowledge and control of one's cognitive processes and training in mindfulness increases one's ability to focus and control awareness in a moment-by-moment manner, then perhaps we should reconsider, and investigate the relationship between mindfulness and metacognition in education and learning." There are literally dozens and dozens of other blogs that I have incorporated into both my teaching and research. The work done at IwM is not merely good, it is great!

The Not-So-Good

IwM has been a labor of love. Speaking for myself, the work has been amazing, exhausting, invigorating, productive, and fulfilling. What I believe we have been not so good at, however, is getting the word out. Considering that there are over 200 blog posts, resources, curated research articles, collaborations, and more, I believe one of the things we are struggling with is spreading the gospel of metacognition. Despite the fact that Lauren, John, and I have travelled across the globe (literally) promoting IwM at various conferences, too few people know about the good work being done. And notwithstanding our 258 email subscribers, I feel (passionately) that we can do better. I want other researchers and practitioners not only to benefit from the work we've done but also to contribute new IwM blogs, resources, research, and collaborations.

As I do in all my blogs, I will leave you with an open-ended question: What can we do to spread the word about the great and good work here at IwM?

Please give me/us some strategies or go out and help spread the word for us.

References

Chew, S. (2015, July). Metacognition and scaffolding student learning. Retrieved from https://www.improvewithmetacognition.com/metacognition-and-scaffolding-student-learning/

Nuhfer, E. (2017, February). Collateral metacognitive damage. Retrieved from https://www.improvewithmetacognition.com/collateral-metacognitive-damage/

Was, C. (2015, October). A mindfulness perspective on metacognition. Retrieved from https://www.improvewithmetacognition.com/a-mindfulness-perspective-on-metacognition/


Joining Forces: The Potential Effects of Team-Based Learning and Immediate Feedback Assessment Technique on Metacognition

by Aaron S. Richmond, Ph.D., Metropolitan State University of Denver

As a standalone assessment tool, the Immediate Feedback Assessment Technique (IF-AT) has been shown to affect student learning and students' perceptions of the teacher (e.g., Brosvic et al., 2006; Slepkov & Shiell, 2014) and possibly to improve metacognition (see Richmond, 2017). However, can the IF-AT be combined with a cooperative learning activity such as Team-Based Learning (TBL) to enhance metacognition as well?

To partially answer this question: several researchers suggest that the IF-AT may be used effectively with TBL (Carmichael, 2009; Hefley & Tyre, 2012; Ives, 2011). For example, you could first form teams, give them an exam, have them discuss and debate the answers, and then have each team decide on its answer. If students within a team cannot reach consensus on a question, you may allow them to write an "appeal" to turn in a separate answer. Click on Figure 1 for a video on how to combine the IF-AT with TBL. The IF-AT may also be used in dyads to let students discuss correct and incorrect answers: students read a question, discuss the options, and then cooperatively make a decision, with the IF-AT providing immediate feedback. A third approach, suggested by Ives (2011), is the two-stage group quiz: individual students write weekly quiz questions (first stage), then form teams and take quizzes composed of students' written questions (second stage). The question then becomes: can the combination of TBL and IF-AT instructional strategies improve metacognition?

Figure 1. Team-Based Learning Using IF-AT. 

The Interplay Among IF-AT, TBL, and Metacognition
As I argued previously (Richmond, 2017), the IF-AT may improve students' metacognition; by adding TBL, what metacognitive processes and skills might improve further? I see several metacognitive benefits that may occur when combining these two instructional strategies.

First, the combination of IF-AT and TBL may increase students' metacognitive awareness. For instance, test anxiety may be reduced in a group setting when using the IF-AT (Ives, 2011) because students have the opportunity to debate the answers, hear from others, reach consensus, and share responsibility. Because awareness of, and conscious effort to reduce, test anxiety is part of metacognitive awareness, the combination of TBL and IF-AT may make this process more salient.

Second, using TBL with the IF-AT may also improve students' calibration (i.e., the accuracy of knowing when you do or do not know something). In a cooperative learning activity such as TBL, students are either reinforced in their correct knowledge through the process of debating and discussing answers OR confronted with their incorrect knowledge by interacting with team members. Consequently, their assessment (calibration) of their knowledge should become more accurate. For example, if one team member accurately identifies a correct answer and another member (who started with the incorrect answer) observes this, the latter may reflect on their answer, determine why and how they got it wrong, and change both their future study strategies and their subsequent estimations of their knowledge. Or a team member might consistently arrive at the correct answer yet always underestimate his or her knowledge; that student may gain confidence and become more accurately calibrated.

Third, combining TBL and the IF-AT may also increase metacognitive, cognitive, and learning-strategy skills. As team members share how, where, what, and why they studied, other members may incorporate these strategies into their quiver of learning strategies (especially if the member who suggested a strategy was correct). For example, one team member may explain the elaborative strategy they used effectively to study, and other members then add elaboration to their repertoire. Or a team member may consistently get questions wrong and share the strategy he or she uses (e.g., cramming and rehearsal); other members observe this, realize that the strategy does not appear to work very well, and subsequently avoid it themselves (we can only wish!).

Based on the above examples, it does seem likely that the combined use of TBL and IF-AT may improve various metacognitive skills.
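Of these benefits, calibration lends itself most directly to measurement. Here is a minimal sketch using one common pair of calibration indices (signed bias and absolute accuracy); the response data are invented for illustration, not drawn from the TBL/IF-AT literature.

    # Each pair is (confidence on a 0.0-1.0 scale, whether the answer was
    # correct). The numbers are invented for illustration.
    responses = [(0.9, True), (0.8, True), (0.7, False), (0.4, False), (0.6, True)]

    n = len(responses)
    # Bias: positive means overconfident, negative means underconfident.
    bias = sum(conf - int(correct) for conf, correct in responses) / n
    # Absolute accuracy: 0 is perfect calibration, 1 is maximally miscalibrated.
    absolute = sum(abs(conf - int(correct)) for conf, correct in responses) / n

    print(f"calibration bias = {bias:+.2f}, absolute accuracy = {absolute:.2f}")

If the TBL/IF-AT combination works as hypothesized, both numbers should shrink toward zero across successive quizzes.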

Concluding Thoughts and The Hallmark of Good Assessments—Evidence
As a SoTL scholar, I would be remiss not to examine the evidence supporting or refuting the efficacy of IF-AT and TBL. A handful of studies demonstrate the advantage of using TBL and the IF-AT to increase academic performance and enjoyment of class (e.g., Carmichael, 2009; Haberyan, 2007). The combination of IF-AT and TBL has also been shown to stimulate small-group discussion and to identify and correct content misconceptions (Cotner, Baepler, & Kellerman, 2008). However, there appears to be a gap in the research. Specifically, several research questions arise:

  1. Does the combination of IF-AT and TBL increase metacognitive awareness?
  2. Does the combination of IF-AT and TBL increase the accuracy of a student’s calibration?
  3. Does the combination of IF-AT and TBL increase a student’s repertoire of cognitive and learning strategies?
  4. What other metacognitive processes may be enhanced by using IF-AT in a TBL setting?

As I mentioned in my first blog on the IF-AT (Richmond, 2017) and here, I think there are enormous SoTL research opportunities in investigating the effects of IF-AT and TBL on metacognition. This, invariably, leads to the proverbial phrase: a little knowledge is a dangerous thing, so get to work!

Please follow me on Twitter: @AaronSRichmond

References
Carmichael, J. (2009). Team-based learning enhances performance in introductory biology. Journal of College Science Teaching, 38(4), 54–61.

Clark, M. C., Nguyen, H. T., Bray, C., & Levine, R. E. (2008). Team-based learning in an undergraduate nursing course. Journal of Nursing Education, 47, 111–117.

Cotner, S., Baepler, P., & Kellerman, A. (2008). Scratch this! The IF-AT as a technique for stimulating group discussion and exposing misconceptions. Journal of College Science Teaching, 37(4), 48.

Haberyan, A. (2007). Team-based learning in an industrial/organizational psychology course. North American Journal of Psychology, 9, 143–152.

Hefley, T., & Tyre, A. J. (2012). Favorable team scores under the team-based learning paradigm: A statistical artifact? RURALS: Review of Undergraduate Research in Agricultural and Life Sciences, 6(1), 1. Retrieved from http://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1043&context=rurals

Ives, J. (2011). Two-stage group quizzes part 1: What, how and why. Science Learnification: Teaching and learning in the sciences with a focus on physics education research (PER) from the trenches.  Retrieved from https://learnification.wordpress.com/2011/03/23/two-stage-group-quizzes-part-1-what-how-and-why/

Richmond, A. S. (2017, February 24). Scratch and win or scratch and lose? Immediate feedback assessment technique. Retrieved from https://www.improvewithmetacognition.com/scratch-win-scratch-lose-immediate-feedback-assessment-technique/

Slepkov, A. D., & Shiell, R. C. (2014). Comparison of integrated testlet and constructed-response question formats. Physical Review Special Topics-Physics Education Research, 10(2), 020120.


Scratch and Win or Scratch and Lose? Immediate Feedback Assessment Technique

By Aaron S. Richmond, Ph.D., Metropolitan State University of Denver

When prepping my courses for this spring semester, I was thinking about how I often struggle to provide quick, easy feedback on quiz and exam performance. I mentioned this to my colleague, Dr. Anna Ropp (@AnnaRopp), and she quickly suggested that I check out the Immediate Feedback Assessment Technique (IF-AT) from Epstein Educational Enterprises. When she showed me the IF-ATs, I was intrigued and thought I might as well give them a try, so I ordered some. The IF-AT provides instantaneous performance feedback by allowing students to scratch off what they believe to be the correct answer on a multiple-choice exam, quiz, or test. See Figures 1a and 1b for examples of completed IF-ATs. Students find out whether their chosen answer is correct simply by scratching it (see question 1 in Figure 1a), and they can scratch more than one answer until they find the correct one (see question 2 in Figure 1a). You may also use the IF-AT to provide partial credit for sequenced attempts (e.g., scratch one choice for full credit if correct, then scratch a second choice, and maybe a third, for decreasing amounts of partial credit); see question 6 in Figure 1b for an example. Epstein and colleagues suggest that the IF-AT not only assesses student learning but also teaches at the same time. However, it occurred to me that this is not only an assessment and teaching tool; it is also a great opportunity to increase metacognition.

Figure 1. (a) A completed but unscored 10-question IF-AT. (b) A completed 10-question IF-AT, scored by student and teacher.
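To make the partial-credit idea concrete, here is a minimal sketch of a decreasing-credit schedule. The particular weights (full, half, quarter) are my own invention, not a schedule prescribed by Epstein Educational Enterprises.

    def ifat_score(scratches_used: int, credit=(1.0, 0.5, 0.25)) -> float:
        """Credit for a question answered correctly on the nth scratch.

        scratches_used counts scratches up to and including the correct one.
        The full/half/quarter schedule is an invented example; instructors
        choose their own weights.
        """
        if scratches_used < 1:
            raise ValueError("at least one scratch is required")
        return credit[scratches_used - 1] if scratches_used <= len(credit) else 0.0

    # A student finds the answer on the second scratch of question 6:
    print(ifat_score(2))  # -> 0.5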

How to Use IF-AT
Epstein and colleagues suggest that the IF-AT is fair, fast, active, fun, and respectful, and that it builds knowledge. The IF-AT scratch assessments come in 10-, 25-, or 50-item test options with five different orders of correct answers. The Epsteins suggest that the IF-AT can be used in many ways: for chapter tests; individual study (at home or in class); quizzes; pyramidal, sequential-process quizzing; exams; team-based and cooperative learning; study-buddy learning; and, most importantly, as a feedback mechanism (see http://www.epsteineducation.com/home/about/uses.aspx for further explanation).

There are several metacognitive functions of the IF-AT (although the Epsteins do not couch their claims in this term). First, the Epsteins argue that you can arrange your IF-AT so that the first question (and the immediate feedback of the correct answer) can be used in a pyramidal, sequential process: the correct answer to the first question is foundational knowledge needed to answer subsequent questions. This sequential process allows the instructor and student to pinpoint where the student's knowledge of the integrated content broke down. This is implicit modeling of a student's metacognitive knowledge that should be made explicit; that is, by explaining to your students how the exam is set up, students can use cues and knowledge from previous questions and answers to aid their understanding of subsequent ones. This is a key step in the learning process. Second, the IF-AT may be used in a team-based way (i.e., distributed cognition) by forming groups, solving problems, and having the team discover the correct answer. It may also be used in dyads: students read a question, discuss the correct and incorrect answers, then cooperatively make a decision and receive immediate feedback. Third, the IF-AT may be used to increase cognitive and metacognitive strategies. By receiving feedback immediately, students (if you explicitly instruct them to do so) may adjust their cognitive and metacognitive strategies for future study. For example, if a student used flashcards to study and did poorly, they may want to adjust how they construct and use flashcards (e.g., with distributed practice). Finally, and most importantly, the IF-AT may improve students' metacognitive regulation via calibration (i.e., the accuracy of knowing when you do and don't know the answer to a question): immediate feedback may make students' judgments of knowing, and even their feelings of knowing, more accurate.

Is it Scratch to Win or Scratch to Lose?
As described, the IF-AT gives students immediate feedback on whether they answered correctly and what the correct answer is. From a metacognitive perspective, this is outstanding. Students can calibrate (i.e., adjust their estimations of and confidence in knowing an answer) in real time, engage in distributed cognition, get feedback on their choice of cognitive and metacognitive strategies, and increase their cognitive monitoring and regulatory control. These are all WIN, WIN, WIN byproducts. HOWEVER, is there a downside to instantaneously knowing you are wrong? That is, is there an emotional reaction to the IF-AT that students must regulate? As I have been implementing the IF-AT, I have noticed (anecdotally) that about 1 in 10 students react negatively, and it seems to increase their test anxiety. Presumably the other 90% of students love it and appreciate the feedback. Yet what about the 10%? Does the IF-AT stunt or hinder their performance? Again, my esteemed colleague Dr. Anna Ropp and I engaged in some scholarly discourse on this question, and Anna suggested that I make the first 3-5 questions on each IF-AT "soft-ball" questions, that is, questions that 75% of students will answer correctly, so that students' fears and anxiety are alleviated to some degree. Another alternative is to provide students with a copy of the test or exam and let them rank-order or weight their answers (see Chris Was' IwM blog, 2014, on how to do this). Despite these two sound suggestions, there may still be an affective reaction that is detrimental to student learning. To date, no research has investigated this issue, and there are only a handful of well-designed studies of the IF-AT at all (e.g., Brosvic et al., 2006; Dihoff et al., 2005; Epstein et al., 2002, 2003; Slepkov & Shiell, 2014). As such, more well-constructed and well-executed empirical research is needed (hint: all you scholars looking for a SoTL project…here's your sign).
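One way to operationalize Anna's soft-ball suggestion is with item statistics from previous semesters. A minimal sketch, with an invented item bank (the question identifiers and pass rates are hypothetical):

    # Invented item bank: question id -> proportion of past students who
    # answered correctly (identifiers and pass rates are hypothetical).
    item_difficulty = {"q1": 0.82, "q2": 0.55, "q3": 0.91, "q4": 0.77, "q5": 0.48}

    # "Soft-ball" openers: items with a historical pass rate of .75 or higher.
    softballs = [q for q, p in item_difficulty.items() if p >= 0.75]
    print(softballs)  # -> ['q1', 'q3', 'q4']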

Concluding Thoughts and Questions for You
After investigating, reflecting on, and using the IF-AT in my classroom, I think it is a valuable addition to your quiver of assessments for increasing metacognition, though of course not an educational panacea. Furthermore, in investigating this assessment technique, more questions (as usual) popped up about the use of the IF-AT. So I will leave you with a charge and a call to help me answer the questions below:

  1. Are there similar assessments that provide immediate feedback that you use? If so, are they less expensive or free?
  2. If you are using IF-AT, what is your favorite way to use it?
  3. Do you think IF-AT could cause substantial test anxiety? If so, to whom and to what level within your classes?
  4. How could the IF-AT be used more efficiently as a tool for calibration? Or, in what other ways do you think the IF-AT can be used to increase metacognition?
  5. I think there are enormous opportunities for SoTL on the IF-AT (e.g., its effects on calibration, distributed cognition, cognitive monitoring, conditional knowledge of strategy use, etc.), which means we all have some more work to do!

References
Brosvic, G. M., Epstein, M. L., Dihoff, R. E., & Cook, M. J. (2006). Acquisition and retention of Esperanto: The case for error correction and immediate feedback. The Psychological Record, 56(2), 205.

Dihoff, R. E., Brosvic, G. M., Epstein, M. L., & Cook, M. J. (2005). Adjunctive role for immediate feedback in the acquisition and retention of mathematical fact series by elementary school students classified with mild mental retardation. The Psychological Record, 55(1), 39.

Epstein, M. L., Brosvic, G. M., Costner, K. L., Dihoff, R. E., & Lazarus, A. D. (2003). Effectiveness of feedback during the testing of preschool children, elementary school children, and adolescents with developmental delays. The Psychological Record, 53(2), 177.

Epstein, M. L., Lazarus, A. D., Calvano, T. B., & Matthews, K. A. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. The Psychological Record, 52(2), 187.

Slepkov, A. D., & Shiell, R. C. (2014). Comparison of integrated testlet and constructed-response question formats. Physical Review Special Topics-Physics Education Research, 10(2), 020120.

Was, C. (2014, August). Testing improves knowledge monitoring. Improve with Metacognition. Retrieved from https://www.improvewithmetacognition.com/testing-improves-knowledge-monitoring/


Can Reciprocal Peer Tutoring Increase Metacognition in Your Students?

Aaron S. Richmond, Ph.D.

How many of you use collaborative learning in your classroom? If you do, do you use it specifically to increase your students' metacognition? If the answer is yes, you are likely building on the work of Hadwin, Järvelä, and Miller (2011) and Schraw, Crippen, and Hartley (2006). For those unfamiliar with collaborative learning, I tend to agree with Slavich and Zimbardo's (2012) definition: in collaborative learning, students "…tackle problems and questions with peers—especially more knowledgeable peers—insofar as such experiences provide students with opportunities to learn new problem-solving strategies and to debate ideas in a way that challenges their understanding of concepts" (p. 572). There are many ways to use collaborative learning in the classroom: the jigsaw classroom, paired annotations, send-a-problem, think-pair-share, the three-step interview, peer tutoring, numbered heads, and so on. Of particular interest, recent research on collaborative learning suggests that reciprocal peer tutoring may be especially useful when your goal is not only for students to learn course material but also to increase their metacognition (De Backer, Van Keer, Moerkerke, & Valcke, 2016).

In their innovative study, De Backer and colleagues (2016) investigated the effects of reciprocal peer tutoring (RPT) on supporting and increasing metacognitive regulation in higher education. They defined RPT as "the structured exchange of the tutor role among peers in groups/pairs…and enables each student to experience the specific benefits derived from providing and receiving academic guidance" (p. 191). Over the course of the semester, students completed eight peer tutoring sessions. All students were trained as tutors and each tutored their peers at least twice. Tutoring sessions were 120 minutes long and occurred outside of class. The tutor's role was to manage the tutees and promote collaborative learning. During each tutoring session, the tutees were asked to solve a problem related to the class content. Each problem had three specific components:

(1) An outline of learning objectives to guide peers’ discussion to central course-related topics; (2) a subtask aimed at getting familiar with the theme-specific terminology; and (3) a subtask in which students were instructed to apply theoretical notions to realistic instructional cases. (De Backer et al., 2016, p. 193)

The problems presented often did not have clear-cut answers and required considerable cognitive effort. De Backer et al. video-recorded all tutoring sessions and scored each session for the amount and type of metacognitive regulation performed by both tutors and tutees, for example, students' ability to orient, plan, monitor, and evaluate. They also measured the level of processing (whether metacognitive strategies were processed shallowly or deeply). Appendix D of De Backer et al.'s article provides examples of how to code such data; see Table 1 for an example of the scoring (De Backer et al., 2016, p. 41). They then scored the frequency of metacognitive regulation that occurred per session (a minimal sketch of this kind of tally appears after Table 1).

Table 1. Examples of Lower- and Deep-Level Metacognitive Regulation in Reciprocal Peer Tutoring (De Backer et al., 2016, pp. 41-42)

Metacognition–Monitoring

Comprehension monitoring: noting lack of comprehension
T: "Does everyone understand the outlines of instructional behaviorism?"
t1: "I still don't understand the concept of aptitude."

Checking comprehension by repeating (LL)
T: "Does everyone agree now that instructional behaviorism and instructional constructivism are opposites?"
t1: "I think (…) because in behaviorism the instructor decides on everything but constructivism is about learners being free to construct their own knowledge."
t2: "Yes, constructivist learners are much more independent and active, not so?"

Checking comprehension by elaborating (DL)
T: "The behavioristic instructor permanently provides feedback. Who knows why?"
t1: "Is it not to make sure that learners don't make mistakes?"
t2: "Could that also be the reason why they structure the learning materials extensively? And why they don't like collaborative learning? Because collaborative learning requires spontaneous discussions between students. You cannot really structure it in advance, not so?"

Note. DL = deep level, LL = low or shallow level, T = tutor, t1 and t2 = tutees.
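To give a concrete feel for the frequency scoring, here is a minimal sketch (my own illustration of the general idea, not De Backer et al.'s actual coding scheme or tooling) that tallies invented coded events per session:

    from collections import Counter

    # Invented coded events from two videotaped sessions; the labels follow
    # the regulation categories named above (orienting, planning, monitoring,
    # evaluating).
    sessions = {
        "session_1": ["monitoring", "monitoring", "planning", "evaluating"],
        "session_2": ["monitoring", "orienting", "monitoring", "evaluating",
                      "monitoring", "evaluating"],
    }

    for name, events in sessions.items():
        print(name, dict(Counter(events)))

In the actual study, trained coders produced event labels like these from the videotapes, and the per-session counts are what allowed the authors to track change across the semester.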

De Backer and colleagues (2016) found that as the semester progressed, students engaged in more and more metacognitive regulatory processes: their orienting, monitoring, and evaluating all increased (in general, frequencies were three times greater at the end of the semester than at the beginning). Planning, however, stayed stagnant; the frequency of planning remained low throughout the semester. Far more interesting, over the course of the semester students decreased their use of shallow, low-level metacognitive strategies and increased their use of deep-level metacognitive strategies. These increases in metacognitive regulation occurred across most types of metacognitive strategies (e.g., orientation, activating prior knowledge, task analysis, monitoring, and evaluation).

As demonstrated by De Backer and colleagues' study and the work of other researchers (e.g., King, 1997; De Backer, Van Keer, & Valcke, 2012), RPT and other collaborative learning instructional methods may be useful for increasing students' metacognitive processes.

Concluding Thoughts and Questions for You

After reading De Backer et al. (2016), I was fascinated by the possible use of RPT in my own classroom. So, I started to think about how to implement it myself. Some questions arose that I thought you might help me with:

  1. How do I scaffold the use of RPT in my classroom? More to the point, what does a successful RPT session look like? Fortunately, De Backer and colleagues provide an appendix to their study (Appendix C) with an example of what a tutoring session may look like.
  2. How many tutoring sessions are enough to increase my students' metacognition? De Backer et al. used 8 sessions, which would be difficult for me to squeeze into my course planning. Would 3-4 be enough? What do you think? But then not all students could be tutors. Do students get more out of being a tutor (metacognitively) than a tutee? This is something De Backer and colleagues did not analyze. (Hint, hint, all you folks: a SoTL project in the making!)
  3. De Backer et al. briefly described a 10-page manual the tutors received on how to tutor. Hmm…I don't know whether my students would learn effectively from this. What simpler ways might we use to teach students to be effective tutors in the context of RPT?
  4. Finally, are you doing anything like De Backer et al.? If so, do you think it is improving your students' metacognitive regulation?

References

De Backer, L., Van Keer, H., Moerkerke, B., & Valcke, M. (2016). Examining evolutions in the adoption of metacognitive regulation in reciprocal peer tutoring groups. Metacognition and Learning, 11, 187-213. doi:10.1007/s11409-015-9141-7

De Backer, L., Van Keer, H., & Valcke, M. (2012). Exploring the potential impact of reciprocal peer tutoring on higher education students’ metacognitive knowledge and metacognitive regulation. Instructional Science, 40, 559–588.

Hadwin, A. F., Järvelä, S., & Miller, M. (2011). Self-regulated, co-regulated, and socially shared regulation of learning. In B. J. Zimmerman & D. H. Schunk (Eds.), Handbook of self-regulation of learning and performance (pp. 65–84). New York: Routledge.

King, A. (1997). Ask to think-tell why©: A model of transactive peer tutoring for scaffolding higher level complex learning. Educational Psychologist, 32, 221–235.

Schraw, G., Crippen, K. J., & Hartley, K. (2006). Promoting self-regulation in science education: metacognition as part of a broader perspective on learning. Research in Science Education, 36, 111–139.

Slavich, G. M., & Zimbardo, P. G. (2012). Transformational teaching: Theoretical underpinnings, basic principles, and core methods. Educational Psychology Review, 24, 569-608. doi:10.1007/s10648-012-9199-6


When & Where to Teach Metacognitive Skills to College Students

Aaron S. Richmond, Ph.D.
Metropolitan State University of Denver

In past blogs, I've written about the relationship between academic procrastination and metacognition (Richmond, 2016), different instructional methods to increase your students' metacognition (Richmond, 2015a, 2015b), and even how to use metacognitive theory to improve teaching practices (Richmond, 2014). However, over my morning coffee the other day, I was reading a 2016 article in Metacognition and Learning by Foster, Was, Dunlosky, and Isaacson (yes, I am a geek like that). Studying the importance of repeated assessment and feedback, Foster and colleagues found that sophomore- and junior-level educational psychology students who were tested 13 separate times over the course of a semester and given feedback each time remained highly overconfident in their knowledge of the material. As many other researchers have concluded, severe overconfidence erodes accurate self-regulation and self-monitoring, which can severely harm student learning. After finishing my coffee, I thought about the potential long-term and pervasive impact of these students' lack of metacognition, and it dawned on me that at IwM we have not discussed when and where metacognitive skills should be taught in the college curriculum. Thus, I chose to focus this blog on suggestions and strategies for when and where to teach metacognitive skills in the college classroom.
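Foster et al.'s headline finding is, in essence, a positive gap between predicted and actual exam scores that persists across repeated tests. A minimal sketch with invented numbers (these are not Foster et al.'s data):

    # Invented (predicted %, actual %) pairs for one student across repeated
    # exams.
    exams = [(90, 70), (88, 72), (85, 71), (86, 74)]

    for i, (predicted, actual) in enumerate(exams, start=1):
        bias = predicted - actual  # positive = overconfident
        print(f"exam {i}: overconfidence = {bias:+d} points")
    # A bias that stays large and positive across exams means feedback alone
    # is not correcting the student's self-monitoring.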

When Should We Teach Metacognitive Skills?
First and foremost, as college and university teachers we need to acknowledge that our students do not come to us from a vacuum; they already have many developed, albeit sometimes erroneous and ineffective, metacognitive skills. Given this, we need to adapt our metacognitive instruction to individual students. Now, to the question: when should we teach metacognitive skills? The answer, of course, is ASAP! Because one of the goals of metacognitive skill instruction is transfer across academic domains, introducing it during the first semester of college is imperative.

One of the most notable early interventions for metacognitive skills comes from Ken Kiewra at the University of Nebraska. Kiewra created an "Academic Success" class, taught at the sophomore level, using his Selection, Organization, Association, and Regulation (SOAR) model (Jairam & Kiewra, 2009). Jairam and Kiewra found modest effects on student learning (e.g., recalling facts and associating relevant information among zoology terms) via these metacognitive skills. However, there are a few ways this approach to teaching metacognitive skills could be improved. First, not all students were required to take the class (only education students), yet all other academic disciplines could benefit from it (see more on this below). Second, most of the students who took the course were at the sophomore and junior level; it should instead be a first-semester course for all students, rather than coming midway through the college career.

The final note on when we should teach metacognitive skills almost negates the initial question. The "when" is immediately, but immediately doesn't mean once. Rather, metacognitive skills should be taught continuously throughout the college career, with increasingly advanced and effective memory and learning strategies. Just as a student takes an introductory course in a major, why not offer beginner, intermediate, and advanced metacognitive skills courses?

Where Should Metacognitive Skills Be Taught?
Obviously, those at IwM, and presumably our readers, would quickly answer this question: EVERYWHERE! That is, metacognitive skills should be taught across the college curriculum. However, there are some academics who believe (a) that our students have already learned effective learning strategies (Jairam & Kiewra, 2009), and (b) that metacognitive skills are not part of their curriculum. In response to the first belief, many of our incoming college and university students do not have effective metacognitive skills, so it is important that we teach these skills across all types of academic domains (Jairam & Kiewra, 2009). In response to the second belief, metacognition should be taught across all academic domains: mathematics, philosophy, chemistry, nursing, psychology, anthropology, and so on. I will go so far as to suggest that metacognitive skills are as fundamental to the learning process as reading skills and should be incorporated throughout the curriculum.

But herein lies the rub. I have yet to find a current model or research example of infusing metacognitive skill training across the curriculum. For example, in general studies education, why not have a metacognitive student learning objective that cuts across all academic domains? Or, in a first-year-success program that is often taught in teams, why not incorporate metacognitive skill training via thematic instruction (e.g., various academic disciplines are asked to center their instruction around a similar topic) among several introductory-level classes? That is, teach metacognition in General Psychology, Speech 101, Biology 101, etc. by using a threaded theme (e.g., racism) that requires teachers to teach metacognitive skills to help students learn a particular topic. In the end, it is clear that students in all disciplines could benefit from metacognitive skill training, yet neither researchers nor teachers have tackled these specific issues.

There Are Always More Questions Than Answers
I’ve done it again: I’ve written a blog that touches on what I believe to be an important issue in metacognition and higher education, one that needs far more research. As such, I must wrap up this blog (as I always do) with a few questions/challenges/inspirational ideas.

  1. Should metacognition, learning strategies, etc. be taught throughout the curriculum?
    1. If so, how?
  2. If not, should they be taught in a self-contained introduction to college course?
    1. Should all college students be required to take this course?
  3. What other models of introducing and teaching metacognitive skills might be more effective than either a self-contained course or a thematic curriculum approach?
  4. Once students have been introduced to metacognitive skills, what is the best method for continuing education of metacognitive skills?

References
Foster, N. L., Was, C. A., Dunlosky, J., & Isaacson, R. M. (2016). Even after thirteen class exams, students are still overconfident: The role of memory for past exam performance in student predictions. Metacognition and Learning, 1-19. doi:10.1007/s11409-016-9158-6

Jairam, D., & Kiewra, K. A. (2009). An investigation of the SOAR study method. Journal of Advanced Academics, 20(4), 602-629.

Richmond, A. S. (2016, February 16th). Are academic procrastinators metacognitively deprived? Retrieved from https://www.improvewithmetacognition.com/are-academic-procrastinators-metacognitively-deprived/

Richmond, A. S. (2015a, November 5th). A minute a day keeps the metacognitive doctor away. Retrieved from https://www.improvewithmetacognition.com/a-minute-a-day-keeps-the-metacognitive-doctor-away/

Richmond, A. S. (2015b, July 20th). How do you increase your students’ metacognition? Retrieved from https://www.improvewithmetacognition.com/how-do-you-increase-your-students-metacognition/

Richmond, A. S. (2014, August 28th). Meta-teaching: Improve your teaching while improving your students’ metacognition. Retrieved from https://www.improvewithmetacognition.com/meta-teaching-improve-your-teaching-while-improving-students-metacognition/


Are Academic Procrastinators Metacognitively Deprived?

By Aaron S. Richmond
Metropolitan State University of Denver

Academic Procrastinators Brief Overview

One of my favorite articles is Academic Procrastination of Undergraduates: Low Self-Efficacy to Self-Regulate Predicts Higher Levels of Procrastination by Robert M. Klassen, Lindsey L. Krawchuk, and Sukaina Rajani (2008). Klassen and colleagues state that “…the rate for problematic academic procrastination among undergraduates is estimated to be at least 70-95% (Ellis & Knaus, 1977; Steel, 2007), with estimates of chronic or severe procrastination among undergraduates between 20% and 30%” (p. 916). Academic procrastination is “the intentional delay of an intended course of action, in spite of an awareness of negative outcomes” (Steel, 2007, as cited in Klassen et al., 2008, p. 916). Based on these statistics, it is obvious that academic procrastination is an issue in higher education and that understanding what factors influence it and relate to its frequency is of utmost importance.

In their 2008 article, Klassen and colleagues conducted two studies: one to understand the relationships among academic procrastination, self-efficacy, self-regulation, and self-esteem, and one to examine these relationships within “negative procrastinators” (p. 915). In Study 1, they surveyed 261 undergraduate students. They found that academic procrastination was inversely correlated with college/university GPA, self-regulation, academic self-efficacy, and self-esteem. That is, as students’ frequency of academic procrastination went down, their GPA and self-reported scores of self-efficacy, self-esteem, and self-regulation went up. They also found that self-regulation, self-esteem, and self-efficacy predicted academic procrastination.

In Study 2, Klassen and colleagues (2008) were interested in whether there was a difference between negative and neutral procrastinators, that is, whether procrastinating caused a negative effect (e.g., a grade penalty for assignment tardiness) or a neutral effect (e.g., no penalty). They surveyed 194 undergraduates and asked them to rate how academic procrastination affected, either positively or negatively, specific academic tasks (reading, research, etc.). They then divided the sample into students who self-reported that academic procrastination affected them negatively in some way and students who reported that it affected them positively or neutrally. They found significant differences between negative and neutral procrastinators in GPA, daily procrastination, task procrastination, predicted class grade, actual class grade, and self-reported self-regulation. They also found that students most often procrastinated on writing tasks.

So Where Does Metacognition Come into Play?

Because self-regulation was a main focus of their study, I think Klassen and colleagues’ work gives us great insight into, and promise for, the potential role (either causal or predictive) that metacognition plays in academic procrastination. First, in Study 1, they used the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich, Smith, Garcia, & McKeachie, 1993) to measure self-efficacy for self-regulation. This MSLQ subscale assesses students’ awareness of knowledge and control of cognition (Klassen et al., 2008). It asks questions such as “If course materials are difficult to understand, I change the way I read the material.” and “I try to change the way I study in order to fit the course requirements and instructor’s teaching style.” (p. 920). As self-efficacy for self-regulation is a subset of metacognition, it is clear to me that these questions, indirectly if not directly, at least partially measure elements of metacognition.

This makes me wonder whether the results of Klassen et al.’s study would hold true for other forms of metacognition, such as metacognitive awareness. For example, how does academic procrastination relate to the metacognitive awareness factors that Schraw and Dennison (1994) suggest, such as knowledge of cognition (e.g., declarative knowledge, procedural knowledge, conditional knowledge) versus regulation of cognition (e.g., planning, information management, monitoring, evaluation)? Or, as Klassen et al. did not use the entire battery of measures in the MSLQ, how does academic procrastination relate to other aspects of the MSLQ, such as the Learning Strategies, Help Seeking, and Metacognitive Self-Regulation scales (Pintrich et al., 1993)? Or how might Klassen’s results relate to behavioral measures of metacognition, such as calibration, or to the Need for Cognition (Cacioppo & Petty, 1982)? These questions suggest that metacognition could play a very prominent role in academic procrastination.
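
For readers unfamiliar with calibration as a behavioral measure, a common formalization from the calibration literature (offered here as a general illustration, not as anything Klassen et al. computed) compares item-by-item confidence judgments with actual performance:

\[
\text{Absolute accuracy} = \frac{1}{n}\sum_{i=1}^{n}\lvert c_i - p_i \rvert, \qquad \text{Bias} = \frac{1}{n}\sum_{i=1}^{n}(c_i - p_i)
\]

where c_i is a student's confidence in item i (scaled 0 to 1), p_i is performance on that item (1 = correct, 0 = incorrect), and n is the number of items. Absolute accuracy near 0 indicates good calibration, and a positive bias indicates overconfidence.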

There Are Always More Questions Than Answers

To my knowledge, researchers have yet to replicate Klassen et al.’s (2008) study with an eye toward investigating whether metacognitive variables predict and mediate rates of academic procrastination. Therefore, I feel like I must wrap up this blog (as I always do) with a few questions/challenges/inspirational ideas☺

  1. What is the relationship among metacognitive awareness and academic procrastination?
  2. If there is a relationship between metacognition and academic procrastination, are there mediating and moderating variables that contribute to the relationship between metacognition and academic procrastination? For example, critical thinking? Intelligence? Past academic performance? The type of content and experience with this content (e.g., science knowledge)?
  3. Are there specific elements of metacognition (e.g., self-efficacy vs. metacognitive awareness vs. calibration vs. monitoring, etc.) that predict the frequency of academic procrastination?
  4. Can metacognitive awareness training reduce the frequency of academic procrastination?
  5. If so, what type of training best reduces academic procrastination?

References

Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42(1), 116-131.

Ellis, A., & Knaus, W. J. (1977). Overcoming procrastination. New York, NY: New American Library.

Klassen, R. M., Krawchuk, L. L., & Rajani, S. (2008). Academic procrastination of undergraduates: Low self-efficacy to self-regulate predicts higher levels of procrastination. Contemporary Educational Psychology, 33, 915-931. doi:10.1016/j.cedpsych.2007.07.001

Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the motivated strategies for learning questionnaire (MSLQ). Educational and Psychological Measurement, 53, 801–813.

Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19, 460-475.

Steel, P. (2007). The nature of procrastination: A meta-analytic and theoretical review of quintessential self-regulatory failure. Psychological Bulletin, 133, 65–94.


A Minute a Day Keeps the Metacognitive Doctor Away!

Aaron S. Richmond

Metropolitan State University of Denver

First and foremost, what I am about to discuss with you is not an educational or metacognitive teaching panacea (aka silver bullet). Rather, I would like to introduce and discuss the idea of using Classroom Assessment Techniques (affectionately known as CATs) as a form of metacognitive instructional strategy.

CATs: A Very Brief Review

Described best by Angelo and Cross (1993), CATs are designed to serve two purposes. First, they are meant as a formative assessment tool for teachers to understand how much their students are learning in the course. Second, CATs are designed to provide you, the teacher, feedback on the efficacy of your teaching strategies/methods. CATs are typically very brief and take very little instructional time (a minute or two). CATs are also created based on your assessment needs. For instance, if you are interested in assessing course-related knowledge and skills, you might use the one-minute paper, focused listening, or a background knowledge probe (see Cunningham & Moore, n.d.). If you are interested in assessing skill in analysis and critical thinking, you might use pro and con grids, analytic memos, or content, form, and function outlines (see Cunningham & Moore, n.d.). If you would like to assess your students’ skill in synthesis and creative thinking, you might use the one-sentence summary, concept maps, or approximate analogies. The list of different types of CATs goes on and on (see Cunningham & Moore, n.d., for a complete list and summary), so I would like to focus on previously established CATs that lend themselves to being quick, easy, and potentially effective metacognitive improvement tools. I like to call these the Metacognitive CATs, or MCATs!

The MCATs

Cunningham and Moore (n.d.) recently categorized 50 of Angelo and Cross’ (1993) CATs based on the purpose of the assessment needed (some described previously in this blog). Among these categories, Cunningham and Moore posit that some CATs are “Techniques for Assessing Learner Attitudes, Values, and Self-Awareness” (p. 4). Several of the CATs in this category lend themselves to being metacognitive awareness activities. Specifically, these include course-related self-confidence surveys, focused autobiographical sketches, muddiest point, productivity study time logs, and diagnostic learning logs. Let me take a moment to describe these potential MCATs (Angelo & Cross, 1993).

  • Course-related self-confidence surveys: At the end of class, have students fill out an anonymous questionnaire that assesses their confidence in mastering the material discussed in class.
  • Focused autobiographical sketches: At the end or beginning of class, have students write a brief statement on a “successful” study or learning strategy that they used to learn the class material.
  • Muddiest point: At the conclusion of a lesson, ask students to write down, in one or two sentences, the one concept they are still struggling with. You can use this to identify which concepts students are struggling with.
  • Productivity study time log: Have students keep a daily log recording both the amount of time spent studying for your course and the quality of that study time (see the sample log after this list). Students can complete this before class or at the beginning or end of class.
  • Diagnostic learning logs: Have students keep a log for assignments or assessments in which they identify which study methods and knowledge served them well, diagnose what they got wrong, and plan how to fix those errors in the future. These can be done before, during, or after class as well.
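
To make the productivity study time log concrete, here is one possible format (a hypothetical sketch of my own; Angelo and Cross do not prescribe a particular layout):

  Date       Time studied   Quality (1 = distracted, 5 = focused)   Notes
  Mon 9/12   45 min         3                                       Phone nearby; reread notes
  Tue 9/13   30 min         5                                       Self-tested with flashcards

Even these two core columns, time and self-rated quality, give students a concrete record to reflect on when connecting their study habits to their performance.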

Now, these MCATs are just CATs unless you help students connect the CAT to metacognition. The trick is, how do you do this? One answer may be direct feedback and reflection. That is, if you employ a CAT (e.g., muddiest point), you make it metacognitive by providing feedback directly to your students on their performance, having them elaborate and reflect on their answers, and providing constructive solutions and assistance for improving their metacognition. Let me illustrate using the muddiest-point MCAT. After your students turn in their muddiest point, take a few minutes to talk with them about why they are confused about the content. You may ask a student about their note-taking strategies in class, about their reading strategies when they read the chapter before class, about their attention to the lesson (i.e., the amount of cell phone or computer use), or about their use of other study strategies. Then, based on your conversation, have the student reflect on why they didn’t understand the course material and have them commit to changing just one thing about how they study. The next time you repeat the muddiest-point MCAT, the process starts over and you can revisit the same questions with your students. Incorporating direct feedback, reflection, and solutions into CATs may just turn them into MCATs.

Concluding Questions

To my knowledge, educational and metacognitive researchers have not investigated the efficacy of these potential MCATs as metacognitive instructional tools. Therefore, I feel like I must wrap up this blog with a few questions/challenges/inspirational ideas☺

  1. Can the use of MCATs increase metacognitive awareness in students?
  2. Can the use of MCATs increase metacognitive knowledge in students?
  3. Can the use of MCATs increase academic performance of students?
  4. If the answer to any of the previous questions is yes, then the question becomes: are some MCATs better than others, and can students transfer the use of these MCATs to other content domains?

References

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco, CA: Jossey-Bass.

Cunningham, K., & Moore, D. (n.d.). 50 CATS by Angelo and Cross. Retrieved from http://pages.uoregon.edu/tep/resou


How Do You Increase Your Students’ Metacognition?

Aaron S. Richmond

Metropolitan State University of Denver

 

How many times has a student come to you and said, “I just don’t understand why I did so bad on the test?” or “I knew the correct answer but I thought the question was tricky.” or “I’ve read the chapter 5 times and I still don’t understand what you are talking about in class.”? What did you say or do for these students? Did it prompt you to wonder what you can do to improve your students’ metacognition? I know many of us at Improve with Metacognition (IwM) started pursuing research on metacognition because of these very experiences. As such, I have compiled a summary of some of the awesome resources IwM bloggers have posted (see below). These instructional strategies can be generally categorized as either self-contained lessons (i.e., lessons that can teach some aspect of metacognition in one or two class sessions) or course-long strategies that require an entire semester to teach.

Self-Contained Instructional Strategies

In Stephen Chew’s blog, Metacognition and Scaffolding Student Learning, he suggests that one way to improve metacognitive awareness is through well-designed review sessions (Chew, 2015). First, Chew suggests that students metacognitively benefit from actively participating in study review sessions and that instructors should incentivize that participation. Second, he suggests that students should self-test before the review so that it is truly a review. Third, he suggests having students predict their exam scores based on their review performance and reflect on those predictions after the exam.

Ed Nuhfer (2015) describes a way to increase metacognition through role-play. Ed suggests that we can use Edward De Bono’s Six Thinking Hats method to train our students to increase their metacognitive literacy. In essence, using this method we can train our students to think in a factual way (white hat), to be positive and advocate for specific positions (yellow hat), to be cautious (black hat), to recognize all facets of our emotions (red hat), to be provocative (green hat), and to be reflective and introspective (blue hat). We can do this through several exercises in which students take turns wearing the different hats.

In David Westmoreland’s (2014) blog, he discusses a classroom exercise to improve metacognition. David created a “metacognitive lab that attempts to answer the question How do you know?” In the lab, he presents small groups of students with a handful of “truth” statements (e.g., Eggs are fragile.). Students must then take a statement and justify (on the board) how it is true. Next, the class eliminates the justifications they know not to be true. Finally, the students discuss with one another the process and why the statements were eliminated.

Course-Long Instructional Strategies

Chris Was (2014) investigated whether “variable weight-variable difficulty tests” would improve students’ calibration (i.e., knowing when you know something and knowing when you don’t). Chris has his students take several quizzes. In each quiz, students can weight each question for a varied number of points (e.g., question 1 is easy so I will give it 5 points, whereas question 4 is hard so I will only give it 2 points). Students then indicate whether they believe they got each question correct. After each quiz is graded, a teaching assistant goes over the quiz and discusses with the students why they weighted the questions the way they did and why they thought they would or would not get each question correct. Was found that this activity helped his students become better at knowing when they knew or did not know something.
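
To make “knowing when you know” concrete, here is a hypothetical worked example (my own illustration with made-up numbers, not Was’s actual scoring). Suppose a student predicts, for each of 20 quiz questions, whether she answered it correctly, with the following results:

                        Actually correct    Actually incorrect
  Predicted correct     a = 10              b = 4
  Predicted incorrect   c = 2               d = 4

A simple monitoring-accuracy score is the proportion of items on which prediction and outcome match: (a + d)/n = (10 + 4)/20 = .70. The improvement Was describes would show up as this proportion rising across successive quizzes.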

Similarly, Schumacher and Taraban (2015) discussed the use of the testing effect as a method to improve metacognition. They suggest that the results for testing as an instructional method are mixed: when students were repeatedly tested and were exposed to questions on multiple exams, only low-achieving students benefited metacognitively.

John Draeger (2015) uses just-in-time teaching in an attempt to improve metacognition. John asks students metacognitive prompting questions (e.g., What is the most challenging part of the reading?) prior to class, and they submit their answers before coming to class. Although he has not measured the efficacy of this method, students have responded positively to the process.

Parting Questions to Further this Important Conversation

There are many other instructional methods for increasing student metacognition described throughout IwM, both self-contained and semester-long. Please check them out!

But even considering all of what has been presented in this blog and available on IwM, I couldn’t help but leave you with some unanswered questions that I myself have:

  1. What other instructional strategies have you used to increase student metacognition?
  2. If you were to choose between a self-contained or a semester-long method, which one would you choose and why? That is, what factors would help you determine which method to use? Instructional goals? How closely the method relates to course content? Time commitment? Level of student metacognitive knowledge? Level of the course?
  3. Once you have chosen a self-contained or semester-long method, how should implementation differ? That is, what are the best practices when implementing a self-contained vs. a semester-long technique?
  4. Finally, instructional strategies for improving metacognition in higher education are often pulled from studies and experiments conducted in K-12 education. Are there any such studies you can think of that would be suitable for testing in higher education? If so, how and why?

References

Beziat, T. (2015). Goal monitoring in the classroom. Retrieved from https://www.improvewithmetacognition.com/goal-monitoring-in-the-classroom/

Chew, S. (2015). Metacognition and scaffolding student learning. Retrieved from https://www.improvewithmetacognition.com/metacognition-and-scaffolding-student-learning/

Draeger, J. (2015). Using just-in-time assignments to promote metacognition. Retrieved from https://www.improvewithmetacognition.com/using-just-in-time-assignments-to-promote-metacognition/

Nilson, L. B. (2015). Metacognition and specifications grading: The odd couple? Retrieved from https://www.improvewithmetacognition.com/metacognition-and-specifications-grading-the-odd-couple/

Nuhfer, E. (2015). Developing metacognitive literacy through role play: Edward De Bono’s six thinking hats. Retrieved from https://www.improvewithmetacognition.com/developing-metacognitive-literacy-through-role-play-edward-de-bonos-six-thinking-hats/

Schumacher, J., & Taraban, R. (2015). To test or not to test: That is the metacognitive question. Retrieved from https://www.improvewithmetacognition.com/to-test-or-not-to-test-that-is-the-metacognitive-question/

Was, C. (2014). Testing improves knowledge monitoring. Retrieved from https://www.improvewithmetacognition.com/testing-improves-knowledge-monitoring/

Westmoreland, D. (2014). Science and social controversy—A classroom exercise in metacognition. Retrieved from https://www.improvewithmetacognition.com/science-and-social-controversy-a-classroom-exercise-in-metacognition/

 


So Your Students Think They Are Left-Brained Thinkers or Kinesthetic Learners: Please God, No! How Metacognition Can Explain Students’ Misconceptions

By Aaron S. Richmond, Hannah M. Rauer, and Eric Klein

Metropolitan State University of Denver

Have you heard students say, “We only use 10% of our brain!” or “MMR shots cause Autism” or “My cousin has ESP…no seriously!” or “I am really good at multi-tasking.” or “I have high bodily-kinesthetic intelligence!”? Sadly, the list can go on, and on, and on. Our students, and the general public for that matter, hold many misconceptions and preformed, inaccurate naïve theories of the world, which often impair learning in the classroom (Dochy et al., 1999). These misconceptions are pervasive and extremely hard to change (Lilienfeld et al., 2009). Our research suggests that metacognition may be a key to understanding misconceptions.

My undergraduate students and I sought to understand the role metacognition could play in susceptibility to common psychology and education misconceptions. Prior to our study, most research in this area focused on the persistence of misconceptions (e.g., Kowalski & Taylor, 2009), how they relate to critical thinking skills (Taylor & Kowalski, 2004), or how to reduce misconceptions through direct instruction (e.g., Glass et al., 2008). However, our study was the first to investigate how metacognitive beliefs (e.g., metacognitive awareness, need for cognition, cognitive and learning strategy use) and actual metacognitive performance may predict the prevalence of psychological and educational misconceptions.

We gave over 300 undergraduate freshmen a 65-item psychological and educational misconceptions inventory whose items were pooled from several studies (e.g., Amsel et al., 2009; Standing & Huber, 2003). We assessed metacognitive beliefs using the Need for Cognition Scale (NCS; Cacioppo, Petty, Feinstein, & Jarvis, 1996), the Memory Self-Efficacy Questionnaire (MSEQ; Berry, West, & Dennehey, 1989), the Metacognitive Awareness Inventory (MAI; Schraw & Dennison, 1994), and the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich, Smith, Garcia, & McKeachie, 1991), along with one direct measure of metacognition: calibration. Calibration is the degree to which learners understand what they know and what they do not know.

We found that metacognitive variables were highly predictive of students’ susceptibility to believing in educational and psychological misconceptions. Interestingly, the most powerful predictor was the direct measure of metacognition (i.e., calibration as measured through gamma). That is, the more accurate students were at knowing when they knew or did not know something, the less they believed in misconceptions. Also, the higher students scored on need for cognition, the more advanced their beliefs about regulating cognition, and the stronger their self-efficacy for learning and control-of-learning beliefs, the less susceptible they were to believing in misconceptions.
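
For readers who have not encountered gamma, the Goodman-Kruskal gamma commonly used in calibration research compares all pairs of items on which a learner’s confidence judgments differ:

\[
\gamma = \frac{C - D}{C + D}
\]

where C is the number of concordant pairs (the item given the higher confidence judgment is the one answered correctly) and D is the number of discordant pairs (the item given the higher judgment is the one answered incorrectly). Gamma ranges from -1 to +1, and values near +1 indicate accurate monitoring, which is the sense of calibration used above.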

What does this research tell us? We think it is a first step in understanding the role metacognition plays in conceptual development (both inaccurate and accurate). Second, if teachers stress the importance of metacognitive development and teach students how to improve their metacognition, then one of the added benefits may be that students develop more accurate conceptions. The natural progression of this research is to experimentally manipulate metacognitive instruction and see whether it reduces educational and psychological misconceptions.

References

Amsel, E., Johnston, A., Alvarado, E., Kettering, J., Rankin, L., & Ward, M. (2009). The effect of perspective on misconceptions in psychology: A test of conceptual change theory. Journal of Instructional Psychology, 36(4), 289-295.

Berry, J. M., West, R. L. & Dennehey, D. M. (1989). Reliability and validity of the Memory Self-Efficacy Questionnaire. Developmental Psychology, 25(5), 701-713. doi:10.1037/0012-1649.25.5.701

Kowalski, P., & Taylor, A. K. (2009). The effect of refuting misconceptions in the introductory psychology class. Teaching of Psychology, 36(3), 153-159. doi:10.1080/00986280902959986

Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1991). A manual for the use of the motivated strategies for learning questionnaire (MSLQ). Ann Arbor, MI: University of Michigan.

Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460-475. doi:10.1006/ceps.1994.1033

Standing, L. G., & Huber, H. (2003). Do psychology courses reduce beliefs in psychological myths? Social Behavior & Personality: An International Journal, 31(6), 585-585. doi:10.2224/sbp.2003.31.6.585

Taylor, A. K., & Kowalski, P. (2004). Naïve psychological science: The prevalence, strength, and sources of misconceptions. The Psychological Record, 54(1), 15-25.


The Metacognitive Syllabus!

By Aaron S. Richmond, Ph.D.
Metropolitan State University of Denver

This blog may be like no other in Improve with Metacognition (IwM): I am asking you, the readers, to actively participate. Yes, I mean YOU, YOU, and YOU☺. But let me clarify: I do not ask rhetorical questions. As such, please respond using the comment function in IwM or tweet your answers to the three questions in this blog.

Question #1: How can we use the syllabus as a metacognitive tool?
As delineated by scores of researchers and teachers, the syllabus can be many things. The syllabus can be a contract (Slattery & Carlson, 2005); these elements typically include policies on attendance, late work, ethics, grading, etc. The syllabus can also be a permanent record (Parkes & Harris, 2002); permanent-record elements include course objectives, assessment procedures, the course description, and course content. The syllabus is also a communication device that can set the tone for your class and is an opportunity to gain your students’ trust and respect by modeling your pedagogical beliefs (Bain, 2004).

Because the syllabus can be many things, it is very possible that it can also serve as a metacognitive tool. Several researchers suggest that the syllabus is a cognitive map (Parkes & Harris, 2002) and a learning tool (Matejka & Kurke, 1994). These elements typically include a description of how to succeed in the course, common pitfalls and misconceptions that occur in the course, campus resources that can assist students in learning (e.g., the writing center), a teaching philosophy, and embedded explanations of class assignments, structure, and student learning. If we consider the syllabus in this context, I believe we can easily incorporate metacognitive elements. For instance, in my personal teaching philosophy, I specifically mention my focus on improving metacognition. I also include at least one student learning objective that is metacognitively based, with assignments designed to assess it. For example: “Students will understand what metacognition is and how it improves their own learning” (assessed by experiential learning assignment 1 and the comprehensive exam), or “Students will understand what it means to develop a culture of metacognition in the classroom” (assessed by classroom observation and the mid-term exam). Finally, I actively incorporate course content on learning strategies and the metacognitive explanations for those strategies, which sets the tone for the importance of metacognition in the class.

Question #2: How are you using the syllabus as a metacognitive tool?
I really want to hear from you on how you may be using the syllabus as a metacognitive tool. For example, what specific statements do you include related to metacognition goals? What assignments do you mention that link to metacognitive development?

Question #3: If the syllabus can be used as a metacognitive tool, how do we know it is effective?
What is your answer to this question? My answer centers on the Scholarship of Teaching and Learning. That is, we don’t yet have empirical evidence that the syllabus is a metacognitive tool. That doesn’t mean it can’t be, or isn’t already, in practice. But I think you (we) should take up this challenge and investigate this issue. The syllabus can have a profound impact on student learning, instruction, and student ratings of instruction (Richmond, Becknell, Slattery, Morgan, & Mitchell, 2015; Saville, Zinn, Brown, & Marchuk, 2010), so let’s investigate how to improve the syllabus through metacognition.


References
Bain, K. (2004). What the best college teachers do. Cambridge, MA: Harvard University Press.
Matejka, K., & Kurke, L. B. (1994). Designing a great syllabus. College Teaching, 42(3), 115-117. doi:10.1080/87567555.1994.9926838
Parkes, J., & Harris, M. B. (2002). The purposes of a syllabus. College Teaching, 50(2), 55-61. doi:10.1080/87567550209595875
Richmond, A. S., Becknell, J., Slattery, J., Morgan, R., & Mitchell, N. (2015, August). Students’ perceptions of a student-centered syllabus: An experimental analysis. Poster presented at the annual meeting of the American Psychological Association, Toronto, Canada.
Saville, B. K., Zinn, T. E., Brown, A. R., & Marchuk, K. A. (2010). Syllabus detail and students’ perceptions of teacher effectiveness. Teaching of Psychology, 37, 186-189. doi:10.1080/00986283.2010.488523
Slattery, J. M., & Carlson, J. F. (2005). Preparing an effective syllabus: Current best practices. College Teaching, 53, 159-164. doi:10.3200/CTCH.53.4.159-164


Measuring Metacognitive Judgments

In Gregg Schraw’s (2009) chapter, Measuring Metacognitive Judgments, he artfully provides a taxonomy of calibration measures that attempt to assess metacognitive judgments of learning. For more information, follow the hyperlink below.

Schraw, G. (2009). Measuring metacognitive judgments. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education (p. 415). New York, NY: Routledge.


Effects of Strategy Training and Incentives on Students’ Performance, Confidence, and Calibration

“This study examined the effect of strategy instruction and incentives on performance, confidence, and calibration accuracy. Individuals (N = 107) in randomly assigned treatment groups received a multicomponent strategy instruction intervention, financial incentives for high performance, or both. The authors predicted that incentives would improve performance, while strategy instruction would improve performance, confidence, and calibration accuracy as a result of better monitoring and self-regulation of learning. The authors compared pre- and posttest items and 20 new posttest-only items. They found significant effects for strategy training on performance, confidence, and calibration accuracy, as well as the interaction between strategy training and time on calibration accuracy. Incentives improved performance and calibration accuracy, either directly, or through an interaction with strategy training. Implications for future research are discussed.” For more information about this article, follow the link below.

Gutierrez, A. P., & Schraw, G. (2014). Effects of strategy training and incentives on students’ performance, confidence, and calibration. The Journal of Experimental Education, (ahead-of-print), 1-19.


Four cornerstones of calibration research: Why understanding students’ judgments can improve their achievement

“The target articles make significant advances in our understanding of students’ judgments of their cognitive processes and products. In general, the advances are relative to a subset of common themes, which we call the four cornerstones of research on metacognitive judgments. We discuss how the target articles build on these cornerstones (judgment bases, judgment accuracy, judgment reliability, and control) and how they are relevant to improving student achievement.” (p. 58) For more information about this article, follow the link below.

Dunlosky, J., & Thiede, K. W. (2013). Four cornerstones of calibration research: Why understanding students’ judgments can improve their achievement. Learning and Instruction, 24, 58-61.


Advancing Task Involvement, Intrinsic Motivation and Metacognitive Regulation in Physical Education Classes: The Self-Check Style of Teaching Makes a Difference

In a metacognitive field study, Papaioannou, Theodosiou, Pashali, and Digelidis (2012) found that having 6th-grade students use metacognitive techniques (self-check) significantly improved several mastery-oriented variables over a practice technique in a physical education course. For more information about the article, please see the reference below.

Papaioannou, A., Theodosiou, A., Pashali, M., & Digelidis, N. (2012). Advancing task involvement, intrinsic motivation and metacognitive regulation in physical education classes: The self-check style of teaching makes a difference. Advances in Physical Education, 2(3), 110-118.


A review of research on metacognition in science education: current and future directions

In an extremely comprehensive review, Zohar and Barzilai (2013) analyzed 178 studies of metacognition in science education (mainly K-12). They identified several key trends and made suggestions for future research. One of their findings was that the use of metacognitive cues was the most common metacognitive intervention for learning science content. For more information, please see the reference below.

Zohar, A., & Barzilai, S. (2013). A review of research on metacognition in science education: Current and future directions. Studies in Science Education, 49(2), 121-169. doi:10.1080/03057267.2013.847261