Psychological Myths are Hurting Metacognition

by Dana Melone, Cedar Rapids Kennedy High School

Every year I begin my psychology class by asking students to respond to a set of true-or-false statements about psychology. These statements focus on widespread beliefs about psychology and the capacity to learn that are untrue or have been misinterpreted. Here are just a few:

  • Myth 1: People learn better when we teach to their true or preferred learning styles
  • Myth 2: People are more right brained or left brained
  • Myth 3: Personality tests can determine your personality type

Many of these myths are still widely believed and acted upon: in the classroom, in staff professional development, in workplace employment decisions, and more. Psychological myths in the classroom hurt metacognition and learning. Each of these myths leads us to internalize a particular view of ourselves that we believe must be true, and that view seeps into our cognition as we examine our strengths and weaknesses.

Myth 1: People learn better when we teach to their true or preferred learning styles

The learning styles myth persists. A Google search on learning styles required me to proceed to page three of the results before finding any information on the fallacy of the theory. The first two pages contained links to tests for finding your learning style and advice on using your learning style as a student and at work. In his 1983 book Frames of Mind, Howard Gardner developed the theory of multiple intelligences. His theory proposes that we have multiple types of intelligences (kinesthetic, auditory, visual, etc.) that work in tandem to help us learn. In the last 30 years his idea has become synonymous with learning styles, which imply that each of us has one predominant way of learning. There is no research to support this interpretation, and Gardner himself has discussed the misuse of his theory. If we perpetuate the learning styles myth as educators, employees, or employers, we set ourselves, and the people we influence, up to believe we can only learn in the fashion that best suits us. This is a danger to metacognition. For example, if I am examining why I did poorly on my last math test and I believe I am a visual learner, I may attribute my poor grade to my instructor’s use of verbal presentation instead of accurately reflecting on the errors I made in studying or calculation.

image of human brain with list of major functions of the left and right hemispheres

Myth 2: People are more right brained or left brained

Research on the brain indicates possible differences between right- and left-hemisphere functions. Most research to date identifies the left hemisphere as our center for spoken and written language, while the right hemisphere handles visual, imagery, and imaginative functions, among others. The research does not indicate, however, that either side works alone on these tasks. This knowledge of the brain has led to the myth that if we perceive ourselves as better at a particular topic, art for example, we must be more right brained. In one of numerous studies dispelling this myth, researchers used magnetic resonance imaging (MRI) to examine the brain while participants completed various “typical” right- and left-brained tasks. This research clearly showed what psychologists and neurologists have known for some time: the basic functions may lie in those areas, but the two sides of the brain work together to complete these tasks (Nielsen, Zielinski, et al., 2013). How is this myth hurting metacognition? Like Myth 1, if we believe we are predisposed to stronger functioning on particular tasks, we may avoid tasks that don’t align with that strength. We may also reach incorrect metacognitive conclusions, assuming we function poorly on something because of our “dominant side.”

Myth 3: Personality tests can determine your personality type

In the last five years I have been in a variety of work-related scenarios where I have been given a personality test to take. These have ranged from tests providing me with a color that represents me to tests providing a series of letters. In applying for jobs, I have also been asked to complete a personality inventory that, I can only assume, is used to weed out people who are felt not to fit the job at hand. The discussion and reflection process following these tests is always the same: how might your results indicate a strength or weakness for you in your job and in your life, and how might they affect how you work with people who do and do not match the symbolism you were given? Research shows that we tend to agree with the traits we are given if those traits contain a general collection of mostly positive, along with a few somewhat less positive, characteristics. However, we need to examine why we are agreeing. We tend not to think deeply when confirming our own beliefs, and we may be accidentally eliminating situational aspects from our self-metacognition. This is also true when we evaluate others. We shouldn’t let superficial assumptions, based on our awareness of our own or someone else’s personality test results, overly control our actions. For example, it would be short-sighted to make hiring or promotion decisions based on the assumption that, because someone is shy, they would not do well in a job that requires public appearances.

Dispelling the Myths

The good news is that metacognition itself is a great way to get students and others to let go of these myths. I like to address these myths head on. A quick true/false exercise can get students thinking about their current beliefs. Then I get them talking and connecting those beliefs to better decision-making processes. For example, I ask: what is the difference between a theory or a correlation and an experiment? Understanding what makes good research, versus what might just be someone’s idea based on observation, is a great way to get students thinking critically about these myths, as well as about all research and ideas they encounter. Another great way to induce metacognition on these topics is to have students take quizzes that purport to determine their learning style, brain side, and personality. Discuss the results openly and engage students in critical thinking about the tests and their results. How and why do they look to confirm the results? More importantly, what are examples of the results not being true for them? There are also a number of excellent TED Talks, articles, and podcasts on these topics that get students thinking in terms of research instead of personal examples. Let’s take it beyond students and get the research out to educators and companies as well. Here are just a few resources you might use:

Hidden Brain Podcast: Can a Personality Test Tell Us About Who We Are?: https://www.npr.org/2017/12/04/568365431/what-can-a-personality-test-tell-us-about-who-we-are

10 Myths About Psychology Debunked: Ben Ambridge: https://tedsummaries.com/2015/02/12/ben-ambridge-10-myths-about-psychology-debunked/

The Left Brain VS. Right Brain Myth: Elizabeth Waters: https://ed.ted.com/lessons/the-left-brain-vs-right-brain-myth-elizabeth-waters

Learning Styles and the Importance of Critical Self-Reflection: Tesia Marshik: https://www.youtube.com/watch?v=855Now8h5Rs

The Myth of Catering to Learning Styles: Joanne K. Olsen: https://www.nsta.org/publications/news/story.aspx?id=52624


Metacognitive Instruction: Suggestions for Faculty

by Audra Schaefer, Ph.D., Assistant Professor of Neurobiology & Anatomical Sciences, University of Mississippi Medical Center

Intro: This guest editor miniseries, “The Evolution of Metacognition”, examines the metacognition of students at various stages of education (undergraduate, graduate, and professional), so it is fitting to wrap up the miniseries with a discussion of faculty and metacognitive instruction. As educators we often find ourselves focused on enhancing the metacognition of our students. Yet, in order to continue to improve and develop as teachers, it is important for us to apply similar metacognitive approaches to our own teaching. This final post of the series includes my experiences in being a metacognitively aware educator and evidence-based suggestions for educators at any level who are looking to employ metacognition in their teaching. ~ Audra Schaefer, PhD, guest editor

————————————————————————————————

Being a teacher is not simple. Teaching goes well beyond a content expert delivering information into the minds of students. Harden and Crosby (2000) describe twelve roles of teachers that can be grouped into the broader roles of information provider, role model, facilitator, examiner, planner, and resource developer. Effectively carrying out all of these roles can become a time-consuming process, and while our focus is on our students, it’s important for us to step back and reflect on ourselves. How effectively are we carrying out the various roles we manage in the process of teaching?

This seemingly straightforward question can be complicated to answer. Assessing yourself in the various roles encompassed by teaching may fall to the side when an instructor is unsure how to perform those roles in the first place. Educators in higher education often receive little to no explicit training in effective teaching methods and likely have little exposure to learning theories. So, naturally, instructors end up doing what they experienced as students or mimicking what their colleagues are doing in class. While this isn’t inherently problematic, doing something simply because that is what you experienced, or even because it works for someone else, is insufficient reason to keep doing it.

A metacognitive instructor asks why they are proceeding in a particular manner. How does this approach help you reach the goals you have for your course? How does it help your students achieve the objectives you’ve set for them? By the time an individual begins teaching at any level they’ve encountered plenty of situations that required them to apply problem-solving skills, be it in daily life or applying the scientific method to research. Why then as educators would we not apply the same concepts to our teaching?

What additional kinds of questions can, and perhaps should, metacognitive instructors be asking themselves? If we consider that planning, monitoring, and evaluating are all core components of metacognitive regulation (Schraw et al., 2006), these provide logical stages at which instructors can reflect. In my own experience, being reflective at each stage is incredibly useful for improving the same session in future semesters, as well as for preparing other similar sessions. The following are examples of questions I frequently ask myself; if you’re interested in finding additional questions, Kimberly Tanner (2012) has a well-written article that provides numerous questions for faculty to ask students, and to ask themselves, in an effort to promote metacognition.

When planning a course or class session, consider not only what your goals are for that course or session, but how you plan to reach those goals.  What do students already know about the topic for that session and how do you know what they know? 

In the middle of a class session it is useful to keep tabs on the pace and to be adaptable in the moment, making improvements as you go. What do I notice about student behavior in a given session, and what might be the cause? After a session, and after a course is completed, I also take time to think about how I want to change things in the future and why. A common thread here is the “why”. Why do I do what I do? Why did the students respond the way they did? Why would I make a particular change?

the word "Why" written using question marks

Finally, I’ve found modeling metacognitive behaviors to be quite useful with my students (Tanner, 2012).  When discussing challenging topics with students I make a deliberate effort to think about what aspects of that topic gave me difficulty when I first learned it, and then I make a point to explain that topic to the students in a way that helped me (or past students) make sense of it. 

For example, in neuroanatomy there are numerous pathways in the brain and spinal cord that become confusing very quickly.  I frequently teach these by creating the same simplified drawings I made as a student to sort out these pathways.  I also encourage students to draw with me in class to engage them during a lecture, and numerous students have commented that they begin drawing on their own while studying, an approach that they had not previously used.

I also make a point to model how to proceed when faced with a limitation in my knowledge. This has become an important goal for me in my teaching because I’ve noticed many students struggle to recognize what they do and do not know, or struggle with how to proceed when they don’t know something. Earlier in this miniseries, Dr. Husmann and Dr. Hoffman each provided examples of undergraduate and medical students struggling with metacognition, demonstrating that these difficulties span multiple educational levels. We can shape our instruction to students of any educational stage to gain a sense of their current skill level(s) and adjust our teaching to help improve those skills.

In class, when I am inevitably asked a question to which I do not have an answer, I own it and don’t make a big deal out of it. I share what I do know and then proceed to either have them help me research the answer, or I follow up with them to be sure that we’ve all learned from it.  Although I haven’t explicitly assessed whether it’s having any effect on students, I am getting little bits of data to suggest there are at least a few students who’ve noticed.  One of my favorites is this comment from a medical student after completing a neuroscience course I taught, “If she has taught me anything (besides a lot of Neuro) is that it is okay to not know the answer, we aren’t always going to know everything and it’s completely okay.” 

As educators, at any level, we serve as role models. If we expect our students to become better at regulating their learning, we should expect the same of ourselves. In the first post of this miniseries Caroline Mueller discussed how as a graduate student she is working to implement metacognitive approaches to both her learning and teaching. Regardless of your experience with teaching, metacognitive instruction can help you continue to improve. Our attitudes and mindset are important for setting the tone for a classroom environment. If we aim to help our students develop their metacognitive skills, we should aim to do so ourselves.

Harden, R. M., & Crosby, J. (2000). AMEE Guide No 20: The good teacher is more than a lecturer – the twelve roles of the teacher. Medical Teacher, 22(4), 334-347.

Schraw, G., Crippen, K. J., & Hartley, K. (2006). Promoting self-regulation in science education: Metacognition as a part of a broader perspective on learning. Research in Science Education, 36, 111-139.

Tanner, K.D. (2012). Promoting Student Metacognition. CBE – Life Sciences Education, 11, 113-120. [https://www.improvewithmetacognition.com/promoting-student-metacognition/]


Enhancing Medical Students’ Metacognition

by Leslie A. Hoffman, PhD, Assistant Professor of Anatomy & Cell Biology, Indiana University School of Medicine – Fort Wayne

The third post in this guest editor miniseries examines how metacognition evolves (or doesn’t) as students progress into professional school. Despite the academic success necessary to enter professional programs such as medical school or dental school, there are still students who lack the metacognitive awareness and skills to confront the increased academic challenges these programs impose. In this post Dr. Leslie Hoffman reflects on her interactions with medical students and incorporates data she has collected on self-directed learning and student reflections on their study strategies and exam performance. ~ Audra Schaefer, PhD, guest editor

————————————————————————————————-

The beginning of medical school can be a challenging time for medical students.  As expected, most medical students are exceptionally bright individuals, which means that many did not have to study very hard to perform well in their undergraduate courses.  As a result, some medical students arrive in medical school without well-established study strategies and habits, leaving them overwhelmed as they adjust to the pace and rigor of medical school coursework.  Even more concerning is that many medical students don’t realize that they don’t know how to study, or that their study strategies are ineffective, until after they’ve performed poorly on an exam.  In my own experience teaching gross anatomy to medical students, I’ve found that many low-performing students tend to overestimate their performance on their first anatomy exam (Hoffman, 2016).  In this post I’ll explore some of the reasons why many low-performing students overestimate their performance and how improving students’ metacognitive skills can help improve their self-assessment skills along with their performance.

Metacognition is the practice of “thinking about thinking” that allows individuals to monitor and make accurate judgments about their knowledge, skills, or performance. A lack of metacognitive awareness can lead to overconfidence in one’s knowledge or abilities and an inability to identify areas of weakness. In medicine, metacognitive skills are critical for practicing physicians to monitor their own performance and identify areas of weakness or incompetence that could otherwise lead to medical errors and patient harm. Unfortunately, studies have found that many physicians seem to have limited capacity for assessing their own performance (Davis et al., 2006). This lack of metacognitive awareness among physicians highlights the need for medical schools to teach and assess metacognitive skills so that medical students learn how to monitor and assess their own performance.

Cartoon of a brain thinking about a brain

In my gross anatomy course, I use a guided reflection exercise designed to introduce metacognitive processes by asking students to think about their study strategies in preparation for the first exam and how they determine whether those strategies are effective. The reflective exercise includes two parts: a pre-exam reflection and a post-exam reflection.

The pre-exam reflection asks students to identify the content areas in which they feel most prepared (i.e., their strengths) and the areas in which they feel least prepared (i.e., their weaknesses). Students also discuss how they determined what they needed to know for the upcoming exam and how they went about addressing their learning needs. Students are also asked to assess their confidence level and make a prediction about their expected performance on the upcoming exam. After receiving their exam scores, students complete a post-exam reflection, which asks them to discuss what changes, if any, they intend to make to their study strategies based on their exam performance.

My analysis of the students’ pre-exam reflection comments found that the lowest performing students (i.e., those who failed the exam) often felt fairly confident about their knowledge and predicted they would perform well, only to realize during the exam that they were grossly underprepared. This illusion of preparedness may result from ineffective study strategies that give students a false sense of learning. Such strategies often included passive activities such as re-watching lecture recordings, re-reading notes, or looking at flash cards. In contrast, none of the highest performing students in the class overestimated their exam grade; in fact, many of them vastly underestimated their performance. A qualitative analysis of students’ post-exam reflection responses indicated that many of the lowest performing students intended to make drastic changes to their study strategies prior to the next exam. Such changes included utilizing different resources, focusing on different content, or incorporating more active learning strategies such as drawing, labeling, or quizzing. This suggests that the lowest performing students hadn’t realized their study strategies were ineffective until after they’d performed poorly on the exam. This lack of insight demonstrates a deficiency in metacognitive awareness that is pervasive among the lowest performing students and may persist in these individuals beyond medical school and into their clinical practice (Davis et al., 2006).

So how can we, as educators, improve medical students’ (or any students’) metacognitive awareness to enable them to better recognize their shortcomings before they perform poorly on an exam?  To answer this question, I turned to the highest performing students in my class to see what they did differently.  My analysis of reflection responses from high-performing students found that they tended to monitor their progress by frequently assessing their knowledge as they were studying.  They did so by engaging in self-assessment activities such as quizzing, either using question banks or simply trying to recall information they’d just studied without looking at their notes.  They also tended to study more frequently with their peers, which enabled them to take turns quizzing each other.  Working with peers also provided students with feedback about what they perceived to be the most relevant information, so they didn’t get caught up in extraneous details. 

The reflective activity itself is a technique to help students develop and enhance their metacognitive skills.  Reflecting on a poor exam performance, for example, can draw a student’s attention to areas of weakness that he or she was not able to recognize, or ways in which his or her preparation may have been inadequate.   Other techniques for improving metacognitive skills include the use of think-aloud strategies in which learners verbalize their thought process to better identify areas of weakness or misunderstanding, and the use of graphic organizers in which learners create a visual representation of the information to enhance their understanding of relationships and processes (Colbert et al., 2015). 

Ultimately, the goal of improving medical students’ metacognitive skills is to ensure that these students will go on to become competent physicians who are able to identify their areas of weakness, create a plan to address their deficiencies, and monitor and evaluate their progress to meet their learning goals.   Such skills are necessary for physicians to maintain competence in an ever-changing healthcare environment.

Colbert, C.Y., Graham, L., West, C., White, B.A., Arroliga, A.C., Myers, J.D., Ogden, P.E., Archer, J., Mohammad, S.T.A., & Clark, J. (2015).  Teaching metacognitive skills: Helping your physician trainees in the quest to ‘know what they don’t know.’  The American Journal of Medicine, 128(3), 318-324.

Davis, D. A., Mazmanian, P. E., Fordis, M., Harrison, R., Thorpe, K. E., & Perrier, L. (2006). Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. JAMA, 296, 1094-1102.

Hoffman, L. A. (2016). Prediction, performance, and adjustments: Medical students’ reflections on the first gross anatomy exam. The FASEB Journal, 30(1 Supplement), 365.2.


Metacognition v. pure effort: Which truly makes the difference in an undergraduate anatomy class?

by Polly R. Husmann, Ph.D., Assistant Professor of Anatomy & Cell Biology, Indiana University School of Medicine – Bloomington

Intro: The second post of the “The Evolution of Metacognition” miniseries is written by Dr. Polly Husmann, who reflects on her experiences teaching undergraduate anatomy students early in their college years, a time when students have varying metacognitive abilities and awareness. Dr. Husmann also shares data demonstrating a relationship between students’ metacognitive skills, effort levels, and final course grades. ~ Audra Schaefer, PhD, guest editor

————————————————————————————————–

I would imagine that nearly every instructor is familiar with the following situation: after the first exam in a course, a student walks into your office looking distraught and states, “I don’t know what happened. I studied for HOURS.” We know that metacognition is important for academic success [1, 2], but undergraduates often struggle to identify study strategies that work or to determine whether they actually “know” something. In addition to metacognition, recent research has shown that repeated recall of information [3] and immediate feedback [4] improve learning efficiency. Yet in large, content-heavy undergraduate classes, both practices are difficult to implement. Are there ways we might encourage students to develop these skills without taking up more class time?

Online Modules in an Undergraduate Anatomy Course

I decided to take a look at this through our online modules. Our undergraduate human anatomy course (A215) is a large (400+) course mostly taken by students planning to go into the healthcare fields (nursing, physical therapy, optometry, etc.). The course comprises both a lecture (3x/week) and a lab component (2x/week), with about forty students in each lab section. We use the McKinley & O’Loughlin text, which comes with access to McGraw-Hill’s Connect website. This website includes an e-book, online quizzes with instant grading, and A&P Revealed (a virtual dissection platform with images of cadavers). Also available through the McGraw-Hill Connect site are LearnSmart study modules.

These modules, along with the related electronic textbook, were incorporated into the course as optional extra-credit assignments about five years ago as a way to keep students engaging with the material and (hopefully) make them less likely to cram right before the tests. Each online module asks questions over a chapter or section of a chapter using a variety of multiple-choice, matching, rank-order, fill-in-the-blank, and multiple-answer questions. For each question, students are not only asked for their answer but also asked to rate their confidence in that answer on a four-point Likert scale. After the student has indicated his/her confidence level, the module provides immediate feedback on the accuracy of the response.

During each block of material (4 total blocks/semester) in the Fall 2017 semester, 4 to 9 LearnSmart modules were available; after each block was completed, 2 were chosen by the instructor to count for up to two points of extra credit each (a total of 16 points out of 800). Given the frequency of the opening scenario, I decided to take a look at these data and see what correlations existed between the LearnSmart data and student outcomes in our course.

Results

The graphs (shown below) illustrate that the students who got As and Bs on the first exam had done almost exactly the same number of LearnSmart practice questions, which was nearly fifty more questions than the students who got Cs, Ds, or Fs. However, by the end of the course, the students who ultimately got Cs were doing almost exactly the same number of practice questions as those who got Bs! So they were putting the same effort into the practice questions; where, then, is the problem?

The big difference is seen in the percentage of these questions for which each group was metacognitively aware (i.e., confident when giving a correct answer or unconfident when giving an incorrect answer). While the students who received Cs were answering plenty of practice questions, their metacognitive awareness (accuracy) was often the worst in the class! These are your hard-working students who put in plenty of time studying but don’t really know when they accurately understand the material or how to study efficiently.
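To make this accuracy measure concrete, here is a minimal sketch in Python of how it can be computed. The data values and the collapse of the four-point confidence scale into high/low are invented for illustration; this is not the actual LearnSmart export format.

```python
import numpy as np

# Hypothetical per-question records for one student (invented, not real LearnSmart data):
# correct   = 1 if the answer was right, 0 if wrong
# confident = 1 if the 4-point Likert confidence rating was high (3-4), 0 if low (1-2)
correct   = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
confident = np.array([1, 0, 0, 1, 1, 1, 1, 1, 1, 0])

# A response is metacognitively accurate when confidence matches correctness:
# confident and correct, or unconfident and incorrect.
accuracy = np.mean(correct == confident)
print(f"metacognitive accuracy: {accuracy:.0%}")  # 60% for this example
```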

Graphs showing questions completed as well as accuracy of self-assessment.

The statistics further confirmed that both the students’ effort on these modules and their ability to accurately rate whether or not they knew the answer to a LearnSmart practice question were significantly related to their final outcome in the course (see right-hand column graphs). In addition to these two direct effects, there was also an indirect effect of effort on final course grades through metacognition. So students who put in the effort on these practice questions with immediate feedback do generally improve their metacognitive awareness as well. In fact, over 30% of the variation in final course grades could be predicted from these two variables from the online modules alone.
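For readers who want to see the shape of this kind of analysis, below is a minimal sketch of a product-of-coefficients mediation estimate using two ordinary least squares regressions. The data are simulated and every coefficient is invented; this illustrates the direct-versus-indirect logic of the path diagram, not a reproduction of the study’s statistics.

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-ins for the three variables (values invented, not the course data)
rng = np.random.default_rng(0)
n = 400
effort = rng.normal(0, 1, n)                           # standardized practice-question count
metacog = 0.4 * effort + rng.normal(0, 1, n)           # self-assessment accuracy
grade = 0.3 * effort + 0.5 * metacog + rng.normal(0, 1, n)

# Path a: effort -> metacognition
a = sm.OLS(metacog, sm.add_constant(effort)).fit().params[1]

# Paths c' (direct) and b: grade ~ effort + metacognition
fit = sm.OLS(grade, sm.add_constant(np.column_stack([effort, metacog]))).fit()
c_prime, b = fit.params[1], fit.params[2]

print(f"direct effect of effort: {c_prime:.2f}")
print(f"indirect effect via metacognition (a*b): {a * b:.2f}")
```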

Flow diagram showing direct and indirect effects on course grade

Effort has a direct effect on course grade while also having an indirect effect via metacognition.

Take home points

  • Both metacognitive skills (ability to accurately rate correctness of one’s responses) and effort (# of practice questions completed) have a direct effect on grade.
  • The effect of effort on final grade is also partially mediated by metacognitive skills.
  • The amount of effort put in by students who get As and Bs on the first exam is indistinguishable. The difference is in their metacognitive skills.
  • By the end of the course, C students are likely to be putting in just as much effort as the A and B students; they just have lower metacognitive awareness.
  • Students who ultimately end up with Ds and Fs struggle to complete the work they need to do. However, their metacognitive skills may be better than those of many C-level students.

Given these points, including instruction in metacognitive skills in these large classes is incredibly important, as it makes a difference in students’ final grades. Furthermore, having a few metacognitive activities you can give to students who stop into your office hours (or e-mail) about the HOURS they’re spending studying may prove more helpful to their final outcomes than we realize.

Acknowledgements

Funding for this project was provided by a Scholarship of Teaching & Learning (SOTL) grant from the Indiana University Bloomington Center for Innovative Teaching and Learning. Theo Smith was instrumental in collecting these data and creating figures. A special thanks to all of the students for participating in this project!

References

1. Ross, M.E., et al., College Students’ Study Strategies as a Function of Testing: An Investigation into Metacognitive Self-Regulation. Innovative Higher Education, 2006. 30(5): p. 361-375.

2. Costabile, A., et al., Metacognitive Components of Student’s Difficulties in the First Year of University. International Journal of Higher Education, 2013. 2(4): p. 165-171.

3. Roediger III, H.L. and J.D. Karpicke, Test-Enhanced Learning: Taking Memory Tests Improves Long-Term Retention. Psychological Science, 2006. 17(3): p. 249-255.

4. El Saadawi, G.M., et al., Factors Affecting Feeling-of-Knowing in a Medical Intelligent Tutoring System: the Role of Immediate Feedback as a Metacognitive Scaffold. Advances in Health Sciences Education, 2010. 15: p. 9-30.


Learning about learning: A student perspective

by Caroline Mueller, B.S., Clinical Anatomy PhD student, University of Mississippi Medical Center

Intro: In this guest editor miniseries, “The Evolution of Metacognition”, we discuss the progression of metacognitive awareness and the development of metacognition at multiple stages of education, from undergraduate, to graduate and professional students, and even faculty. In this first post, Caroline Mueller, a doctoral student in an anatomy education program, provides a student perspective. She shares reflections on learning about metacognition, how it has shaped her approaches to learning, and how it is influencing her as an emerging educator. ~ Audra Schaefer, PhD, guest editor

———————————————————————————————-

As a second-year graduate student hearing the word “metacognition” for the first time, I thought the idea of “thinking about thinking” seemed like another activity imposed by teachers to take up more time. After looking into what metacognition actually means and the processes it entails, my mindset changed. It is logical to think about the thought processes that occur during learning. Engaging in metacognitive thought seems like an obvious, efficient way for students to test their knowledge—yet very few do it, myself included. In undergrad, I prided myself on getting high grades, thinking that my method of reading, re-writing, memorizing, and then repeating was labor-intensive but effective. It did the job, and it resulted in high grades. However, if my goals included retaining the content, this method failed me. If someone today asked me about the Krebs Cycle, I could not recite it like I could for the test, and I definitely could not tell you about its function (something to do with glucose and energy?).

Upon entering graduate school, what I thought were my “fool-proof” study methods soon proved insufficient and fallible. The workload in medical gross anatomy and medical histology increased by at least 20 times (well, it felt like it anyway). It was laborious to keep up with taking notes in lecture, re-writing them, reading the text, and then testing myself with practice questions. I felt as though I was drowning in information, and I saw crippling arthritis in my near future. I then faced my first devastating grade. I felt cheated that my methods did not work, and I wondered why. Needing a change, I started trying different study methods. I began reviewing the information, still re-writing, but self-quizzing with a small group of classmates instead of by myself. We would discuss what we got wrong and explain answers when we knew them. This helped me improve my grades, but I wish I had had more guidance about metacognition at that point.

As I begin studying for my terrifying qualifying exams this semester, I face the daunting task of reviewing all the material I have learned in the last two years of graduate school. Easy task, right? Even though you may sense my dread, I have a different approach to studying because of what I’ve recently learned about metacognition. An important aspect of metacognition is self-assessment, using tools such as the pre-assessment and the most confusing point (muddiest point). The pre-assessment allows students to examine their current understanding of a topic and directs them to think about what they do and do not know. It helps guide students to focus their efforts on the elements they do not know or understand well (Tanner, 2012). The muddiest point tool can be used at the end of a long day of studying: students reflect on the information covered in a class or study session and assess what was the muddiest point (Tanner, 2012).

Both tools have shaped my approach to studying. Now I study by human body systems, starting each system by writing what I do know about the subject and then writing down what I want to know by the end of my review. This aids my assessment of what I do and do not know, so that I can orient myself to where I struggle the most. At first it seemed like a time-intensive activity, but I quickly realized it was more efficient than rewriting and rereading content I already knew. I implemented the muddiest point in my studies too, because after a strenuous day of trying to grasp dense material, I end up feeling like I still do not know anything. After reviewing the information and filling in the gaps, I quiz myself at the end of my week of review and ask what was most confusing. This helps me plan future study sessions.

Metacognition feels like it takes a lot of time when you first start doing it, because it makes the learner deal with the difficult parts of a subject. Students, myself included, want the act of acquiring new information to be rewarding, quick, and an affirmation of their competency with the material. For example, when preparing for an exam I would get a practice question correct but never think about why the correct answer was correct. Getting it right could have been pure luck; in my mind, I must have known the material. Thinking about the “why” prompts students to think deeply about the thought process behind picking that answer. This act alone helps solidify understanding of the topic. If learners can explain how they got to an answer, or why they believe an answer to be true, they can assess how well they understand the content.

cartoon of a brain working out using books as weights

My role as a student is beginning to change—I have become a teaching assistant, slowly on my way to full-on teacher status. After learning about metacognition and applying it as a student, I began trying it with the students I teach.

For example, an important part of metacognition is learning to recognize what you do and do not know. In anatomy lab, in order to prompt students to think more deeply about the material, I ask students what they know rather than just giving them the answer to their questions. I let them describe the structure and ask them to explain why they think that structure is what it is.

When I first did this, students resisted—the stress of the first year of medical school makes students want the answer immediately so they can move on. But I persisted in asking questions, explaining to students that finding out what you do and do not know allows you to focus your studying on filling in those gaps.

Since I am a new convert from student to teaching assistant, students often ask me about the best ways to study and how I studied. I again urge them to take an approach that helps identify gaps in their knowledge. I encourage them to go over the chapter headings and write down what they know about each one, essentially completing the pre-assessment I mentioned previously.

At this point, my approach to instilling the incredible power of metacognitive skills in students might be a little rough, but I am still working out the kinks. I am still learning—learning to be an effective teacher, learning the content as a student, and learning to learn about teaching and learning. As a student and a teacher, my hope is that I learn how to implement metacognitive methods effectively, and that I am able to assess these methods and keep trying to improve on them.

Tanner, K.D. (2012). Promoting student metacognition. CBE-Life Sciences Education, 11, 113-120. [https://www.improvewithmetacognition.com/promoting-student-metacognition/]


Metacognition, the Representativeness Heuristic, and the Elusive Transfer of Learning

by Dr. Lauren Scharff, U. S. Air Force Academy*

When we instructors think about student learning, we often default to immediate learning in our courses. However, when we take a moment to reflect on our big picture learning goals, we typically realize that we want much more than that. We want our students to engage in transfer of learning, and our hopes can be grand indeed…

  • We want our students to show long-term retention of our material so that they can use it in later courses, sometimes even beyond those in our disciplines.
  • We want our students to use what they’ve learned in our course as they go through life, helping them both in their profession and in their personal lives.

These grander learning goals often involve ways of thinking that we endeavor to develop, such as critical thinking and information literacy. And, for those of us who believe in the broad value of metacognition, we want our students to develop metacognitive skills. But, as some of us have argued elsewhere (Scharff, Draeger, Verpoorten, Devlin, Dvorakova, Lodge & Smith, 2017), metacognition might be key for the transfer of learning, not just a skill we want our students to learn and then use in our course.

Metacognition involves engaging in intentional awareness of a process and using that awareness to guide subsequent behavioral choices (self-regulation). In our 2017 paper, we argued that students don’t engage in transfer of learning because they aren’t aware of the similarities of context or process that would indicate that some sort of learning transfer would be useful or appropriate. What we didn’t explore in that paper is why that first step might be so difficult.

If we look to research in cognitive psychology, we can find a possible answer to that question – the representativeness heuristic. Heuristics are mental short-cuts based on assumptions built from prior experience. There are several different heuristics (e.g. representativeness heuristic, availability heuristic, anchoring heuristic). They allow us to more quickly and efficiently respond to the world around us. Most of the time they serve us well, but sometimes they don’t.

The representativeness heuristic occurs when we attend to obvious characteristics of some type of group (objects, people, contexts) and then use those characteristics to categorize new instances as part of that group. If obvious characteristics aren’t shared, then the new instances are categorized separately.

For example, if a child is out in the countryside for the first time, she might see a four-legged animal in a field. She might be familiar with dogs from her home, so when she sees the four-legged creature in the field, she might immediately categorize it as a dog based on that shared characteristic. Her parents will correct her and say, “No. Those are cows. They say moo moo. They live in fields.” The young girl next sees a horse in a field. She might proudly say, “Look, another cow!” Her patient parents will now have to add characteristics that help her differentiate between cows and horses, and so on. At some level, however, the young girl must also learn meta-characteristics that connect all these animals as mammals: warm-blooded, furred, live-born, etc. Some of these characteristics may be less obvious from a glance across a field.

Now – how might this natural, human way-of-thinking impact transfer of learning in academics?

  • To start, what are the characteristics of academic situations that support the use of the representativeness heuristic in ways that decrease the likelihood of transfer of learning?
  • In response, how might metacognition help us encourage transfer of learning?

There are many aspects of the academic environment that might answer the first question – anything that leads us to perceive differences rather than connections. For example, math is seen as a completely different domain than literature, chemistry, or political science. The content and the terminology used by each discipline are different. The classrooms are typically in different buildings and may look very different (chemistry labs versus lecture halls or small group active learning classrooms), none of which look or feel like the physical environments in “real life” beyond academics. Thus, it’s not surprising that students do not transfer learning across classes, much less beyond classes.

In response to the second question, I believe that metacognition can help increase the transfer of learning because both mental processes rely on awareness/attention as a first step. Representativeness categorization depends on which characteristics are attended to. Without conscious effort, the attended characteristics are likely to be those that are most superficially obvious, which in academics tend to highlight differences rather than connections.

But, with some guidance and encouragement, other less obvious characteristics can become more salient. If these additional characteristics cross course/disciplinary/academic boundaries, then opportunities for transfer will enter awareness. The use of this awareness to guide behavior, transfer of learning in this case, is the second step in metacognition.

Therefore, there are multiple opportunities for instructors to promote learning transfer, but we might have to become more metacognitive about the process in order to do so. First we must develop awareness of connections that will promote transfer, rather than remaining within the comfort zone of our disciplinary expertise. Then we must use that awareness and self-regulate our interactions with students to make those connections salient to them. We can further increase the likelihood of transfer behaviors by communicating their value.

We typically can’t do much about the different physical classroom environments that reinforce the distinctions between our courses and nonacademic environments. Thus, we need to look for and explicitly communicate other types of connections. We can share examples to bridge terminology differences and draw parallels across disciplinary processes.

For example, we can point out that creating hypotheses in the sciences is much like creating arguments in the humanities. These disciplinary terms sound like very different words, but both involve a similar process of thinking. Or we can point out that MLA and APA writing formats are different in the details, but both incorporate respect for citing others’ work and give guidance for content organization that makes sense for the different disciplines. These meta-characteristics unite the two formatting approaches (as well as others that students might later encounter) with a common set of higher-level goals. Without such framing, students are less likely to appreciate the need for formatting and may interpret the different styles as arbitrary busywork that doesn’t deserve much thought.

We can also explicitly share what we know about learning in general, which likewise crosses disciplinary boundaries. A human brain is involved regardless of whether it’s learning in the social sciences, the humanities, the STEM areas, or the non-academic professional world. In fact, Scharff et al. (2017) found significant positive correlations between thinking about learning transfer, thinking about learning processes, and the likelihood of using awareness of metacognition to guide practice.

Cognitive psychologists know that we can reduce errors that occur from relying on heuristics if we turn conscious attention to the processes involved and disengage from the automatic behaviors in which we tend to engage. Similarly, as part of a metacognitive endeavor, we can help our students become aware of connections rather than differences across learning domains, and encourage behaviors that promote transfer of learning.

Scharff, L., Draeger, J., Verpoorten, D., Devlin, M., Dvorakova, L., Lodge, J., & Smith, S. (2017). Exploring metacognition as support for learning transfer. Teaching and Learning Inquiry, 5(1). DOI: http://dx.doi.org/10.20343/5.1.6. A summary of this work can also be found at https://www.improvewithmetacognition.com/researching-metacognition/

* Disclaimer: The views expressed in this document are those of the author and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.


Metacognition at Goucher II: Training for Q-Tutors

by Dr. Justine Chasmar & Dr. Jennifer McCabe; Goucher College

In the first post of this series, we described various implementations of Goucher College’s metacognition-focused model called the “New 3Rs”: Relationships, Resilience, and Reflection. Here we focus on how elements of metacognition have driven the training of tutors in Goucher’s Quantitative Reasoning (QR) Center.

image from https://www.goucher.edu/explore/ (faculty and student giving a high five)

The QR Center was established in the fall of 2017 to support the development of numeracy in our students and, specifically, to bolster our new data analytics general education requirement (part of the Goucher Commons Curriculum, described in depth in our first article). The QR Center started at a time of transition, as Goucher shifted from a one-course quantitative reasoning requirement to a set of two required courses: foundational data analytics and data analytics within a discipline. The QR Center’s mission is to help students with quantitative skill and content development across all disciplines, with a focus on promoting quantitative literacy. To foster these skills, the QR Center offers programming such as appointment-based tutoring, drop-in tutoring, workshops, and academic consultations, with peers (called Q-tutors) as the primary medium of support.

Metacognition – especially reflection and self-regulated learning – is a guiding principle for the QR Center. This theme is woven through each piece of QR Center programming, from a newly developed tutor training course to the focus on academic skill-building in tutoring sessions.

To support the professional development and training of the Q-tutors, the director (co-author of this blog, Dr. Justine Chasmar) created a one-credit course required for all students new to the position. This course combines education, mathematics, quantitative reasoning, and data analytics, focusing on pedagogy at the intersection of these realms. Because it is set primarily within the context of quantitative content, the course is more focused, and inherently more meaningful, than traditional tutor training. The course is also unique in combining practical exercises with metacognitive reflection. Individual lessons range from basic pedagogy to reviews of essential quantitative content for the tutoring position. Learning is scaffolded by supporting professional practice with continuous reflection and applications toward improved self-regulated learning – both for the tutors and for the students they will assist.

The content of each tutor preparation class meeting is sandwiched by metacognitive prompting. Before class, the Q-tutors prepare, engage, and reflect; for example, they may read a relevant piece of literature and respond to several open-ended reflective prompts about the reading (see “Suggested References” below). The synchronous tutor preparation class lesson, attended by all new Q-tutors and the director who teaches the course, involves discussion and other activities relating to the assigned reading, especially emphasizing conversation about issues or concerns the tutors are facing in their new roles. The “metacognition sandwich” is completed by a reflective post to a discussion board, where the Q-tutors respond to and build on each other’s reflections, describing what they learned that day, asking and answering questions, and elaborating on how to apply the lesson to tutoring.

In addition to these explicit reflection activities, the tutor preparation course facilitates discussion of the use and importance of self-regulated learning (SRL) strategies and behaviors. Q-tutors are provided many opportunities to reflect on their own learning. For example, they complete and discuss multiple SRL-based inventories, such as the GAMES (Svinicki, 2006) and the Index of Learning Styles Questionnaire (credit to Richard Felder and Barbara Soloman). Class lessons revolve around evidence-based learning strategies, such as self-testing, help-seeking, and techniques to transform information.

One assignment requires tutors to create and present a “study hack,” an idea adapted from a thread on a popular and supportive listserv for academic support professionals (LRNASST). The assignment, inherently reflective, allows the tutors to consider strategies they successfully utilize, summarize that information, and translate the SRL strategy into a meaningful presentation and worksheet for the tutor group. The Q-tutors present their “study hacks” during class time, with examples from past semesters ranging from mindfulness exercises to taking notes with color coding. These worksheets are also saved as a resource for students so they can learn from SRL strategies endorsed by Q-tutors.

Q-tutors are encouraged to “pay forward” their metacognitive training by focusing on SRL and reflection during their tutoring sessions. They teach study strategies such as self-testing and learning-monitoring, and support student reflection through “checking for understanding” activities at the end of each tutoring session. Tutors know that teaching study skills is one of the major priorities during tutoring sessions; and they close the loop by meeting with other tutors regularly to discuss new and useful skills they can communicate to students they work with. Tutors also get a regular reminder about the importance of study skill development when they read the end-of-appointment survey responses from their tutees, particularly in response to the prompt for “study skill reviewed.”

As a summative assignment in the course, Q-tutors write a Tutoring Philosophy, similar to a teaching statement. By this time, the tutors have gained an awareness of the importance of SRL and metacognitive reflection, as seen in excerpts from sample philosophies from previous semesters:

I strive to strengthen numeracy within our tutees, rid them of their anxieties surrounding quantitative subjects, and build up their skills to become better learners.

Once the tutee gains enough trust and confidence in the material, it is essential for them to begin guiding the direction of the session toward their own learning goals.

By practicing good study habits, self-advocacy, organizational skills, and a calm demeanor when tutoring, tutees learn what it takes to be a better student.

By thinking intentionally about what it means to be an effective tutor, these students realize that they must model what they teach in a reflective, continuous mutual-learning process: “[In tutoring] my job is to identify what each person needs, use my skills to support their learning, and reflect on these interactions to improve my methods over time.”

In sum, using an intentional metacognitive lens, Q-tutor training at Goucher College supports quantitative skills and general learning strategies in the many students the QR Center reaches. Through this metacognitive cycle, the QR Center supports Goucher’s learning community in improving the Reflection component of the Goucher 3Rs.

Suggested References

Scheaffer, R. L. (2003). Statistics and quantitative literacy. Quantitative Literacy: Why Numeracy Matters for Schools and Colleges, 145-152. Retrieved from https://www.maa.org/sites/default/files/pdf/QL/pgs145_152.pdf

Siegle, D., & McCoach, D. B. (2007). Increasing student mathematics self-efficacy through teacher training. Journal of Advanced Academics, 18, 278–312. https://doi.org/10.4219/jaa-2007-353

Svinicki, M. D. (2006). Helping students do well in class: GAMES. APS Observer, 19(10). Retrieved from https://www.psychologicalscience.org/observer/helping-students-do-well-in-class-games

Williamson, G. (2015). Self-regulated learning: an overview of metacognition, motivation and behaviour. Journal of Initial Teacher Inquiry, 1, 25-27. Retrieved from http://hdl.handle.net/10092/11442


Paired Self-Assessment—Competence Measures of Academic Ranks Offer a Unique Assessment of Education

by Dr. Ed Nuhfer, California State Universities (retired)

What if you could do an assessment that simultaneously revealed the content mastery and intellectual development of students across your entire institution, and do so without taking class time or costing your institution money? This blog offers a way to do this.

We know that metacognitive skills are tied directly to successful learning, yet metacognition is rarely taught in content courses, even though it is fairly easy to do. Self-assessment is neither the whole of metacognition nor the whole of self-efficacy, but it is an essential component of both. Direct measures of students’ self-assessment skills are very good proxy measures of metacognitive skill and intellectual development. A school developing measurable self-assessment skill is likely to be developing self-efficacy and metacognition in its students.

This installment comes with lots of artwork, so enjoy the cartoons! We start with Figure 1A, which is only a drawing, not a portrayal of actual data. It depicts an “Ideal” pattern for a university educational experience in which students progress up the academic ranks and grow in content knowledge and skills (abscissa) and in metacognitive ability to self-assess (ordinate). In Figure 1B, we now employ actual paired measures. Postdicted self-assessment ratings are estimated scores that each participant provides immediately after seeing and taking a test in its entirety.

Figure 1

Figure 1. Academic ranks’ (freshman through professor) mean self-assessed ratings of competence (ordinate) versus actual mean scores of competence from the Science Literacy Concept Inventory or SLCI (abscissa). Figure 1A is merely a drawing that depicts the Ideal pattern. Figure 1B plots actual data collected nationally from many schools. The line slopes less steeply than in Fig. 1A, and the correlation is r = .99.

The result reveals that reality differs somewhat from the ideal in Figure 1A. The actual lower-division undergraduates’ scores (Fig. 1B) do not order on the line in the expected sequence of increasing ranks. Instead, their scores are mixed among those of junior rank. We see a clear jump up in Figure 1B from this cluster to senior rank, a small jump to graduate-student rank, and the expected major jump to the rank of professors. Note that Figure 1B displays means of groups, not ratings and scores of individual participants. We sorted over 5000 participants by academic rank to yield the six paired measures for the ranks in Figure 1B.

We underscore our appreciation for large databases and the power of aggregating confidence-competence paired data into groups. Employing groups attenuates noise in such data, as we described earlier (Nuhfer et al. 2016), and enables us to perceive clearly the relationship between self-assessed competence and demonstrable competence. Figure 2 employs a database of over 5000 participants but depicts them as 104 groups of 50, randomized across all institutions and drawn from within each academic rank. The figure confirms the general pattern shown in Figure 1: a general upward trend from novices (freshmen and sophomores) through developing experts (juniors, seniors, and graduate students) to experts (professors), but with considerable overlap between novices and developing experts.

Figure 2

Figure 2. Mean postdicted self-assessment ratings (ordinate) versus mean science literacy competency scores (abscissa) by academic rank. Figure 2 comes from selecting random groups of 50 from within each academic rank and plotting the paired measures of 104 groups.

The correlation of r = .99 seen in Figure 1B comes down a bit to r = .83 in Figure 2. We can understand why by examining Figure 3 and Table 1. Figure 3 comes from our 2019 database of paired measures, which is now about four times larger than the database used in our earlier papers (Nuhfer et al. 2016, 2017), and the earlier results we reported in this same kind of graph continue to be replicated here in Figure 3A. People generally appear good at self-assessment, and the figure refutes claims that most people are either “unskilled and unaware of it” or “…are typically overly optimistic when evaluating the quality of their performance….” (Ehrlinger, Johnson, Banner, Dunning, & Kruger, 2008).

Figure 3

Figure 3. Distributions of self-assessment accuracy for individuals (Fig. 3A) and of collective self-assessment accuracy of groups of 50 (Fig. 3B).

Note that the range of the abscissa has gone from 200 percentage points in Fig. 3A to only 20 percentage points in Fig. 3B. In groups of fifty, 81% of the groups estimate their mean scores within 3 ppts of their actual mean scores. While individuals are generally good at self-assessment, the collective self-assessment means of groups are even more accurate. Thus, the collective averages of classes on detailed course-based knowledge surveys seem to be valid assessments of the mean learning competence achieved by a class.

The larger the groups employed, the more accurately the mean group self-assessment rating is likely to approximate the mean competence test score of the group (Table 1). In Table 1, reading across the three columns from left to right reveals that, as group sizes increase, greater percentages of the groups converge on their actual mean competency scores.

Table 1

Table 1. Groups’ self-assessment accuracy by group size. Groups’ postdicted mean self-assessed confidence ratings (in ppts) closely approximate the groups’ actual demonstrated mean competency scores (SLCI). In group sizes of 200 participants, the mean self-assessment accuracy of every group is within ±3 ppts. To achieve such results, researchers must use aligned instruments that produce reliable data, as described in Nuhfer (2015) and Nuhfer et al. (2016).

From Table 1 and Figure 3, we can now understand how the very high correlations in Figure 1B are achievable by using sufficiently large numbers of participants in each group. Figures 3A and 3B and Table 1 employ the same database.
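
To build intuition for how aggregation produces these results, consider a minimal simulation sketch in Python. This is a toy model, not the SLCI database: the uniform score range and the 15-ppt individual error are assumed purely for illustration. Because averaging shrinks random error roughly in proportion to the square root of the group size, group means converge on group competence scores even when individual estimates are noisy.

    import numpy as np

    rng = np.random.default_rng(42)

    # Toy model (assumed values, not the SLCI database): each participant has
    # a true competence score in ppts and reports a postdicted self-assessment
    # equal to that score plus individual error (standard deviation 15 ppts).
    n = 5000
    competence = rng.uniform(40, 90, n)
    self_assessed = competence + rng.normal(0, 15, n)

    for group_size in (1, 50, 200):
        n_groups = n // group_size
        k = n_groups * group_size
        comp_means = competence[:k].reshape(n_groups, group_size).mean(axis=1)
        self_means = self_assessed[:k].reshape(n_groups, group_size).mean(axis=1)
        within_3 = 100 * np.mean(np.abs(self_means - comp_means) <= 3)
        print(f"group size {group_size:>3}: {within_3:5.1f}% of groups within ±3 ppts")

In this toy model, individuals land within ±3 ppts of their own scores only about 16% of the time, groups of 50 do so a bit over 80% of the time, and groups of 200 nearly always, which mirrors the direction of the pattern reported in Table 1.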

Finally, we verified that we could achieve high correlations like those in Figure 2 within single institutions, even when we examined only the four undergraduate ranks within each. We also confirmed that the rank orderings and best-fit line slopes formed patterns that differed measurably by institution. Two examples appear in Figure 4. The ordering of the undergraduate ranks and the slope of the best-fit line in graphs such as those in Fig. 4 are surprisingly informative.

Figure 4

Figure 4. Institutional profiles from paired measures of undergraduate ranks. Figure 4A is from a primarily undergraduate, public institution. Figure 4B comes from a public research-intensive university. The correlations remain very high, and the best-fit line slopes and the ordering pattern of undergraduate ranks are distinctly different between the two schools. 

In general, steeply sloping best-fit lines in graphs like Figures 1B, 2, and 4A indicate that significant metacognitive growth is occurring together with the development of content expertise. In contrast, nearly horizontal best-fit lines (these do exist in our research results but are not shown here) indicate that students in such institutions are gaining content knowledge through their college experience but are not gaining metacognitive skill. We can use such information to guide the assessment stage of “closing the loop,” because it helps us take informed action. In all cases where undergraduate ranks appear ordered out of sequence in such assessments (as in Fig. 1B and Fig. 4B), we should seek to understand why.

In Figure 4A, “School 7” appears to be doing quite well. The steeply sloping line shows clear growth between lower-division and upper-division undergraduates in both content competence and metacognitive ability. Even so, the school might want to explore how it could extend the gains of the sophomore and senior classes. “School 3” (Fig. 4B) would probably want to steepen its best-fit line by focusing first on increasing self-assessment skill development across the undergraduate curriculum.

We recently used paired measures of competence and confidence to understand the effects of privilege on varied ethnic, gender, and sexual-orientation groups within higher education. That work is scheduled for publication in Numeracy in July 2019. We are next developing a peer-reviewed journal article that uses paired self-assessment measures of groups to understand institutions’ educational impacts on students. This blog entry offers a preview of that ongoing work.

Notes. This blog follows on from earlier posts: Measuring Metacognitive Self-Assessment – Can it Help us Assess Higher-Order Thinking? and Collateral Metacognitive Damage, both by Dr. Ed Nuhfer.

The research reported in this blog distills a poster and oral presentation created by Dr. Edward Nuhfer, CSU Channel Islands & Humboldt State University (retired); Dr. Steven Fleisher, California State University Channel Islands; Rachel Watson, University of Wyoming; Kali Nicholas Moon, University of Wyoming; Dr. Karl Wirth, Macalester College; Dr. Christopher Cogan, Memorial University; Dr. Paul Walter, St. Edward’s University; Dr. Ami Wangeline, Laramie County Community College; Dr. Eric Gaze, Bowdoin College, and Dr. Rick Zechman, Humboldt State University. Nuhfer and Fleisher presented these on February 26, 2019 at the American Association of Behavioral and Social Sciences Annual Meeting in Las Vegas, Nevada. The poster and slides from the oral presentation are linked in this blog entry.


Setting Common Metacognition Expectations for Learning with Your Students

by Patrick Cunningham, Ph.D., Rose-Hulman Institute of Technology

We know that students’ prior subject knowledge impacts their learning in our courses. Many instructors even give prior knowledge assessments at the start of a term and use the results to tailor their instruction. But have you ever considered the impact of students’ prior knowledge and experiences with learning on their approaches to learning in your course? It is important for us to recognize that our students are individuals with different expectations and learning preferences. Encouraging our students’ metacognitive awareness and growth can empower them to target their own learning needs and establish common aims for learning.

image of target with four colored arrows pointed at the center

Among other things, our students often come to us having experienced academic success using memorization and pattern-matching approaches to material, i.e., rehearsal strategies. Because they have practiced these approaches over time and have gotten good grades in prior courses or academic levels, these strategies are firmly fixed in their learning repertoire and are their go-to strategies. Further, when they get stressed academically, they spend more time employing these strategies: they want more examples, they re-read and highlight notes, they “go over” solutions to old exams, they memorize equations for special cases, and more. And many of us did too, when we were in their shoes.

However, rehearsal strategies only result in shorter-term memory of concepts and surface-level understanding. In order to build more durable memory of concepts and deeper understanding, more effortful strategies are needed. Recognizing this and doing something about it is metacognitive activity – knowing about how we process information and making intentional choices to regulate our learning and learning approaches. One way to engage students in building such metacognitive self-awareness and set common expectations for learning in your course starts with a simple question,

“What does it mean to learn something?”

I often ask this at the start of a course. In an earlier post, Helping Students Feel Responsible for Their Learning, I introduced students’ common responses. Learning something, they say, means being able to apply it or explain it. With some further prompting we get to applying concepts to real situations and explaining material to a range of people, from family members to bosses to cross-functional design teams. These are great operational definitions of learning, and I affirm my students for coming up with them.

Then I go a step further, explaining how transferring to new applications and explaining to a wide range of audiences requires a richly interconnected knowledge framework. For our knowledge to be useful and available, it must be integrated with what we already know.

So, I tell my students, in this class we will be engaging in activities to connect and organize our knowledge. I also try to prepare my students for doing this, acknowledging it will likely be different than what they are used to. In my engineering courses students love to see and work more and more example problems – i.e., rehearsal. Examples are good to a point, particularly as you engage a new topic, but we should be moving beyond just working and referencing examples as we progress in our learning. Engaging in this discussion about learning helps make my intentions clear.

I let my students know that as we engage with the material differently it will feel effortful, even hard at times. For example, I ask my students to come up with and explore variations on an example after we have solved it. A good extension is to have pairs working different variations explain their work to each other. Other times I provide a solution with errors and ask students to find them and take turns explaining their thinking to a neighbor. In this effortful processing, they are building connections. My aim is to grow my students’ metacognitive knowledge by expanding their repertoire of learning strategies and lowering the ‘activation energy’ of using these strategies on their own. It is difficult to try something new when there is so much history behind our habitual approaches.

Another reason I like this opening discussion is that it welcomes opportunities for metacognitive dialogue and ongoing conversations about metacognition. I have been known to stop class for a “meta-moment” where we take time to become collectively more self-aware, recognizing growth or monitoring our level of understanding. The discussion about what it means to learn something also sets a new foundation and changes conversations about exam, quiz, and homework preparations and performance. You might ask, “How did you know you knew the material?” Instead of suggesting “working harder” or “studying more”, we can talk meaningfully about the context and choices and how effective or ineffective they were.

Such metacognitive self-examination can be challenging for students and even a little uncomfortable, especially if they exhibit more of a fixed mindset toward learning. It may challenge their sense of self, their identity. It is vital to recognize this. Some students may exhibit resistance to the conversation or to the active and constructive pedagogies you employ. Such resistance is challenging, and we must be careful with our responses. Depersonalizing the conversation by focusing on the context and choices can make it feel less threatening. For example, if a student only studied the night or two before an exam, instead of thinking they are lazy or don’t care about learning, we can acknowledge the challenge of managing competing priorities and ask them what they could choose to do differently next time. We need to be careful not to assume too much, e.g., that a student is lazy. Questions can help us understand our students better and promote student self-awareness. For more on this approach to addressing student resistance, see my post on Addressing Student Resistance to Engaging in their Metacognitive Development.

Students’ prior learning experiences impact how they approach learning in specific courses. Engaging students early in a metacognitive discussion can help develop a common set of expectations for learning in your course, clarifying your intentions. It also can open doors for metacognitive dialogue with our students; one-on-one, in groups, or as a class. It welcomes metacognition as a relevant topic into the course. However, as we engage in these discussions, we must be sensitive to our students, respectfully and gently nudging their metacognitive growth. Remember, this is hard work and it was (and often still is) hard for us too!

Acknowledgements. This blog post is based upon metacognition research supported by the National Science Foundation under Grant Nos. 1433757 & 1433645. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.


Distributed Metacognition: Are Two Heads Better Than One—Or Does It Even Exist?

by Aaron S. Richmond

Metropolitan State University of Denver

In many of the wonderful blog posts on Improve with Metacognition, scholars around the globe have described various teaching techniques and strategies to improve metacognition in our students. Many of these techniques require students to openly describe their learning behaviors in the hope that they will become more metacognitively aware: for example, asking students how, where, and when they study, to reflect on their use of strategies, and to consider how they can improve them. Many of these activities take place in a class setting, and sometimes students are even asked to share their strategies with one another and discuss how these strategies work, when they work, and when they don’t. When students share their beliefs about metacognition (e.g., learning strategies), we know they benefit by improving their own metacognition through this process. But is it possible that they are also improving the overall metacognition of the group or class? That is, is there more than an individual metacognitive process occurring? Is some form of distributed metacognition, shared across the students, taking place?

What is Distributed Metacognition?

Little is known about the concept of distributed metacognition (Chiu & Kuo, 2009; Wecker & Fischer, 2007). In fact, a Google Scholar search returns only 38 results containing the exact phrase “distributed metacognition”. In this limited research there is no clear operational definition of distributed metacognition. Therefore, I am interested in understanding the concept and discussing it with you all. From what I gather, it is not the spreading of metacognition over time (akin to distributed practice or spaced studying). Nor am I referring to what Philip Beaman (2016) described in the context of machine learning and human distraction in his IwM blog. Could it be, however, that distributed metacognition is the ability of two or more individuals first to talk about and discuss their personal metacognition (the good, the bad, and the ugly) and then to use these metacognitive strategies in a distributed manner (i.e., the group discusses and uses a strategy as a group)? Furthermore, Chiu and Kuo’s (2009) definition of social metacognition may be akin to distributed metacognition. Although they have no empirical evidence, they suggest that metacognitive tasks can be distributed across group members, who thereby engage in “social metacognition”. For instance, in task management, students can simultaneously evaluate and monitor, regulate others to reduce mistakes and distraction, and divide and conquer to focus on subsets of the problem. Finally, in a discussion with John Draeger (an IwM co-creator), he asked whether distributed metacognition was “…something over and above collaborative learning experiences that involve collective learning about metacognition and collectively practicing the skill?” After giving it some thought, my answer is, “I think so.” As such, let me try to give an example to illustrate whether distributed metacognition exists and how we may define it.

Using IF-AT’s Collaboratively

I have written on the utility of immediate feedback assessment techniques (IF-AT) as a metacognitive tool for assessment (Richmond, 2017). I often use the IF-AT in a team-based, collaborative learning way. I have students get into groups or dyads to discuss and debate one or two questions on the assessment. They then scratch off their answer and see how they did. Regardless of whether they were correct, I have students discuss, debate, and even argue why they were so adamant about their answer as individuals and as a group. I then have students discuss, debate, and answer two more questions with one another. They have to, as a group, come up with strategies for monitoring their performance, steps to solve the problem, and so on. They repeat this process until the quiz is finished. When my students go through this IF-AT process, I find (I know introspection is not the best science) that they become so intrigued by other students’ metacognitive processes that they often slightly modify their own metacognitive processes and strategies AND collectively come up with strategies to solve the problems.

So, what is going on here? Are students just listening to other students’ metacognitive and epistemological beliefs and choosing either to internalize or to ignore them? Or, when there is a group task at hand, do students share (i.e., distribute) the metacognitive strategies they learned through the group process and then use them collectively? For example, when students divide tasks and assign them to others (i.e., resource demand and monitoring), or regulate others’ errors and recognize correct answers (i.e., monitoring) within the group, would these behaviors count as distributed metacognition? Is it possible that in these more collaborative situations the students are not only engaging in their own internal metacognition but also in a collective, distributed cognition among the group, used in a collective manner? That is, in the IF-AT activity, students may be becoming more metacognitively aware, changing their metacognitive beliefs, and experimenting with different strategies on an individual level, AND they may also have a meta-strategy that exists among the group members (distributed metacognition) that they then use to answer the quiz questions and complete the task more effectively and successfully.

Currently (haha), I am leaning towards the latter. I think that the students might be engaging in both individual and distributed metacognition, in part because of an article in the Proceedings of the Annual Meeting of the Cognitive Science Society by Christopher Andersen (2003). Andersen found that when students worked in pairs to solve two science tasks, they made more valid inferences (correct judgments and conclusions about the task) over time than when they worked alone. Specifically, on the first trial of solving a problem, the dyads used relatively ineffective strategies; on the second trial they expanded and adapted their use of effective strategies; and by the third trial they employed even more effective strategies. Andersen (2003) concluded that the students were collectively consolidating their metacognitive strategies. Meaning, when working collaboratively, students employed more effective metacognitive strategies that led to solving the problem correctly. Although this is only one study, it provides a hint that distributed metacognition may exist.

Tentative Conclusions

So, where does this leave us? As almost always, I have more questions than answers. Thus, what do you think? As defined, do you think that distributed metacognition exists? If not, how would you describe what is going on when students share their metacognitive strategies and then employ them in a group setting? Is this situation just a product of collaborative or cooperative learning?

If you do believe distributed metacognition exists, how do we measure it? How do we create instructional methods that may increase it? My mind is reeling about this topic, and I would love to hear your thoughts and opinions.

References

Andersen, C. (2003, January). Distributed metacognition during peer collaboration. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 25, No. 25).

Beaman, P. (2016, May 14). Distributed metacognition: Insights from machine learning and human distraction. Retrieved from https://www.improvewithmetacognition.com/distributed-metacognition-insights-machine-learning-human-distraction/

Chiu, M. M., & Kuo, W. S. (2009). From metacognition to social metacognition: Similarities, differences, and learning. Journal of Educational Research, 3(4), 1-19. Retrieved from https://www.researchgate.net/profile/Ming_Chiu/publication/288305672_Social_metacognition_in_groups_Benefits_difficulties_learning_and_teaching/links/58436dba08ae2d217563816b/Social-metacognition-in-groups-Benefits-difficulties-learning-and-teaching.pdf

Richmond, A. S. (2017, February 22). Scratch and win or scratch and lose? Immediate feedback assessment technique. Retrieved from https://www.improvewithmetacognition.com/scratch-win-scratch-lose-immediate-feedback-assessment-technique/

Wecker, C., & Fischer, F. (2007, July). Fading scripts in computer-supported collaborative learning: The role of distributed monitoring. In Proceedings of the 8th international conference on Computer supported collaborative learning (pp. 764-772).


How Metacognition Helps Develop a New Skill

by Roman Taraban, Ph.D., Texas Tech University

Metacognition is often described in terms of its general utility for monitoring cognitive processes and regulating information processing and behavior. Within memory research, metacognition is concerned with assuring the encoding, retention, and retrieval of information. A sense of knowing-you-know is captured in tip-of-the-tongue phenomena. Estimating what you know through studying is captured by judgments of learning. In everyday reading, monitoring themes and connections between ideas might arouse metacognitive awareness that you do not understand the passage, prompting you to deliberately take steps to repair comprehension. Overall, research shows that metacognition can be an effective aid in these common situations involving memory, learning, and comprehension (Dunlosky & Metcalfe, 2008).

image from https://www.champagnecollaborations.com/keepingitreal/keeoing-it-real-getting-started

But what about new situations? If you are suddenly struck with a great idea, can metacognition help? If you want to learn a new skill, how does metacognition come into play? Often we want to develop fluency: we want to solve problems accurately and quickly. The classic model of skill development proposed by Fitts and Posner (1967) did not explicitly incorporate metacognition into the process. A recent model by Chein and Schneider (2012), however, does give metacognition a prominent role. In this blog, I will review the Fitts and Posner model, introduce the Chein and Schneider model, and suggest ways that the latter can inform learning and development.

In Fitts and Posner’s (1967) classic description of the development of skilled performance there are three overlapping phases:

  • Initially, facts and rules for a task are encoded in declarative memory, i.e., the part of memory that stores factual knowledge.
  • The person then begins practicing the task, which initiates proceduralization, i.e., the encoding of action sequences into procedural memory, the part of memory dedicated to action sequences. Errors are eliminated during this phase and performance becomes smooth. This phase is conscious and effortful and gradually shifts into the final phase.
  • As practice continues, the action sequence, now carried out by procedural memory, becomes automatic and no longer draws heavily on cognitive resources.

An example of this sequence is navigating from point A to point B, like from your home to your office. Initially, the process depends on finding streets, paying attention to where you are at any given time, correcting for wrong turns, and other details. After many trials, you leave home and get to the office without a great deal of effort or awareness. Details that are not critical to performance fall out of attention. For instance, you might forget the names of minor streets once they are no longer necessary for you to find your way. Another, more academic example of the Fitts and Posner sequence is learning how to solve math problems (Tenison & Anderson, 2016). In math problems, retrieval of relevant facts from declarative memory and calculation via procedural memory become accurate and automatic as processing speeds up.

Chein and Schneider (2012) present an extension of the Fitts and Posner model in their account of the changes that take place from the outset of learning a new task to the point where performance becomes automatic. What is distinctive about their model is how they describe metacognition. Metacognition, the first stage of skill development, “guides the establishment of new routines” (p. 78) through “task preparation” (p. 80) and “task sequencing and initiation” (p. 79). “[T]he metacognitive system aids the learner in establishing the strategies and behavioral routines that support the execution of the task” (p. 79). Chein and Schneider suggest that the role of metacognition could go deeper and become a characteristic pattern of a person’s thoughts and behaviors: “We speculate that individuals who possess a strong ability to perform in novel contexts may have an especially well-developed metacognitive system which allows them to rapidly acquire new behavioral routines and to consider the likely effectiveness of alternative learning strategies (e.g., rote rehearsal vs. generating explanations to oneself; Chi, 2000).”

In the Chein and Schneider model, metacognition is the initiator and the organizer. Metacognitive processing recruits and organizes the resources necessary to succeed at learning a task. These could be cognitive resources, physical resources, and people resources. If, for example, I want to learn to code in Java, I should consider what I need to succeed, which might include YouTube tutorials, a MOOC, a tutor, a time-management plan, and so on. Monitoring and regulating the cognitive processes that follow getting things set up are also part of the work of metacognition, as originally conceived by Flavell (1979). However, Chein and Schneider emphasize the importance of getting the bigger picture right at the outset. In other words, metacognition can work as a planning tool. We tend to think of metacognition as a “check-in” for when things go awry. Of course, it can be that, but it can also be helpful on the front end: setting learning goals, identifying the resources that will help us achieve them, and tracking progress toward them. This is especially true for the longer-term, challenging, and demanding goals we set for ourselves. Often, success depends on developing and following a multi-faceted, longer-term plan of learning and development.

In summary, the significant contribution to our understanding of metacognition that Chein and Schneider (2012) make is that metacognitive processing is responsible for setting up the initial goals and resources as a person confronts a new task. With effective configuration of learning at this stage and sufficient practice, performance will become fluent, fast, and relatively free of error.  The Chein and Schneider model suggests that learning and practice should be preceded by thoughtful reflection on the resources needed to succeed in the learning task and garnering and organizing those resources at the outset. Metacognition as initiator and organizer sets the person off on a path of successful learning.

References

Chein, J. M., & Schneider, W. (2012). The brain’s learning and control architecture. Current Directions in Psychological Science, 21, 78-84.

Chi, M. T. (2000). Self-explaining expository texts: The dual processes of generating inferences and repairing mental models. In R. Glaser (Ed.), Advances in instructional psychology (Vol. 5, pp. 161-238). Mahwah, NJ: Erlbaum.

Dunlosky, J., & Metcalfe, J. (2008). Metacognition. Los Angeles, CA: SAGE.

Fitts, P. M., & Posner, M. I. (1967). Human performance. Belmont, CA: Brooks/Cole.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. American Psychologist, 34, 906-911.

Tenison, C., & Anderson, J. R. (2016). Modeling the distinct phases of skill acquisition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(5), 749-767.


On the Benefits of Metacognition: Seeking Justice by Overcoming Shallow Understanding

By John Draeger, SUNY Buffalo State

In his “Letter from Birmingham Jail,” Martin Luther King Jr. responds to the white moderates of Birmingham who believed his protests were ill-timed and unnecessary. He writes:

I have almost reached the regrettable conclusion that the Negro’s great stumbling block in his stride toward freedom is not the White Citizen’s Council or the Ku Klux Klanner, but the white moderate, who is more devoted to “order” than to justice; who prefers a negative peace which is the absence of tension to a positive peace which is the presence of justice; who constantly says: “I agree with you in the goal you seek, but I cannot agree with your methods of direct action”; who paternalistically believes he can set the timetable for another man’s freedom; who lives by a mythical concept of time and who constantly advises the Negro to wait for a “more convenient season.” Shallow understanding from people of good will is more frustrating than absolute misunderstanding from people of ill will. Lukewarm acceptance is much more bewildering than outright rejection. (King, 295)

White moderates baffled King because he knew them to be people of good will. Why would they talk the equality talk without walking the walk? For example, they worried that King’s protests threatened to undermine the rule of law. Yet King argued that respect for the law, and for the human beings governed by those laws, demanded standing against injustice even when, perhaps especially when, it would be convenient for whites to do otherwise. Moreover, King’s respect for the system of law was underscored by the fact that the protests were nonviolent and the protestors were willing to accept the consequences of their lawbreaking. King’s letter challenged the white moderates of Birmingham to consider why they were so reluctant to side with those being treated unjustly. In short, King called on them (and us today) to be more metacognitive.

The Benefits of Metacognition

Metacognition is the ongoing awareness of a process and a willingness to adjust when necessary. King’s letter argued that the white moderates needed to become aware of a broader set of issues and adjust their actions accordingly. For example, white moderates were concerned about the safety of their families and the fact that protests might turn violent. This seems reasonable until we consider the living conditions and often violent treatment of their black neighbors. King suggests that white moderates were emotionally disconnected from the lived experience of those affected by segregation, and this disconnect helped explain their tepid endorsement of the civil rights movement. Willful ignorance can shield us from uncomfortable truths about ourselves and the world around us. It is often easier not to ask tough questions than to face unflattering answers. However, metacognition prompts us to consider the quality of our thought processes and then take action based on a new awareness of ourselves.

Raising awareness by purposefully engaging our reasons for action (or inaction) might prompt us to ask the following sorts of metacognitive questions.

  • How well do I understand those around me?
  • When am I less likely to question what I am doing?
  • What are the forces that keep me from being connected to the suffering of others?
  • When am I less likely to see the harms done to others? Are the harms invisible (e.g., internal struggles that I could only see with careful listening)? Or would harms be visible to me if I were paying attention?
  • Why am I not paying attention to others?
  • Do I tend to avoid bad news because ignorance is psychologically easier?
  • Am I afraid of asking myself difficult questions because I doubt I can do anything about it anyway?
  • Am I afraid to rock the boat?
  • Am I afraid to ask questions that will paint me in a bad light?

The list of relevant questions could go on for pages and will likely depend on the particular circumstances, but it is worth remembering that it was the inability of white moderates to ask such questions that led King to write his letter. If we want to avoid similar pitfalls, then each of us must find the wherewithal to take a hard look in the mirror and adjust when necessary.

Looking forward

I find King’s letter especially relevant at a time when many of us are coming to grips with how to address issues raised by the #BlackLivesMatter and #MeToo movements as well as the worldwide conversation surrounding immigration. I believe that there are rich research opportunities at the intersection of metacognition and ethical reasoning. For example, how might metacognition help overcome implicit bias or microaggression? How might it support the development of respect for humankind? I hope to consider these issues in future posts.

References

King, M. L. (1963).  “Letter from Birmingham Jail,” in A Testament of Hope: The Essential Writings and Speeches of Martin Luther King Jr., ed. James Washington (San Francisco: Harper Collins, 1986).


Utilizing Student-Coded Exams for Responsive Teaching and Learning

by Dana Melone, Cedar Rapids Kennedy High School

Welcome to the start of a semester for most teachers.  My name is Dana Melone and I teach AP Psychology and AP Research at Cedar Rapids Kennedy High School.  Most educators will give some sort of multiple-choice test during the semester, and as educators we want our students to use their exams as a learning tool, not just as a summative experience.  Unfortunately, many students just pop a graded exam into their folder and move on.  Today I would like to give you some strategies you can use as a teacher to get students to learn from their mistakes as well as their correct answers.  

pencil lying across a multiple-choice test question

These strategies also give teachers the opportunity to look at their own teaching and find commonalities in the mistakes their students are making. If your students are all making similar mistakes, you can reteach that topic in a new way. If mistakes are spread out, it may indicate that your students need to work on study skills. Your students can use these strategies to examine their own thinking and learning (become more metacognitive) and become advocates for themselves. In short, you and your students can use metacognitive processes to become better teachers and learners.

Let’s start with the exam itself. Students often get their exam back and struggle to remember what their thinking was when they took it. If you are giving a paper exam, students can use a coded system as they take the test to record their thinking for later. For example, if a student feels they knew the answer to a question and is confident in their choice, they can put a checkmark next to that question. If they were able to narrow it down but were not entirely sure they made the right choice, they can put a dash next to the question. If they had no idea, they can use an x. This allows students to remember their thinking as they look back at their exam. Students can find out whether they keep missing questions of a similar style or topic that they thought they already knew, and they can use these self-coded exams as a study tool as finals approach. Students can also take note of whether their thinking was correct: if the questions they felt confident about turn out to be wrong, they need to explore that further. Student-coded exams also allow teachers to look at patterns for their own use and modify their teaching appropriately, i.e., be metacognitive in their teaching. For example, teachers can change their focus if a large number of students indicated that they did not know similar concepts or struggled with application questions. Or, if students indicate that they narrowed a question down to the best two choices but chose poorly, teachers can share strategies to deal with that issue. Why do this? The hope is that students will become more aware of what is working and what isn’t, and that by becoming more aware, they will make adjustments. By regularly practicing these metacognitive skills, we hope that students will learn to adjust on their own.
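
For teachers who record these codes electronically, the pattern-finding step is easy to automate. The sketch below in Python is hypothetical (the topics, codes, and responses are invented for illustration); it simply cross-tabulates confidence codes against correctness and flags the confident-but-wrong cases described above.

    from collections import Counter

    # Hypothetical records for one student's exam: (topic, confidence code,
    # correct?). Codes follow the scheme above: "check" = felt confident,
    # "dash" = narrowed it down, "x" = no idea.
    responses = [
        ("memory", "check", True), ("memory", "check", False),
        ("neuro", "dash", False), ("neuro", "x", False),
        ("memory", "check", False), ("neuro", "dash", True),
    ]

    # Cross-tabulate confidence code against correctness.
    tally = Counter((code, correct) for _, code, correct in responses)
    for (code, correct), count in sorted(tally.items()):
        print(f"{code:<5} {'right' if correct else 'wrong'}: {count}")

    # The cases worth exploring further: confident but wrong, grouped by topic.
    confident_but_wrong = Counter(
        topic for topic, code, correct in responses if code == "check" and not correct
    )
    print("confident but wrong, by topic:", dict(confident_but_wrong))

The same tally works at the class level: summing across students shows whether misses cluster on a topic (a signal to reteach) or scatter across the exam (a signal that study skills need work).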

Once students get their exam back a next step for many teachers is to have students complete exam corrections.  I have seen many formats of exam corrections.  The methods that really get students thinking about the content and their own testing strategy produce metacognitive awareness.  Here are some methods that you could use individually or combine:

  1. Have students write why they think they got the question wrong.  Was it an error in reading the question?  Did they not know the content?  Did they narrow it down to two but choose incorrectly?
  2. Have students explain why the answer they chose is incorrect or why the correct answer is correct.
  3. Have students rewrite the question to make their wrong answer right.
  4. Have students write a memory aid to help them remember that concept in the future.
  5. Have students write out what they found tricky about that concept.
  6. Have students write out how that concept relates to them or another concept in the course.
  7. Have students categorize the concepts they missed by learning target or standard and draw a conclusion about that target or standard as a whole.  Many classrooms are moving to standards-based learning or a select few overarching concepts students must master to be proficient in the course.  If you can organize your exam to show students the patterns in their performance on these standards, it can help them make good study decisions and help you make good teaching decisions.

How can we as educators know if students have gotten the most out of this process?  Try including questions on the most commonly missed topics on future exams at no cost to the students; that is, do not penalize their scores.  Make these questions formative to see if students are making progress.  Do you have great ideas for test corrections that produce metacognition? Let us know.