How to Use Post-Test Analysis to Facilitate Metacognition in the College Classroom

by Gina Burkart, Ed.D., Learning Specialist, Clarke University

Pedagogy for Embedding Strategies into Classes

The transition to college is difficult. When students fail their first exam, they quickly discover that the strategies that served them in high school do not serve them well in college. As the Learning Specialist, I guide these students in modifying strategies and behaviors and in finding new ones. This also involves helping them move away from a fixed mindset, in which they believe some students are simply born smarter than others, and toward a growth mindset, in which they reflect on their habits and strategies and learn to set goals and make changes to achieve desired outcomes. Reflective, metacognitive discussion and exercises that develop a growth mindset are necessary for this type of triaging with students (Dweck, 2006; Masters, 2013; Efklides, 2008; VanZile-Tamsen & Livingston, 1999; Livingston, 2003).

As the Learning Specialist at the University, I work with students who are struggling, and I also work with professors in developing better teaching strategies to reach students. When learning is breaking down, I have found that oftentimes the most efficient and effective method of helping students find better strategies is to collaborate with the professor and facilitate strategy workshops in the classroom tailored to the course curriculum. This allows me to work with several students in a short amount of time—while also supporting the professor by demonstrating teaching strategies he or she might integrate into future classes.


An example of a workshop that works well when learning is breaking down in the classroom is the post-test analysis workshop. The post-test analysis workshop (see activity details below) often works well after the first exam. Since most students are stressed about their test results, the metacognitive workshop de-escalates anxiety by guiding students in strategic reflection on the exam. The reflection shows students how to analyze the results of the exam so that they can form new habits and behaviors in an attempt to learn and perform better on the next exam. The corrected exam is an effective tool for fostering metacognition because it shows students where errors have occurred in their cognitive processing (Efklides, 2008). The activity also increases self-awareness, which is imperative to metacognition, as it helps students connect past actions with future goals (Vogeley, Kurthen, Falkai, & Maier, 1999). This is an important step in helping students take control of their own learning and increasing motivation (VanZile-Tamsen & Livingston, 1999; Palmer & Goetz, 1988; Pintrich & DeGroot, 1990).

Post-Test Analysis Activity

When facilitating this activity, I begin by having the professor hand back the exams. I then take the students through a series of prompts that engage them in metacognitive analysis of their performance on the exam. Since metacognitive experiences also require an awareness of feeling (Efklides, 2008), it works well to have students begin by recalling how they felt after the exam:

  • How did you feel?
  • How did you think you did?
  • Were your feelings and predictions accurate?

The post-test analysis then prompts the students to connect their feelings with how they prepared for the exam:

  • What strategies did you use to study?
    • Bloom’s Taxonomy—predicting and writing test questions from book and notes
    • Group study
    • Individual study
    • Concept cards
    • Study guides
    • Created concept maps of the chapters
    • Synthesized notes
    • Other methods?

Students are given 1-3 minutes to reflect in journal writing upon those questions. They are then prompted to analyze where the test questions came from (book, notes, PowerPoint, lab, supplemental essay, online materials, etc.). It may be helpful to have students work collaboratively for this.

An Analysis of the Test—Where the Information Came From

  • For each question identify where the test question came from:
    • Book (B)
    • In-class notes (C)
    • Online materials (O)
    • Supplemental readings (S)
    • Not sure (?)

After identifying where the test information came from, students are then prompted to reflect in journal writing on the questions they missed and on how they might study differently based on where those questions came from. For example, a student may realize that he or she missed all of the questions that came from the book. That student may then set a goal to synthesize class notes with material from the book within 30 minutes after class and then use note reduction to create a concept map to study for the next test.

Another student might realize that he or she missed questions because of test-taking errors. For example, she did not carefully read the entire question and chose the wrong response. To resolve this issue, she decided she would underline key words in each question in an attempt to slow down while reading test questions. She also realized that she had changed several responses that she originally had correct. She will resist the urge to overthink her choices and change responses on the next test.

Next, students are taught about Bloom's Taxonomy and how professors use it to write exams. In small groups, students then use Bloom's Taxonomy to identify question types. This will take about 20-30 minutes, depending upon the length of the test. For example, students would identify the following test question as a comprehension-level question: Which of the following best describes positive reinforcement? In contrast, the following question would be noted as an application-level question: Amy's parents give her a lollipop every time she successfully uses the toilet. What type of reinforcement is this?

Question Type: Identify What Level of Bloom’s Taxonomy the Test Question is Assessing

  • Knowledge
  • Comprehension
  • Application
  • Analysis
  • Synthesis
  • Evaluation

Students sometimes struggle to distinguish the different levels of questions, so it is helpful to ask small groups to share their identified questions with the large group, as well as how they determined each to be that level of question. The professor is also a helpful resource in this discussion.

After discussion of the question types, students then return to individual reflection and are asked to count the number of questions they missed at each level of Bloom's Taxonomy. They are also asked to reflect upon what new strategies they will use to study based on this new awareness.

Adding It All Up

  • Count the number of questions missed in each level of Bloom’s Taxonomy.
  • Which types of questions did you miss most often?
  • Compare this with your study methods.
  • What adjustments might you make in your studying and learning of class material based on this information? Which levels of Bloom’s Taxonomy do you need to focus more on with your studying?
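For instructors who collect these tallies across an entire class, a short script can handle the bookkeeping. The sketch below is purely illustrative and assumes a hypothetical record of one student's missed questions tagged with the source codes and Bloom levels described above; a spreadsheet would work just as well.

```python
from collections import Counter

# Hypothetical record of one student's missed questions.
# Each entry is (source_code, bloom_level), using the codes above:
# B = book, C = in-class notes, O = online materials, S = supplemental readings.
missed_questions = [
    ("B", "Comprehension"),
    ("B", "Application"),
    ("C", "Knowledge"),
    ("B", "Analysis"),
    ("O", "Application"),
]

# Tally missed questions by source and by level of Bloom's Taxonomy.
by_source = Counter(source for source, _ in missed_questions)
by_level = Counter(level for _, level in missed_questions)

print("Missed questions by source:")
for source, count in by_source.most_common():
    print(f"  {source}: {count}")

print("Missed questions by Bloom level:")
for level, count in by_level.most_common():
    print(f"  {level}: {count}")
```

Either tally makes the pattern easy to see; in this made-up example, most misses come from the book, which points toward the kind of note-synthesis goal described above.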

Finally, students are asked to use the class reflections and post-test assessment to create a new learning plan for the course. (See the learning plan in my previous post, Facilitating Metacognition in the Classroom: Teaching to the Needs of Your Students). Creating the Learning Plan could be a graded assignment that students are asked to do outside of class and then turn in. Students could also be referred to the Academic Resource Center on campus for additional support in formulating the Learning Plan. Additionally, a similar post-test assessment could be assigned outside of class for subsequent exams and be assigned a point value. This would allow for ongoing metacognitive reflection and self-regulated learning.

This type of Cognitive Strategy Instruction (Scheid, 1993) embedded into the classroom offers students a chance to become more aware of their own cognitive processes, strategies for improving learning, and the practice of using cognitive and metacognitive processes in assessing their success (Livingston, 2003). Importantly, these types of reflective assignments move students away from a fixed mindset and toward a growth mindset (Dweck, 2006). As Masters (2013) pointed out, "Assessment information of this kind provides starting points for teaching and learning." Additionally, because post-test assessment offers students greater self-efficacy, control of their own learning, purpose, and an emphasis on the learning rather than the test score, it also positively affects motivation (VanZile-Tamsen & Livingston, 1999).

References

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Ballantine Books.

Efklides, A. (2008). Metacognition: Defining its facets and levels of functioning in relation to self-regulation and co-regulation. European Psychologist, 13 (4), 277-287. Retrieved from https://www.researchgate.net/publication/232452693_Metacognition_Defining_Its_Facets_ad_Levels_of_Functioning_in_Relation_to_Self-Regulation_and_Co-regulation

Livingston, J. A. (2003). Metacognition: An overview. Retrieved from https://files.eric.ed.gov/fulltext/ED474273.pdf

Masters, G. N. (2013). Towards a growth mindset assessment. Retrieved from https://research.acer.edu.au/cgi/viewcontent.cgi?article=1017&context=ar_misc

Palmer, D. J., & Goetz, E. T. (1988). Selection and use of study strategies: The role of studier’s beliefs about self and strategies. In C. E. Weinstein, E. T. Goetz, & P. A. Alexander (Eds.), Learning and study strategies: Issues in assessment, instruction, and evaluation (pp. 41-61). San Diego, CA: Academic.

Pintrich, P. R., & DeGroot, E. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33-40.

VanZile-Tamsen, C., & Livingston, J. A. (1999). The differential impact of motivation on the self-regulated strategy use of high- and low-achieving college students. Journal of College Student Development, 40(1), 54-60. Retrieved from https://www.researchgate.net/publication/232503812_The_differential_impact_of_motivation_on_the_self-regulated_strategy_use_of_high-_and_low-achieving_college_students

Vogeley, K., Kurthen, M., Falkai, P., & Maier, W. (1999). Essential functions of the human self model are implemented in the prefrontal cortex. Consciousness and Cognition, 8, 343-363.


Getting to Know Your Students: Using Self-Assessment in the Classroom to Foster Metacognition

by Gina Burkart, Ed.D., Learning Specialist, Clarke University

The Process of Metacognition

As retention continues to dominate discussions at most universities, metacognition may provide much insight. Flavell's (1979) early and hallmark work defined metacognition as an individual's reflection on how he or she learns and offered a model to depict the process of this reflection. According to Flavell (1979), metacognition "occurs through the actions and interactions" of "metacognitive knowledge," "metacognitive experiences," "goals/tasks," and "actions/strategies" (p. 906). Understanding how the metacognitive process impacts learning is key to developing effective curriculum, helping students learn material, and motivating students to learn.

Self-Assessment to Enhance One-on-one Mentoring

As the Learning Specialist at Clarke University, one of my responsibilities is meeting with, monitoring, and guiding students in finding effective learning strategies. In this role, I meet with students one-on-one; reach out to students who have had concern flags raised by professors; create and coordinate academic support (Academic Coaching and Supplemental Instruction); collaborate with and guide faculty in developing curriculum through workshops and consultations; hire, train, and supervise the Academic Coaches; mentor and meet with students placed on Academic Warning and Probation; and teach the College Study Strategy course and courses in the English department.

In working with students who have been placed on probation and warning, I find that students often fail because they lack motivation and purpose. And, commonly, the motivation and purpose have been affected by inaccurate metacognitive knowledge. Flavell’s (1979) model of Cognitive Monitoring offers a schema for understanding how this might occur and how to help students find motivation and purpose and improve their academic standing.

As noted earlier, Flavell (1979) found that our metacognitive knowledge is informed by our metacognitive experiences. Thus, negative experiences, or experiences in which distorted thought processes create inaccurate metacognitive knowledge about the self, might result in a lack of purpose or motivation. For example, if a first-year student fails two tests in Biology and compares himself or herself to some classmates who received As, he or she might conclude that he or she is incapable of learning Biology, is not capable of ever becoming a doctor, and should not attend college.

In meeting with the student, I would help the student reflect on how he or she was reading, studying, and taking notes in the Biology course. Additionally, I would help the student reflect on time management and organization strategies. I would also point out the flaw in comparing oneself with others in assessing one's own abilities. Once the student recognizes the flaws in this thinking and forms new metacognitive knowledge and experiences, he or she works with me to establish realistic goals and implement new strategies for achieving them. Motivation and purpose then quickly improve, and students find success. In some instances, students have moved from academic probation to the Dean's List in as little as one semester.

Helping students find success involves helping them discover what they believe about themselves (metacognitive self-knowledge), set goals, and find strategies to achieve those goals. To begin this process, students must first reflect on and assess themselves. As research has shown, unless the self-system is activated, learning will not occur (Mead, 1962/1934; Bandura, 1994; Marzano, 2001; Piaget, 1954; Vygotsky, 1934/1987; Burkart, 2010).

Incorporating Self-Assessment in the College Classroom

In addition to working with students in one-on-one mentoring, I have also found that this type of cognitive monitoring can be fostered in the classroom through the use of self-assessments. As demonstrated by Taraban (2019), self-assessments can be simple or more nuanced depending on the preferences of the professor and the needs of the course curriculum. The self-assessment can be created by the professor or be a nationally normed assessment. Additionally, the assessments can be closely connected to the outcomes of the course and revisited throughout the semester.

I have integrated self-assessments into my own teaching in a variety of ways. For example, in the College Study Strategy course that I teach, I begin the semester with an informal self-assessment by having students rate themselves (5 high and 1 low) in the following course content areas that impact academic performance: reading, time management, organization, test taking, and studying. Additionally, I have them identify strengths and weaknesses in each of these areas and set goals (See Figure 1 for a sample self-assessment and Figure 2 for a sample goal-setting chart).

Students then complete a more formal self-assessment, the nationally normed LASSI (Learning and Study Strategies Inventory). This self-assessment is quick and easy (it takes about 10 minutes) and allows students to see how they compare nationally with other students taking the inventory in the following areas: Selecting Main Ideas, Information Processing, Time Management, Self-Testing, Motivation, Concentration, Attitude, Use of Academic Resources, and Test Taking. Students then share their assessments with each other in pairs and in large group discussions. In almost all cases, the LASSI and informal assessments match, and students find the LASSI results to be accurate. The comparison of data from a nationally normed self-assessment with an informal self-assessment offers students a way of checking the accuracy of their knowledge of self.

These assessments also provide purpose and focus for the course. Class discussion based on the self-assessment establishes buy-in from the students as they see a personal need for the course. Additionally, I have found that starting the semester with these assessments frames the course: I (as the professor) have a better understanding of students' skill levels and needs and can connect their assessments to the course curriculum and outcomes.

For example, in Week One, when we are covering time management strategies, I can refer back to the students' self-assessments and goals. Asking the students to recall their scores and goals begins the process of cognitive monitoring (Flavell, 1979). It creates purpose and motivation for students to learn the curriculum I am teaching and to apply the new time management strategies in their courses in order to achieve their time management goals.

Students are then tasked with implementing the strategies in their courses and asked to display artifacts of the implemented strategies in a midterm and final portfolio that is shared in a personal conference with me. For example, a student may include a long-term planner of the semester with mapped out projects, papers, tests, and athletic games to show that they have started to use macro-level planning for time management. They might also include sample pages from a weekly planner to show prioritized “to-do” lists and items crossed off—micro-level planning.

Students also assess themselves again with the same informal self-assessments at midterm and at the end of the semester. Additionally, they retake the LASSI at the end of the semester and use the self-assessments and artifacts to compile a portfolio that includes a one-page reflection. In the final conference meeting with me, students use the portfolio to demonstrate their growth, as they discuss their goals, strategies used, plans for future goals, and growth.

Integrating Self-Assessment—As a Tool of Metacognition

In assessing themselves, students gain knowledge of what they believe about themselves and how they learn. In reflecting on their assessments and discussing their experiences with me in conferences and with other students in the class, students uncover inaccurate perceptions of self. Additionally, they form goals and learn and develop strategies that positively affect their college learning experience. This sharing of information also allows me, as the professor and Learning Specialist, to engage in metacognition as I teach and develop curriculum to meet the needs of my students throughout the semester (Burkart, 2017). And while some may question the validity of self-assessments, Nuhfer (2018) found self-assessments not only to be valid but also to be useful tools for both professors and students in monitoring learning.

Above, I offered examples of how self-assessment can be integrated into a college study strategy course; however, it can easily be integrated into any course. For example, in teaching literature or writing courses, I create self-assessments unique to that content area and the course outcomes. In literature courses, on the first day of the semester, I ask students to assess themselves in the following areas: critical reading, writing, speaking, time management, and small group work. I also have students read through the syllabus, create goals for each of those areas, and identify strategies they will use to achieve those goals. Additionally, I have them respond to the following questions:

  • What do you hope to get out of this course? How does it connect with your career and life goals?
  • How can I help you achieve your goals?
  • What challenges do you anticipate this semester? What resources are available to help you meet those challenges?
  • What else do you want me to know about you and what you have going on this semester?

Students share their assessments in small groups. Then, as a large group, we discuss the assessments and the syllabus. I collect the assessments, comment on them, and then return them. Students refer back to them again at midterm and at the end of the semester when they complete synthesis reflections about their growth and achievement of course outcomes.

Benefits of Incorporating Student Self-Assessment

The inclusion of these assessments has been helpful in many ways. They have helped students feel that their professor listens to them. The assessments also assist me in quickly and easily conducting a needs assessment of my students so that I can reflect upon and adjust my teaching to their needs (i.e., engage in metacognitive instruction).

Most importantly, self-assessment encourages students to reflect on their own learning, empowers them to take control of it, and results in increased motivation and a sense of purpose; this is the power of metacognition and why it matters to retention. When this recursive process activates the self-system (Mead, 1962/1934; Bandura, 1994; Marzano, 2001; Piaget, 1954; Vygotsky, 1934/1987; Burkart, 2010), it develops grit and a growth mindset (Burkart, 2010; Duckworth et al., 2019; Dweck, 2007). And as Flavell (1979) noted, fostering cognitive monitoring is an important part of learning, as there is "far too little rather than enough or too much cognitive monitoring in this world" (p. 910).

References

Bandura, A. (1994). Self-efficacy. In V. S. Ramachaudran (Ed.), Encyclopedia of human behavior (Vol. 4, pp. 71-81). New York: Academic Press. Retrieved from http://www.des.emory.edu/mfp/BanEncy.html

Burkart, G. (2017). 16 weeks to college success (3rd ed.). Dubuque, IA: Kendall Hunt.

Burkart, G. (2017, fall). Using the LASSI to engage metacognitive strategies that foster a growth mindset in college students placed on academic probation (per request). LASSI in Action. Retrieved from https://www.hhpublishing.com/ap/_assessments/LASSI-in-Action-Articles/LASSI-In-Action-Fall-2017.pdf

Burkart, G. (2010, Dec). First-year college student beliefs about writing embedded in online discourse: An analysis and its implications to literacy learning (Unpublished doctoral dissertation). University of Northern Iowa, Cedar Falls, IA.

Burkart, G. (2010, May). An analysis of online discourse and its application to literacy learning. The Journal of Literacy and Learning. Retrieved from http://www.literacyandtechnology.org/uploads/1/3/6/8/136889/jlt_v11_1.pdf#page=64

Duckworth, A. L., Quirk, A., Gallop, R., Hoyle, R. H., Kelly, D. R., & Matthews, M. D. (2019). Cognitive and noncognitive predictors of success. Proceedings of the National Academy of Sciences, 116(47), 23499-23504. doi:10.1073/pnas.1910510116

Dweck, C. (2007). Mindset: The new psychology of success. New York: Ballantine Books.

Flavell, J. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906-911.

Marzano, R. J. (2001). Designing a new taxonomy of educational objectives. Thousand Oaks, CA: Corwin Press.

Mead, G. H. (1934). Mind, self, and society: From the standpoint of a social behaviorist. Chicago, IL: University of Chicago Press.

Nuhfer, E. (2018). Measuring metacognitive self-assessment: Can it help us assess higher-order thinking? Improve with Metacognition. Retrieved from

Piaget J. (1959). The language and thought of the child. Hove, UK: Psychology Press.

Taraban, R. (2019). The metacognitive reading strategies questionnaire (MRSQ): Cross-cultural comparisons. Improve with Metacognition. Retrieved from  https://www.improvewithmetacognition.com/metacognitive-reading-strategies/

Vygotsky L. S. (1934/1987). Thinking and speech. The collected works of Lev Vygotsky (Vol. 1). New York, NY: Plenum Press.


Metacognitive Self-Assessment, Competence and Privilege

by Steven Fleisher, Ph.D., California State University Channel Islands

Recently I had students in several of my classes take the Science Literacy Concept Inventory (SLCI), including its self-assessment component (Nuhfer et al., 2017). Science literacy addresses one's understanding of science as a way of knowing about the physical world. The instrument includes self-assessment measures that run parallel with the actual competency measures. Self-assessment skills are some of the most important of the metacognitive competencies. Since metacognition involves "thinking about thinking," the question soon becomes, "but thinking about what?"

Dunlosky and Metcalfe (2009) framed the processes of metacognition across metacognitive knowledge, monitoring, and control. Metacognitive knowledge involves understanding how learning works and how to improve it. Monitoring involves self-assessment of one's understanding, and control then involves any needed self-regulation. Self-assessment sits at the heart of metacognitive processes because it sets up and facilitates an internal conversation in the learner, for example, "Am I understanding this material at the level of competency needed for my upcoming challenge?" This type of monitoring then positions the learner for any needed control or self-regulation, for instance, "Do I need to change my focus, or maybe my learning strategy?" Further, self-assessment is affective in nature and is central to how learning works. From a biological perspective, learning involves the building and stabilizing of cognitive as well as affective neural networks. In other words, we not only learn about "stuff," but if we engage our metacognition (specifically self-assessment in this instance), we also enhance our learning to include knowing about "self" in relation to knowing about the material.

This Improve with Metacognition posting provides information that was shared with my students to help them see the value of self-assessing and to help them understand its relationship with their developing competencies and with issues of privilege. Privilege here is defined by factors that influence (advantage or disadvantage) aggregate measures of competence and self-assessment accuracy (Watson et al., 2019). Those factors were: (a) whether students were first-generation college students, (b) whether they were non-native English-language students, and (c) whether they had an interest in science.

The figures and tables below result from an analysis of approximately 170 students from my classes. The narrative addresses the relevance of each of the images.

Figure 1 shows the correlation between students’ actual SLCI scores and their self-assessment scores using Knowledge Survey items for each of the SLCI items (KSSLCI). This figure was used to show students that their self-assessments were indeed related to their developing competencies. In Figure 2, students could see how their results on the individual SLCI and KSSLCI items were tracking even more closely than in Figure 1, indicating a fairly strong relationship between their self-assessment scores and actual scores.

scatterplot graph of knowledge survey compared to SLCI scores
Figure 1. Correlation with best-fit line between actual competence measures via a Science Literacy Concept Inventory or SLCI (abscissa) and self-assessed ratings of competence (ordinate) via a knowledge survey of the inventory (KSSLCI) wherein students rate their competence to answer each of the 25 items on the inventory prior to taking the actual test.
scatter plot of SLCI scores and knowledge survey scores by question
Figure 2. Correlation with best-fit line between the group of all my students’ mean competence measures on each item of the Science Literacy Concept Inventory (abscissa) and their self-assessed ratings of competence on each item of the knowledge survey of the inventory (KSSLCI).
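For readers who want to produce a similar analysis from their own class data, a minimal sketch of the statistics behind Figures 1 and 2 follows. The paired score arrays are hypothetical placeholders, not SLCI data, and the plotting itself is omitted.

```python
import numpy as np

# Hypothetical paired scores (percent) for a small class:
# self-assessed competence from the knowledge survey (KSSLCI)
# and measured competence on the inventory (SLCI).
ks_scores = np.array([55, 62, 70, 48, 81, 66, 74, 59])    # self-assessed
slci_scores = np.array([50, 65, 72, 52, 78, 60, 70, 63])  # measured

# Pearson correlation between self-assessed and actual competence.
r = np.corrcoef(ks_scores, slci_scores)[0, 1]

# Least-squares best-fit line: SLCI ~ slope * KSSLCI + intercept.
slope, intercept = np.polyfit(ks_scores, slci_scores, 1)

print(f"r = {r:.2f}")
print(f"best-fit line: SLCI = {slope:.2f} * KSSLCI + {intercept:.2f}")
```

The same paired structure works at the item level (Figure 2) by replacing individual students' scores with the class's mean score on each of the 25 items.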

Figure 3 shows the differences in science literacy scores and self-assessment scores among groups defined by the number of science courses taken. Students could readily see the relationship between the number of science courses taken and improvement in science literacy. More importantly in this context, students could see that these groups had a significant sense of whether or not they knew the information, as indicated by the close overlapping of each pair of green and red diamonds. Students learn that larger numbers of participants provide more confidence about where the true mean actually lies. I can also illustrate what differences in variation within and between groups mean. In answering questions about how we know that more data would clarify relationships, I bring up an equivalent figure from our national database that shows the locations of the means within 99.9% confidence and the tight relationship between groups' self-assessed competence and their demonstrated competence.

categorical plot by number of college science courses completed
Figure 3. Categorical plot of my students in five class sections grouped by their self-identified categories of how many college-level science courses that they have actually completed. Revealed here are the groups’ mean SLCI scores and their mean self-assessed ratings. Height of the green (SLCI scores) and red (KSSLCI self-assessments) diamonds reveals with 95% confidence that the actual mean lies within these vertical bounds.
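The vertical extent of the diamonds in Figure 3 reflects 95% confidence intervals around the group means. As a rough sketch of that calculation, assuming a hypothetical set of SLCI scores for one group rather than actual data:

```python
import numpy as np
from scipy import stats

# Hypothetical SLCI scores (percent) for one group, e.g., students
# who report having completed two college-level science courses.
group_scores = np.array([48, 55, 61, 52, 67, 58, 63, 50, 59, 62])

mean = group_scores.mean()
sem = stats.sem(group_scores)  # standard error of the mean
n = len(group_scores)

# 95% confidence interval for the true group mean (t distribution, n - 1 df).
low, high = stats.t.interval(0.95, n - 1, loc=mean, scale=sem)

print(f"mean = {mean:.1f}, 95% CI = ({low:.1f}, {high:.1f})")
```

A larger sample shrinks the standard error and tightens the interval, which is the point made to students about more data giving greater confidence in where the true mean lies.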

Regarding Figure 4, it is always fun to show students that there’s no significant difference between males and females in science literacy competency. This information comes from the SLCI national database and is based on over 24,000 participants.

categorical plot by binary gender
Figure 4. Categorical plot from our large national database by self-identified binary gender categories shows no significant difference by gender in competence of understanding science as a way of knowing.

It is then interesting to show students that, in their smaller sample (Figure 5), there is a difference between the science literacy scores of males and females. The perplexed looks on their faces are then addressed by the additional demographic data in Table 1 below.

categorical plot by binary gender for individual class
Figure 5. Categorical plot of just my students by binary gender reveals a marginal difference between females and males, rather than the gender-neutral result shown in Fig. 4.

In Table 1, students could see that the higher science literacy scores for males in their group were not due to gender but rather to the significantly higher number of females for whom English is a non-native language. In other words, the women in their group were certainly not less intelligent; they simply had substantial additional challenges on their plates.

Table 1: Percentages of male and female students who are first-generation college students, who are non-native English speakers, and who self-report an interest in majoring in science

Students then become interested in discovering that the women demonstrated greater self-assessment accuracy than did the men, who tended to overestimate (Figure 6). I like to add here, “that’s why guys don’t ask for directions.” I can get away with saying that since I’m a guy. But more seriously, I point out that rather than simply saying women need to improve in their science learning, we might also want to help men improve in their self-assessment accuracy.   

categorical plot by gender including self-assessment data
Figure 6. The categorical plot of SLCI scores (green diamonds) shown in Fig. 5 now adds the self-assessment data (red diamonds) of females and males. The trait of females to more accurately self-assess that appears in our class sample is also shown in our national data. Even small samples taken from our classrooms can yield surprising information.

In Figure 7, students could see there was a strong difference in science literacy scores between Caucasians and Hispanics in my classes. The information in Table 2 below was then essential for them to see. Explaining this ethnicity difference offers a wonderful discussion opportunity for students to understand not only the data but also what the data reveal about what is going on with others in their classrooms.

Figure 7. The categorical plot of SLCI scores by the two dominant ethnicities in my classroom. My campus is a Hispanic Serving Institution (HSI). The differences shown are statistically significant.

Table 2 showed that the higher science literacy scores in this sample were not simply due to ethnicity but reflected significant differences between the groups in the numbers of first-generation students and non-native English speakers. These students are not dumb; they simply do not have the benefit, in this context, of a history of educational discourse in their homes and are navigating the added challenge of learning in a non-native language.

Table 2: Percentages of White and Hispanic students who report being first-generation college students, non-native English speakers, and interested in majoring in science

When shown Figure 8, which includes self-assessment scores as well as SLCI scores, students were interested to see that both groups demonstrated fairly accurate self-assessment skills, but that Hispanics had even greater self-assessment accuracy than their Caucasian colleagues. Watson et al. (2019) noted that the strong self-assessment accuracy of minority groups may come from an understandable need to be cautious.

categorical plot by ethnicity and including self-assessment
Figure 8. The categorical plot of SLCI scores and self-assessed competence ratings for the two dominant ethnicities in my classroom. Groups’ collective feelings of competence, on average, are close to their actual competence. Explaining these results offered a wonderful discussion opportunity for students.

Figure 9 shows students that self-assessment is real. In seeing that most of their peers fall within an adequate range of self-assessment accuracy (within +/- 20 percentage points), students begin to see the value of putting effort into developing their own self-assessment skills. In general, results from this group of my students are similar to those we get from our larger national database (see our earlier blog post, Paired Self-Assessment—Competence Measures of Academic Ranks Offer a Unique Assessment of Education).
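The accuracy measure behind Figures 9 and 10 is simply the difference, in percentage points, between a student's self-assessed competence and the competence demonstrated on the inventory. A minimal sketch of that calculation, using made-up scores rather than actual SLCI data:

```python
# Hypothetical paired scores (percent) for a handful of students:
# (self-assessed competence via knowledge survey, actual SLCI score).
students = {
    "A": (72, 65),
    "B": (50, 58),
    "C": (90, 61),  # a large overestimate
    "D": (64, 66),
}

for name, (self_assessed, actual) in students.items():
    accuracy = self_assessed - actual  # positive = overestimate, negative = underestimate
    within_band = abs(accuracy) <= 20  # the "good to adequate" band noted above
    print(f"Student {name}: {accuracy:+d} ppts "
          f"({'within' if within_band else 'outside'} +/- 20 ppts)")
```

Collecting these differences across a class and plotting their distribution yields a figure like Figure 9; substituting postdicted global ratings for the knowledge-survey ratings yields Figure 10.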

distribution of self-assessment accuracy for individual course
Figure 9. The distribution of self-assessment accuracy of my students in percentage points (ppts) as measured by individuals' differences between their self-assessed competence by knowledge survey and their actual competence on the Concept Inventory.

Figure 10 below gave me the opportunity to show students the relationship between their predicted item-by-item self-assessment scores (Figure 9) and their postdicted global self-assessment scores. Most of the scores fall between +/- 20 percentage points, indicating good to adequate self-assessment. In other words, once students know what a challenge involves, they are pretty good at self-assessing their competency.

distribution of self-assessment accuracy for individual course after taking SCLI
Figure 10. The distribution of self-assessment accuracy of my students in percentage points (ppts) as measured by individuals’ differences between their postdicted ratings of competence after taking the SLCI and their actual scores of competence on the Inventory. In general, my students’ results are similar in self-assessment measured in both ways.

In order to help students further develop their self-assessment skills and awareness, I encourage them to write down how they feel they did on tests and papers before turning them in (postdicted global self-assessment). Then they can compare their predictions with their actual results in order to fine-tune their internal self-assessment radars. I find that an excellent class discussion question is “Can students self-assess their competence?” Afterward, reviewing the above graphics and results becomes especially relevant. We also review self-assessment as a core metacognitive skill that ties to an understanding of learning and how to improve it, the development of self-efficacy, and how to monitor their developing competencies and control their cognitive strategies.

References

Dunlosky, J. & Metcalfe, J. (2009). Metacognition. Thousand Oaks, CA: Sage Publications.

Nuhfer, E., Fleisher, S., Cogan, C., Wirth, K., & Gaze, E. (2017). How Random Noise and a Graphical Convention Subverted Behavioral Scientists’ Explanations of Self-Assessment Data: Numeracy Underlies Better Alternatives. Numeracy, Vol 10, Issue 1, Article 4. DOI: http://dx.doi.org/10.5038/1936-4660.10.1.4

Watson, R., Nuhfer, E., Nicholas Moon, K., Fleisher, S., Walter, P., Wirth, K., Cogan, C., Wangeline, A., & Gaze, E. (2019). Paired Measures of Competence and Confidence Illuminate Impacts of Privilege on College Students. Numeracy, Vol 12, Issue 2, Article 2. DOI: https://doi.org/10.5038/1936-4660.12.2.2


How do you know you know what you know?

by Patrick Cunningham, Ph.D., Rose-Hulman Institute of Technology

Metacognition involves monitoring and controlling one’s learning and learning processes, which are vital for skillful learning. In line with this, Tobias and Everson (2009) detail the central role of accurate monitoring in learning effectively and efficiently. Metacognitive monitoring is foundational for metacognitive control through planning for learning, selecting appropriate strategies, and evaluating learning accurately (Tobias & Everson, 2009).

Hierarchy of Metacognitive Control, with Monitoring Knowledge at the bottom, followed by Selecting Strategies, then Evaluating Learning, with Planning at the top

Figure 1 – Hierarchy of metacognitive regulatory processes. Adapted from Tobias and Everson (2009).

Unfortunately, students can be poor judges of their own learning, or fail to judge their learning at all, and therefore often fail to recognize their need for further engagement with material or take inappropriate actions based on inaccurate judgments of learning (Ehrlinger & Shain, 2014; Winne & Nesbit, 2009). If students inaccurately assess their level of understanding, they may erroneously spend time with material that is already well known, or they may employ ineffective strategies, such as a rehearsal strategy (e.g., flash cards) to build rote memory when they really need an elaborative strategy (e.g., explaining the application of concepts to a new situation) to build richer integration with their current knowledge. This poor judgment extends to students' perceptions of the effectiveness of their learning processes, as noted in the May 14th post by Sabrina Badali, Investigating Students' Beliefs about Effective Study Strategies. There, Badali found that students were more confident in massed practice than in interleaved practice even though they performed worse with massed practice.

Fortunately, we can help our students develop more accurate self-monitoring skills. The title question is one of my go-to responses to student claims of knowing in the face of poor performance on an assignment or exam. I introduced it in my April 4th blog post, Where Should I Start with Metacognition? It gently but directly asks for evidence of knowing. In our work on an NSF grant to develop transferable tools for engaging students in their metacognitive development, my colleagues and I found that students struggle to cite concrete and demonstrable (i.e., objective) evidence for their learning (Cunningham, Matusovich, Hunter, Blackowski, & Bhaduri, 2017). It is important to gently persist. If a student says they "reviewed their notes" or "worked many practice problems," you can follow up with, "What do you mean by review your notes?" or "Under what conditions were you working the practice problems?" The goal is to learn more about the student's approach without making assumptions, and to help the student discover any mismatches.

We can also spark monitoring with pedagogies that help students accurately uncover present levels of understanding (Ehrlinger & Shain, 2014). Linda Nilson (2013) provides several good suggestions in her book Creating Self-Regulated Learners. Retrieval practice takes little time and is quite versatile. Over a few minutes a student recalls all that they can about a topic or concept, followed by a short period of review of notes or a section of a book. The whole process can be done individually, or as individual recall followed by pair or group review. Well-known ideas appear in the recalled list with elaborating detail. Less well-known material is present, but in sparse form. Omissions indicate significant gaps in knowledge. The practice is effortful, and students may need encouragement to persist with it.

I have used retrieval practice at the beginning of classes before continuing on with a topic from the previous day. It can also be employed as an end-of-class summary activity. I think the value added is worth the effort. Because of its benefits and compactness, I also encourage students to use retrieval practice as a priming activity before regular homework or study sessions. Using it in class can also lower students’ barriers to using it on their own, because it makes it more familiar and it communicates the value I place on it.

Nilson (2013) also offers "Quick-thinks" and Think Aloud problem-solving. "Quick-thinks" are short lesson breaks and can include "correct the error" in a short piece of work, "compare and contrast," "reorder the steps," or other activities. A student can monitor their understanding by comparing their response to the instructor's answer or class responses. Think Aloud problem-solving is a pair activity where one student talks through their problem-solving process while the other student listens and provides support when needed, for example, by prompting the next step or asking a guiding question. Students take turns with the roles. A student's fluency in solving the problem or providing support indicates deeper learning of the material. If the problem-solving or the support is halting and sparse, then those concepts are less well known by the student. As my students often study in groups outside of class, I recommend that they have the person struggling with a problem or concept talk through their thinking out loud while the rest of the group provides encouragement and support.

Related to Think Alouds, Chiu and Chi (2014) recommend Explaining to Learn. A fluid explanation with rich descriptions is consistent with deeper understanding. A halting explanation without much detail uncovers a lack of understanding. I have used this in various ways. In one form, I have one half of the class work one problem and the other half work a different problem or a variant of the first. Then I have them form pairs from different groups and explain their solutions to one another. Both students are familiar with the problems, but they have a more detailed experience with one. I also often use this as I help students in class or in my office. I ask them to talk me through their thinking up to the point where they are stuck, and I take the role of the supporter.

The strategies above enhance student learning in their own right, but they also provide opportunities for metacognitive monitoring: checking understanding against a standard or seeking objective evidence to gauge one's level of understanding. To support these metacognitive outcomes, I make sure to explicitly draw students' attention to the monitoring purpose whenever I use these pedagogies. I am also transparent about this purpose and encourage students to seek better evidence on their own, so they can truly know what they know.

As you consider adding activities to your course that support accurate self-assessment and monitoring, please see the references for further details. You may also want to check out Dr. Lauren Scharff's post "Know Cubed" – How do students know if they know what they need to know? In this post, Dr. Scharff examines common causes of inaccurate self-assessment and how we might be contributing to them. She also offers strategies we can adopt to support more accurate student self-assessment. Let's help our students generate credible evidence for knowing the material, so they can make better choices for their learning!

References

Chiu, J. L. & Chi, M. T. H. (2014). Supporting Self-Explanation in the Classroom. In V. A. Benassi, C. E. Overson, & C. M. Hakala (Eds.). Applying science of learning in education: Infusing psychological science into the curriculum. Retrieved from the Society for the Teaching of Psychology web site: http://teachpsych.org/ebooks/asle2014/index.php

Cunningham, P., Matusovich, H. M., Hunter, D. N., Blackowski, S. A., & Bhaduri, S. (2017). Beginning to Understand Student Indicators of Metacognition. Paper presented at 2017 ASEE Annual Conference & Exposition, Columbus, Ohio. https://peer.asee.org/27820

Ehrlinger, J. & Shain, E. A.  (2014). How Accuracy in Students’ Self Perceptions Relates to Success in Learning. In V. A. Benassi, C. E. Overson, & C. M. Hakala (Eds.). Applying science of learning in education: Infusing psychological science into the curriculum. Retrieved from the Society for the Teaching of Psychology web site: http://teachpsych.org/ebooks/asle2014/index.php

Nilson, L. B. (2013). Creating Self-Regulated Learners: Strategies to Strengthen Students’ Self-Awareness and Learning Skills. Stylus Publishing: Sterling, VA.

Tobias, S. & Everson, H. (2009). The Importance of Knowing What You Know: A Knowledge Monitoring Framework for Studying Metacognition in Education. In Hacker, D., Dunlosky, J., & Graesser, A. (Eds.) Handbook of Metacognition in Education. New York, NY: Routledge, pp. 107-127.

Winne, P. & Nesbit, J. (2009). Supporting Self-Regulated Learning with Cognitive Tools. In Hacker, D., Dunlosky, J., & Graesser, A. (Eds.) Handbook of Metacognition in Education. New York, NY: Routledge, pp. 259-277.



Investigating Students’ Beliefs about Effective Study Strategies

By Sabrina Badali, B.S., Weber State University
Cognitive Psychology PhD student starting Fall ‘19, Kent State University

As an undergraduate, I became familiar with the conversations that took place after a major test. My classmates frequently boasted about their all-nighters spent reviewing textbooks and notes. Once grades were released, however, another conversation took place. The same students were confused and felt their scores did not reflect the time they spent preparing. My classmates were using relatively ineffective study strategies; most likely because they did not understand or appreciate the benefits of more effective alternatives.

Some of the most commonly reported study strategies include rereading a textbook and reviewing notes (Karpicke, Butler, & Roediger, 2009). However, those strategies are associated with lower memory performance than other strategies, such as testing oneself while studying, spreading out study sessions, and interleaving or “mixing” material while learning (Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013). Getting students to change their study habits can prove difficult. An effective way to start, perhaps, is getting students to change their beliefs about these strategies.

Before a learner will independently choose to implement a more effective study strategy (e.g., spreading out study sessions), they need to appreciate the benefits of the strategy and realize it will lead to improved performance. It seems this is often where the problem lies. Many students lack a metacognitive awareness of the benefits of these effective strategies. It is common for students to believe that strategies such as rereading a textbook or cramming are more beneficial than strategies such as testing oneself while learning or spacing out study sessions, a belief that does not match actual memory performance.

Researching Interleaving as a Study Strategy

This underappreciation of the benefits of these effective study strategies was something I recently investigated. In my research project, undergraduate participants completed two category learning tasks – learning to recognize different species of butterflies and learning artists’ painting styles. For each learning task, half of the butterfly species and half of the artists were assigned to the massed study condition. In the massed condition, all images of a category would be presented consecutively before moving on to the next species or artist. For example, all four images of one butterfly species would be presented back-to-back before moving on to images of the next species. The remaining half of the categories were assigned to the interleaved study condition. In the interleaved condition, images from a category were spread throughout the learning task and two images from the same category were never presented consecutively. For example, the first image of the “Tipper” butterfly may be shown early on, but the remaining three images would be distributed throughout the learning task such that participants viewed several other species before viewing the second image of the “Tipper”.  
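To make the two schedules concrete, here is a small sketch of how a massed and an interleaved presentation order could be generated. The category names (other than "Tipper"), the four-images-per-category structure, and the scheduling code are illustrative assumptions based on the description above, not the actual experimental materials or software.

```python
import random

# Hypothetical category labels; only "Tipper" comes from the description above.
categories = ["Tipper", "Monarch", "Swallowtail", "Skipper"]
images_per_category = 4

# Massed order: all images of a category are presented back-to-back.
massed_order = [(cat, i) for cat in categories
                for i in range(1, images_per_category + 1)]

def interleaved(cats, n_images):
    """Shuffle until no two consecutive images share a category."""
    items = [(cat, i) for cat in cats for i in range(1, n_images + 1)]
    while True:
        random.shuffle(items)
        if all(a[0] != b[0] for a, b in zip(items, items[1:])):
            return items

interleaved_order = interleaved(categories, images_per_category)

print("Massed:     ", massed_order[:6])
print("Interleaved:", interleaved_order[:6])
```

In the study itself, half of the categories were assigned to each condition; this sketch simply contrasts what the two presentation sequences look like.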

Images illustrating both massed presentation (left side - all butterflies are in the same category) and interleaved presentation (right side - the butterflies come from four different categories).

After completing these tasks, and completing a final memory assessment, participants were given a brief explanation about the difference between the massed method of presentation and the interleaved method. After this explanation, participants provided a metacognitive judgment about their performance on the study. They were asked whether they thought they performed better on massed items, interleaved items, or performed the same on both.

Misalignment of Evidence and Beliefs

I found that 63% of the participants thought they performed better on massed items, even though actual memory performance showed that 84% of participants performed better on interleaved items. There was a clear disconnect between what the student participants thought was beneficial (massing) versus what was actually beneficial (interleaving). Participants did not realize the benefits of interleaving material while learning. Instead, they believed that the commonly utilized, yet relatively ineffective, strategy of massing was the superior choice. If students’ judgments showed they thought interleaving was less effective than massing, how could we expect these students to incorporate interleaving into their own studying? Metacognition guides students’ study choices, and, at least in this example, students’ judgments were steering them in the wrong direction. This poses a problem for researchers and instructors who are trying to improve students’ study habits.

Using these effective study strategies, such as interleaving, makes learning feel more effortful. Unfortunately, students commonly believe it is a bad thing if the learning process feels difficult. When learning feels difficult, our judgments about how well we will perform tend to be lower than when something feels easy. However, memory performance shows a different pattern. When learning is easy, the material is often quickly forgotten. Alternatively, when learning is more difficult, it tends to lead to improved longer-term retention and higher memory performance (Bjork, 1994). While this difficulty is good for learning outcomes, it can be bad for the accuracy of metacognitive judgments. Before we can get students to change their study habits, it seems we need to change their thoughts about these strategies. If we can get students to associate effortful learning with metacognitive judgments of superior memory performance, we may be able to help students choose these strategies over others.

When teaching these study strategies, explaining how to use the strategy is a vital component, but this instruction could also include an explanation of why the strategies are beneficial to help convince students they are a better choice. Part of this explanation could address the notion that these strategies will feel more difficult, but this difficulty is part of the reason why they are beneficial. If students can accept this message, their metacognitive judgments may start to reflect actual performance and students may become more likely to implement these strategies during their own studying.

References

Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe and A. Shimamura (Eds.). Metacognition: Knowing about Knowing (pp. 185-205). Cambridge, MA: MIT Press.

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4-58.

Karpicke, J. D., Butler, A. C., & Roediger, H. L. (2009). Metacognitive strategies in student learning: Do students practise retrieval when they study on their own? Memory, 17(4), 471-479.


Psychological Myths are Hurting Metacognition

by Dana Melone, Cedar Rapids Kennedy High School

Every year I start my psychology class by asking students to respond to some true or false statements about psychology. These statements focus on widespread beliefs about psychology and the capacity to learn that are not true or have been misinterpreted. Here are just a few:

  • Myth 1: People learn better when we teach to their true or preferred learning style
  • Myth 2: People are more right brained or left brained
  • Myth 3: Personality tests can determine your personality type

Many of these myths are still widely believed and used in the classroom, in staff professional development, in the workplace to make employment decisions, and more. Psychological myths in the classroom hurt metacognition and learning. All of these myths lead us to internalize a particular belief about ourselves that we assume must be true, and this seeps into our cognition as we examine our strengths and weaknesses.

Myth 1: People learn better when we teach to their true or preferred learning styles

The learning styles myth persists. A Google search on learning styles required me to proceed to page three of the results before finding information on the fallacy of the theory. The first two pages contained links to tests for finding your learning style and advice on how to use your learning style as a student and at work. Howard Gardner (1983) developed the theory of multiple intelligences, which holds that we have multiple types of intelligences (kinesthetic, auditory, visual, etc.) that work in tandem to help us learn. In the last 30 years his theory has become synonymous with learning styles, which imply that we each have one predominant way of learning. There is no research to support this interpretation, and Gardner himself has discussed the misuse of his theory. If we perpetuate the learning styles myth as educators, employees, or employers, we set ourselves and the people we influence up to believe they can only learn in the fashion that best suits them. This is a danger to metacognition. For example, if I am examining why I did poorly on my last math test and I believe I am a visual learner, I may attribute my poor grade to my instructor's use of verbal presentation instead of accurately reflecting on the errors I made in studying or calculation.

image of human brain with list of major functions of the left and right hemispheres

Myth 2: People are more right brained or left brained

Research on the brain indicates possible differences between right- and left-brain functions.  Most research to this point describes the left brain as our center for spoken and written language, while the right brain handles visual imagery and imaginative functions, among others.  The research does not indicate, however, that a particular side works alone on these tasks.  This knowledge of the brain has led to the myth that if we perceive ourselves as better at a particular topic, such as art, we must be more right-brained.  In one of numerous studies dispelling this myth, researchers used magnetic resonance imaging (MRI) to examine the brain while participants completed various “typical” right- and left-brained tasks.  This research clearly showed what psychologists and neurologists have known for some time: the basic functions may lie in those areas, but the two sides of the brain work together to complete these tasks (Nielsen, Zielinski, et al., 2013). How is this myth hurting metacognition?  Like Myth 1, if we believe we are predetermined to function more strongly on particular tasks, we may avoid tasks that don’t lie within that strength.  We may also use incorrect metacognition in thinking that we function poorly on something because of our “dominant side.” 

Myth 3: Personality tests can determine your personality type

In the last five years I have been in a variety of work-related scenarios where I have been given a personality test to take.  These have ranged from providing me with a color that represents me to a series of letters that represents me.  In applying for jobs, I have also been asked to complete a personality inventory that I can only assume weeds out people the employer feels don’t fit the job at hand.  The discussion and reflection process following these tests is always the same: how might your results indicate a strength or weakness for you in your job and in your life, and how might this affect how you work with people who do and do not match the symbolism you were given?   Research shows that we tend to agree with the traits we are given if those traits contain a general collection of mostly positive but also a few somewhat less positive characteristics. However, we need to examine why we are agreeing.  We tend not to think deeply when confirming our own beliefs, and we may be accidentally eliminating situational aspects from our self-metacognition.  This is also true when we evaluate others. We shouldn’t let superficial assumptions based on our awareness of our own or someone else’s personality test results overly control our actions. For example, it would be short-sighted to make employment or promotion decisions based on the assumption that, because someone is shy, they would not do well in a job that requires public appearances. 

Dispelling the Myths

The good news is that metacognition itself is a great way to get students and others to let go of these myths. I like to address these myths head on.  A quick true/false exercise can get students thinking about their current beliefs on these myths. Then I get them talking and linking those beliefs to better decision-making processes.  For example, I ask: what is the difference between a theory or correlation and an experiment?  Understanding what makes good research, versus what might just be someone’s idea based on observation, is a great way to get students thinking about these myths as well as about all the research and ideas they encounter.  Another great way to induce metacognition on these topics is to have students take quizzes that claim to determine their learning style, brain side, and personality.  Discuss the results openly and engage students in critical thinking about the tests and their results.  How and why do they look to confirm the results?  More importantly, what are examples of the results not being true for them?  There are also a number of amazing TED Talks, articles, and podcasts on these topics that get students thinking in terms of research instead of personal examples. Let’s take it beyond students and get the research out there to educators and companies as well.   Here are just a few resources you might use:

Hidden Brain Podcast: Can a Personality Test Tell Us About Who We Are?: https://www.npr.org/2017/12/04/568365431/what-can-a-personality-test-tell-us-about-who-we-are

10 Myths About Psychology Debunked: Ben Ambridge: https://tedsummaries.com/2015/02/12/ben-ambridge-10-myths-about-psychology-debunked/

The Left Brain VS. Right Brain Myth: Elizabeth Waters: https://ed.ted.com/lessons/the-left-brain-vs-right-brain-myth-elizabeth-waters

Learning Styles and the Importance of Critical Self-Reflection: Tesia Marshik: https://www.youtube.com/watch?v=855Now8h5Rs

The Myth of Catering to Learning Styles: Joanne K. Olsen: https://www.nsta.org/publications/news/story.aspx?id=52624


Enhancing Medical Students’ Metacognition

by Leslie A. Hoffman, PhD, Assistant Professor of Anatomy & Cell Biology, Indiana University School of Medicine – Fort Wayne

The third post in this guest editor miniseries examines how metacognition evolves (or doesn’t) as students progress into professional school.  Despite the academic success necessary to enter professional programs such as medical school or dental school, there are still students who lack the metacognitive awareness and skills to confront the increased academic challenges imposed by these programs.  In this post Dr. Leslie Hoffman reflects on her interactions with medical students and incorporates data she has collected on self-directed learning and student reflections on their study strategies and exam performance. ~Audra Schaefer, PhD, guest editor

————————————————————————————————-

The beginning of medical school can be a challenging time for medical students.  As expected, most medical students are exceptionally bright individuals, which means that many did not have to study very hard to perform well in their undergraduate courses.  As a result, some medical students arrive in medical school without well-established study strategies and habits, leaving them overwhelmed as they adjust to the pace and rigor of medical school coursework.  Even more concerning is that many medical students don’t realize that they don’t know how to study, or that their study strategies are ineffective, until after they’ve performed poorly on an exam.  In my own experience teaching gross anatomy to medical students, I’ve found that many low-performing students tend to overestimate their performance on their first anatomy exam (Hoffman, 2016).  In this post I’ll explore some of the reasons why many low-performing students overestimate their performance and how improving students’ metacognitive skills can help improve their self-assessment skills along with their performance.

Metacognition is the practice of “thinking about thinking” that allows individuals to monitor and make accurate judgments about their knowledge, skills, or performance.  A lack of metacognitive awareness can lead to overconfidence in one’s knowledge or abilities and an inability to identify areas of weakness.  In medicine, metacognitive skills are critical for practicing physicians to monitor their own performance and identify areas of weakness or incompetence, which can lead to medical errors that may cause harm to patients.  Unfortunately, studies have found that many physicians seem to have limited capacity for assessing their own performance (Davis et al., 2006).  This lack of metacognitive awareness among physicians highlights the need for medical schools to teach and assess metacognitive skills so that medical students learn how to monitor and assess their own performance. 

Cartoon of a brain thinking about a brain

In my gross anatomy course, I use a guided reflection exercise that is designed to introduce metacognitive processes by asking students to think about their study strategies in preparation for the first exam and how they are determining whether those strategies are effective.   The reflective exercise includes two parts: a pre-exam reflection and a post-exam reflection.  

The pre-exam reflection asks students to identify the content areas in which they feel most prepared (i.e., their strengths) and the areas in which they feel least prepared (i.e., their weaknesses).  Students also discuss how they determined what they needed to know for the upcoming exam and how they went about addressing their learning needs.  Finally, students assess their confidence level and make a prediction about their expected performance on the upcoming exam.  After receiving their exam scores, students complete a post-exam reflection, which asks them to discuss what changes, if any, they intend to make to their study strategies based on their exam performance. 

My analysis of the students’ pre-exam reflection comments found that the lowest performing students (i.e. those who failed the exam) often felt fairly confident about their knowledge and predicted they would perform well, only to realize during the exam that they were grossly underprepared.  This illusion of preparedness may have been a result of using ineffective study strategies that give students a false sense of learning.  Such strategies often included passive activities such as re-watching lecture recordings, re-reading notes, or looking at flash cards.  In contrast, none of the highest performing students in the class over-estimated their exam grade; in fact, many of them vastly underestimated their performance. A qualitative analysis of students’ post-exam reflection responses indicated that many of the lowest performing students intended to make drastic changes to their study strategies prior to the next exam.  Such changes included utilizing different resources, focusing on different content, or incorporating more active learning strategies such as drawing, labeling, or quizzing.  This suggests that the lowest performing students hadn’t realized that their study strategies were ineffective until after they’d performed poorly on the exam.  This lack of insight demonstrates a deficiency in metacognitive awareness that is pervasive amongst the lowest performing students and may persist in these individuals beyond medical school and into their clinical practice (Davis et al., 2006).

So how can we, as educators, improve medical students’ (or any students’) metacognitive awareness to enable them to better recognize their shortcomings before they perform poorly on an exam?  To answer this question, I turned to the highest performing students in my class to see what they did differently.  My analysis of reflection responses from high-performing students found that they tended to monitor their progress by frequently assessing their knowledge as they were studying.  They did so by engaging in self-assessment activities such as quizzing, either using question banks or simply trying to recall information they’d just studied without looking at their notes.  They also tended to study more frequently with their peers, which enabled them to take turns quizzing each other.  Working with peers also provided students with feedback about what they perceived to be the most relevant information, so they didn’t get caught up in extraneous details. 

The reflective activity itself is a technique to help students develop and enhance their metacognitive skills.  Reflecting on a poor exam performance, for example, can draw a student’s attention to areas of weakness that he or she was not able to recognize, or ways in which his or her preparation may have been inadequate.   Other techniques for improving metacognitive skills include the use of think-aloud strategies in which learners verbalize their thought process to better identify areas of weakness or misunderstanding, and the use of graphic organizers in which learners create a visual representation of the information to enhance their understanding of relationships and processes (Colbert et al., 2015). 

Ultimately, the goal of improving medical students’ metacognitive skills is to ensure that these students will go on to become competent physicians who are able to identify their areas of weakness, create a plan to address their deficiencies, and monitor and evaluate their progress to meet their learning goals.   Such skills are necessary for physicians to maintain competence in an ever-changing healthcare environment.

Colbert, C.Y., Graham, L., West, C., White, B.A., Arroliga, A.C., Myers, J.D., Ogden, P.E., Archer, J., Mohammad, S.T.A., & Clark, J. (2015).  Teaching metacognitive skills: Helping your physician trainees in the quest to ‘know what they don’t know.’  The American Journal of Medicine, 128(3), 318-324.

Davis, D.A., Mazmanian, P.E., Fordis, M., Harrison, R., Thorpe, K.E., & Perrier, L. (2006). Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. JAMA, 296, 1094-1102.

Hoffman, L.A. (2016). Prediction, performance, and adjustments: Medical students’ reflections on the first gross anatomy exam. The FASEB Journal, 30(1 Supplement), 365.2.


Metacognition v. pure effort: Which truly makes the difference in an undergraduate anatomy class?

by Polly R. Husmann, Ph.D., Assistant Professor of Anatomy & Cell Biology, Indiana University School of Medicine – Bloomington

Intro: The second post of “The Evolution of Metacognition” miniseries is written by Dr. Polly Husmann, and she reflects on her experiences teaching undergraduate anatomy students early in their college years, a time when students have varying metacognitive abilities and awareness.  Dr. Husmann also shares data collected that demonstrate a relationship between students’ metacognitive skills, effort levels, and final course grades. ~ Audra Schaefer, PhD, guest editor

————————————————————————————————–

I would imagine that nearly every instructor is familiar with the following situation: After the first exam in a course, a student walks into your office looking distraught and states, “I don’t know what happened.  I studied for HOURS.”  We know that metacognition is important for academic success [1, 2], but undergraduates often struggle to identify study strategies that work or to determine whether they actually “know” something.  In addition to metacognition, recent research has shown that repeated recall of information [3] and immediate feedback [4] also improve learning efficiency.  Yet in large, content-heavy undergraduate classes both of these goals are difficult to accomplish.  Are there ways that we might encourage students to develop these skills without taking up more class time? 

Online Modules in an Undergraduate Anatomy Course

I decided to take a look at this through our online modules.  Our undergraduate human anatomy course (A215) is a large (400+) course mostly taken by students planning to go into the healthcare fields (nursing, physical therapy, optometry, etc.).  The course comprises both a lecture (3x/week) and a lab component (2x/week), with about forty students in each lab section.  We use the McKinley & O’Loughlin text, which comes with access to McGraw-Hill’s Connect website.  This website includes an e-book, access to online quizzes, A&P Revealed (a virtual dissection platform with images of cadavers), and instant grading.  Also available through the McGraw-Hill Connect site are LearnSmart study modules. 

These modules were incorporated into the course, along with the related electronic textbook, as optional extra credit assignments about five years ago as a way to keep students engaging with the material and (hopefully) make them less likely to just cram right before the tests. Each online module asks questions over a chapter or section of a chapter using a variety of multiple-choice, matching, rank-order, fill-in-the-blank, and multiple-answer questions. For each question, students are not only asked for their answer but also asked to rate their confidence in that answer on a four-point Likert scale. After the student has indicated his or her confidence level, the module provides immediate feedback on the accuracy of the response. 
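For readers who like to see the mechanics, the short sketch below illustrates the interaction pattern just described—answer a question, rate confidence on a four-point scale, then receive immediate feedback. It is only an illustration of the idea; the sample question, scale wording, and code are invented and are not part of the LearnSmart platform.

```python
# Hypothetical sketch of the interaction pattern described above: answer a
# question, rate confidence on a 4-point scale, then receive immediate
# feedback. This illustrates the idea only; it is not LearnSmart code.

QUESTION = "Which bone articulates with the femur at the knee? (a) tibia  (b) fibula"
CORRECT_CHOICE = "a"

def ask_with_confidence():
    answer = input(QUESTION + "\nYour answer (a/b): ").strip().lower()
    confidence = int(input("Confidence (1 = just guessing ... 4 = certain): "))
    correct = answer == CORRECT_CHOICE
    # Immediate feedback pairs the outcome with the student's own confidence,
    # which is what turns a practice question into a metacognitive check.
    print("Correct!" if correct else f"Incorrect; the answer is ({CORRECT_CHOICE}).")
    print(f"You rated your confidence {confidence}/4 and were "
          f"{'right' if correct else 'wrong'}.")
    return correct, confidence

if __name__ == "__main__":
    ask_with_confidence()
```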

During each block of material (4 blocks total per semester) in the Fall 2017 offering of our anatomy course, 4 to 9 LearnSmart modules were available; after each block was completed, the instructor chose 2 of them to count for up to two points of extra credit each (a total of 16 points out of 800).  Given the frequency of the opening scenario, I decided to take a look at these data and see what correlations existed between the LearnSmart data and student outcomes in our course.

Results

The graphs (shown below) illustrated that the students who got As and Bs on the first exam had done almost exactly the same number of LearnSmart practice questions, which was nearly fifty more questions than the students who got Cs, Ds, or Fs.  However, by the end of the course the students who ultimately got Cs were doing almost the exact same number of practice questions as those who got Bs!  So they’re putting the same effort into the practice questions, but where is the problem? 

The big difference is seen in the percentage of these questions for which each group was metacognitively aware (i.e., confident when giving a correct answer or not confident when giving an incorrect answer).  While the students who received Cs were answering plenty of practice questions, their metacognitive awareness (accuracy) was often the worst in the class!  So these are your hard-working students who put in plenty of time studying but don’t really know when they accurately understand the material or how to study efficiently. 

Graphs showing questions completed as well as accuracy of self-assessment.
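To make that awareness metric concrete, here is a minimal Python sketch of how one could compute it from paired correctness and confidence records. The data format and function name are hypothetical; this is not the analysis code used in the study, and it collapses the four-point confidence scale into a simple confident/not-confident judgment.

```python
# A minimal sketch (hypothetical data format, not the course's actual
# analysis code) of the awareness metric described above. Each response
# pairs correctness with confidence, collapsed here from the 4-point
# Likert scale to a simple confident / not-confident judgment.

def metacognitive_awareness(responses):
    """responses: list of (was_correct, was_confident) boolean pairs.
    Returns the fraction of questions on which confidence matched
    correctness: confident and correct, or unconfident and incorrect."""
    if not responses:
        return 0.0
    aligned = sum(1 for correct, confident in responses if correct == confident)
    return aligned / len(responses)

# Example: confidence matches correctness on 3 of 4 questions -> 0.75
print(metacognitive_awareness([(True, True), (False, False),
                               (True, False), (False, False)]))
```

A calculation along these lines, applied per student and then averaged within each grade group, is the kind of accuracy percentage being discussed here.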

The statistics further confirmed that both the students’ effort on these modules and their ability to accurately rate whether or not they knew the answer to a LearnSmart practice question were significantly related to their final outcome in the course. (See right-hand column graphs.) In addition to these two direct effects, there was also an indirect effect of effort on final course grades through metacognition.  So students who put in the effort through these practice questions with immediate feedback do generally improve their metacognitive awareness as well.  In fact, over 30% of the variation in final course grades could be predicted by looking at these two variables from the online modules alone.

Flow diagram showing direct and indirect effects on course grade

Effort has a direct effect on course grade while also having an indirect effect via metacognition.
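For readers curious about what such a direct-plus-indirect (mediation) analysis can look like, the sketch below runs ordinary least squares regressions on simulated data in the spirit of the relationships described above. The variable names, coefficients, and dataset are invented; this is not the study’s data or statistical code.

```python
# Illustrative mediation-style analysis on simulated data (not the study's
# dataset or code): effort has a direct effect on grade and an indirect
# effect through metacognitive accuracy. All names and numbers are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
effort = rng.normal(60, 15, n)                    # practice questions completed
metacog = 0.4 * effort + rng.normal(0, 10, n)     # awareness partly driven by effort
grade = 0.3 * effort + 0.5 * metacog + rng.normal(0, 12, n)
df = pd.DataFrame({"effort": effort, "metacog": metacog, "grade": grade})

total = smf.ols("grade ~ effort", data=df).fit()          # total effect of effort
a_path = smf.ols("metacog ~ effort", data=df).fit()       # effort -> metacognition
full = smf.ols("grade ~ effort + metacog", data=df).fit() # direct effect + mediator

indirect = a_path.params["effort"] * full.params["metacog"]
print("total effect of effort:    ", round(total.params["effort"], 3))
print("direct effect of effort:   ", round(full.params["effort"], 3))
print("indirect via metacognition:", round(indirect, 3))
print("variance explained (R^2):  ", round(full.rsquared, 3))
```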

Take home points

  • Both metacognitive skills (ability to accurately rate correctness of one’s responses) and effort (# of practice questions completed) have a direct effect on grade.
  • The direct effect between effort and final grade is also partially mediated by metacognitive skills.
  • The amount of effort between students who get A’s and B’s on the first exam is indistinguishable.  The difference is in their metacognitive skills.
  • By the end of the course, C students are likely to be putting in just as much effort as the A & B students; they just have lower metacognitive awareness.
  • Students who ultimately end up with Ds & Fs struggle to get the work done that they need to.  However, their metacognitive skills may be better than many C level students.

Given these points, including instruction in metacognitive skills in these large classes is incredibly important, as it does make a difference in students’ final grades.  Furthermore, having a few metacognitive activities that you can give to students who stop into your office hours (or e-mail) about the HOURS they’re spending studying may prove more helpful to their final outcome than we realize.

Acknowledgements

Funding for this project was provided by a Scholarship of Teaching & Learning (SOTL) grant from the Indiana University Bloomington Center for Innovative Teaching and Learning. Theo Smith was instrumental in collecting these data and creating figures.  A special thanks to all of the students for participating in this project!

References

1. Ross, M.E., et al., College Students’ Study Strategies as a Function of Testing: An Investigation into Metacognitive Self-Regulation. Innovative Higher Education, 2006. 30(5): p. 361-375.

2. Costabile, A., et al., Metacognitive Components of Student’s Difficulties in the First Year of University. International Journal of Higher Education, 2013. 2(4): p. 165-171.

3. Roediger III, H.L. and J.D. Karpicke, Test-Enhanced Learning: Taking Memory Tests Improves Long-Term Retention. Psychological Science, 2006. 17(3): p. 249 – 255.

4. El Saadawi, G.M., et al., Factors Affecting Feeling-of-Knowing in a Medical Intelligent Tutoring System: The Role of Immediate Feedback as a Metacognitive Scaffold. Advances in Health Sciences Education, 2010. 15: p. 9-30.


Paired Self-Assessment—Competence Measures of Academic Ranks Offer a Unique Assessment of Education

by Dr. Ed Nuhfer, California State Universities (retired)

What if you could do an assessment that simultaneously revealed the content mastery and intellectual development of students across your entire institution, and you could do so without taking class time or costing your institution money? This blog offers a way to do this.

We know that metacognitive skills are tied directly to successful learning, yet metacognition is rarely taught in content courses, even though it is fairly easy to do. Self-assessment is neither the whole of metacognition nor of self-efficacy, but it is an essential component of both. Direct measures of students’ self-assessment skills are very good proxy measures for metacognitive skill and intellectual development. A school developing measurable self-assessment skill is likely to be developing self-efficacy and metacognition in its students.   

This installment comes with lots of artwork, so enjoy the cartoons! We start with Figure 1A, which is only a drawing, not a portrayal of actual data. It depicts an “Ideal” pattern for a university educational experience in which students progress up the academic ranks and grow in content knowledge and skills (abscissa) and in metacognitive ability to self-assess (ordinate). In Figure 1B, we now employ actual paired measures. Postdicted self-assessment ratings are estimated scores that each participant provides immediately after seeing and taking a test in its entirety.

Figure 1.

Figure 1. Academic ranks’ (freshman through professor) mean self-assessed ratings of competence (ordinate) versus actual mean scores of competence from the Science Literacy Concept Inventory or SLCI (abscissa). Figure 1A is merely a drawing that depicts the Ideal pattern. Figure 1B registers actual data from many schools collected nationally. The line slopes less steeply than in Fig. 1A and the correlation is r = .99.

The result reveals that reality differs somewhat from the ideal in Figure 1A. The actual lower division undergraduates’ scores (Fig. 1B) do not order on the line in the expected sequence of increasing ranks. Instead, their scores are mixed among those of junior rank. We see a clear jump up in Figure 1B from this cluster to senior ranks, a small jump to graduate student rank and the expected major jump to the rank of professors. Note that Figure 1B displays means of groups, not ratings and scores of individual participants. We sorted over 5000 participants by academic rank to yield the six paired-measures for the ranks in Figure 1B.

We underscore our appreciation for large databases and the power of aggregating confidence-competence paired data into groups. Employment of groups attenuates noise in such data, as we described earlier (Nuhfer et al. 2016), and enables us to perceive clearly the relationship between self-assessed competence and demonstrable competence.  Figure 2 employs a database of over 5000 participants but depicts them in 104 randomized (from all institutions) groups of 50 drawn from within each academic rank. The figure confirms the general pattern shown in Figure 1 by showing a general upwards trend from novice (freshmen and sophomores), developing experts (juniors, seniors and graduate students) through experts (professors), but with considerable overlap between novices and developing experts.

Figure 2

Figure 2. Mean postdicted self-assessment ratings (ordinate) versus mean science literacy competency scores by academic rank.  Figure 2 comes from selecting random groups of 50 from within each academic rank and plotting paired-measures of 104 groups.

The correlations of r = .99 seen in Figure 1B come down a bit to r = .83 in Figure 2. We can understand why this occurs by examining Figure 3 and Table 1. Figure 3 comes from our 2019 database of paired measures, which is now about four times larger than the database used in our earlier papers (Nuhfer et al. 2016, 2017); the earlier results we reported in this same kind of graph continue to be replicated here in Figure 3A.  People generally appear good at self-assessment, and the figure refutes claims that most people are either “unskilled and unaware of it” or “…are typically overly optimistic when evaluating the quality of their performance….” (Ehrlinger, Johnson, Banner, Dunning, & Kruger, 2008). 

Figure 3

Figure 3. Distributions of self-assessment accuracy for individuals (Fig. 3A) and of collective self-assessment accuracy of groups of 50 (Fig. 3B).

Note that the range of the abscissa has gone from 200 percentage points in Fig. 3A to only 20 percentage points in Fig. 3B. Among groups of fifty, 81% of the groups estimate their mean scores within 3 ppts of their actual mean scores. While individuals are generally good at self-assessment, the collective self-assessment means of groups are even more accurate. Thus, the collective averages of classes on detailed course-based knowledge surveys seem to be valid assessments of the mean learning competence achieved by a class.

The larger the groups employed, the more accurately the mean group self-assessment rating is likely to approximate the mean competence test score of the group (Table 1). In Table 1, reading across the three columns from left to right reveals that, as group sizes increase, greater percentages of each group converge on the actual mean competency score of the group.

Table 1

Table 1. Groups’ self-assessment accuracy by group size. Groups’ postdicted mean self-assessed confidence ratings (in ppts) closely approximate the groups’ actual mean demonstrated competency scores (SLCI). In group sizes of 200 participants, the mean self-assessment accuracy for every group is within ±3 ppts. To achieve such results, researchers must use aligned instruments that produce reliable data, as described in Nuhfer (2015) and Nuhfer et al. (2016).
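The effect of group size can be illustrated with a short simulation. The sketch below uses invented numbers, not the SLCI database, to show how averaging noisy individual self-assessments over larger groups pulls the group’s mean rating closer to the group’s actual mean score.

```python
# Illustrative simulation (invented numbers, not the SLCI database) of how
# group size affects the accuracy of mean self-assessment: averaging noisy
# individual ratings over larger groups pulls each group's mean rating
# toward the group's actual mean score.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
actual = rng.normal(65, 12, n)           # actual competence scores, in ppts
rated = actual + rng.normal(0, 15, n)    # noisy individual self-assessed ratings

for size in (1, 50, 200):
    k = n // size
    idx = rng.permutation(n)[: k * size].reshape(k, size)
    gap = np.abs(rated[idx].mean(axis=1) - actual[idx].mean(axis=1))
    within_3 = np.mean(gap <= 3) * 100
    print(f"group size {size:>3}: {within_3:5.1f}% of groups within ±3 ppts")
```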

From Table 1 and Figure 3, we can now understand how the very high correlations in Figure 1B are achievable by using sufficiently large numbers of participants in each group. Figure 3A and 3B and Table 1 employ the same database.

Finally, we verified that we could achieve high correlations like those in Figure 1B within single institutions, even when we examined only the four undergraduate ranks within each. We also confirmed that the rank orderings and best-fit line slopes formed patterns that differed measurably by institution.  Two examples appear in Figure 4. The ordering of the undergraduate ranks and the slope of the best-fit line in graphs such as those in Fig. 4 are surprisingly informative.

Figure 4

Figure 4. Institutional profiles from paired measures of undergraduate ranks. Figure 4A is from a primarily undergraduate, public institution. Figure 4B comes from a public research-intensive university. The correlations remain very high, and the best-fit line slopes and the ordering pattern of undergraduate ranks are distinctly different between the two schools. 

In general, steeply sloping best-fit lines in graphs like Figures 1B, 2, and 4A indicate that significant metacognitive growth is occurring together with the development of content expertise. In contrast, nearly horizontal best-fit lines (these do exist in our research results but are not shown here) indicate that students in such institutions are gaining content knowledge through their college experience but are not gaining metacognitive skill. We can use such information to guide the assessment stage of “closing the loop,” and it helps in taking informed actions. In all cases where undergraduate ranks appear ordered out of sequence in such assessments (as in Fig. 1B and Fig. 4B), we should seek to understand why this is true.

In Figure 4A, “School 7” appears to be doing quite well. The steeply sloping line shows clear growth between lower-division and upper-division undergraduates in both content competence and metacognitive ability. Possibly, the school might want to explore how it could extend the gains of the sophomore and senior classes. “School 3” (Fig. 4B) would probably want to steepen its best-fit line by focusing first on increasing self-assessment skill development across the undergraduate curriculum.

We recently used paired measures of competence and confidence to understand the effects of privilege on varied ethnic, gender, and sexual orientation groups within higher education. That work is scheduled for publication by Numeracy in July 2019. We are next developing a peer-reviewed journal article to use the paired self-assessment measures on groups to understand institutions’ educational impacts on students. This blog entry offers a preview of that ongoing work.

Notes. This blog follows on from earlier posts: Measuring Metacognitive Self-Assessment – Can it Help us Assess Higher-Order Thinking? and Collateral Metacognitive Damage, both by Dr. Ed Nuhfer.

The research reported in this blog distills a poster and oral presentation created by Dr. Edward Nuhfer, CSU Channel Islands & Humboldt State University (retired); Dr. Steven Fleisher, California State University Channel Islands; Rachel Watson, University of Wyoming; Kali Nicholas Moon, University of Wyoming; Dr. Karl Wirth, Macalester College; Dr. Christopher Cogan, Memorial University; Dr. Paul Walter, St. Edward’s University; Dr. Ami Wangeline, Laramie County Community College; Dr. Eric Gaze, Bowdoin College, and Dr. Rick Zechman, Humboldt State University. Nuhfer and Fleisher presented these on February 26, 2019 at the American Association of Behavioral and Social Sciences Annual Meeting in Las Vegas, Nevada. The poster and slides from the oral presentation are linked in this blog entry.


Measuring Metacognitive Self-Assessment – Can it Help us Assess Higher-Order Thinking?

by Dr. Ed Nuhfer, California State Universities (retired)

Since 2002, I’ve built my “Developers’ Diary” columns for the National Teaching and Learning Forum (NTLF) around the theme of fractals and six essential components in the practice of college teaching: (1) affect, (2) levels of thinking (intellectual & ethical development), (3) metacognition, (4) content knowledge & skills, (5) pedagogy, and (6) assessment. The first three focus on the internal development of the learner, and the last three focus on the knowledge being learned. All six have interconnections through being part of the same complex neural networks employed in practice.

In past blogs, we noted that affect and metacognition, until recently, were deprecated and maligned by behavioral scientists, with the most deprecated aspect of metacognition being self-assessment. The highest levels of thinking discovered by Perry are heavily affective and metacognitive, so some later developmental models shunned these stages when only cognition seemed relevant to education. However, the fractal model advocates for practicing through drawing on all six components. Thus, metacognition is not merely important for its own merits; we instructors rely on metacognitive reflection to monitor whether we are facilitating students’ learning through attending to all six.

The most maligned components, affect and self-assessment, may offer a key to measuring the overall quality of education and assessing progress toward the highest levels of thinking. Such measurements have been something of a Grail quest for developers. To date, efforts to make such measures have proven labor intensive and expensive.

Measuring: What, Who, Why, and How?

The manifestation of affect in the highest Perry stages indicates that cognitive expertise and skills eventually connect to affective networks. At advanced levels of development, experts’ affective feelings are informed feelings that lead to rapid decisions for action that are usually effective. In contrast, novices’ feelings are not informed. Beginners are tentative and take a trial-and-error approach rather than an efficient path to a solution. By measuring how well students’ affective feelings of their self-assessed competence have integrated with their cognitive expertise, we should be able to assess their stage of progress toward high-level thinking.

To assess a group’s (a class, class rank, or demographic category) state of development, we can obtain the group’s mean self-assessments of competence on an item-by-item basis from a valid, reliable multiple-choice test that requires some conceptual thinking. We have such a test in the 25-item Science Literacy Concept Inventory (SLCI). We can construct a knowledge survey of this Inventory (KSSLCI) to give us 25 item-by-item self-assessed estimates of competence from each participant.

As demonstrated in 2016 and 2017, item-by-item averages of group responses attenuate the random noise present in individuals’ responses. Thus, assessments done by using aggregate information from groups can provide a clear self-assessment signal that allows us to see valid differences between groups.

If affective self-assessed estimates become increasingly informed as higher-level thinking capacity develops, then we should see the aggregate item-by-item paired measures correlate with increasing strength as groups gain in participants who possess higher-order thinking skill. We can indeed see this trend.

Picture the Results

For clear understanding, it is useful first to see what graphs of paired measures of random noise (meaningless nonsense) look like (Figure 1A) and how paired measures look when they correlate perfectly (Figure 1B). We produce these graphs by inputting simulated data into our SLCI and KSSLCI instruments (Figure 1).

Random nonsense produces a nearly horizontal line along the mean (“regression to the mean”) of 400 random simulated responses to each of the 25 items on both instruments. The best-fit line has values of nearly zero for both correlation (r) and line slope (Figure 1A).

We use a simulated set of data twice to get the pattern of perfect correlation, in which the participants’ mean SLCI and KSSLCI scores for each item are identical. The best-fit line (Figure 1B) has a correlation (r) and a line slope both of about unity (1). The patterns from actual data (Figure 2) will show slopes and correlations between random noise and perfect order.

Figure 1

Figure 1. Modeling correlational patterns with simulated responses to a measure of competence (SLCI) and a measure of self-assessed competence (KSSLCI). A shows correlational pattern if responses are random noise. B shows the pattern if 400 simulated participants perfectly assessed their competence.
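A short simulation can reproduce these two reference patterns. The sketch below is illustrative only, generating random percent scores rather than using the SLCI and KSSLCI instruments; it shows how purely random responses yield a weak, meaningless fit, while identical paired item means yield a slope and correlation of one.

```python
# Illustrative simulation of the two reference patterns in Figure 1 (random
# percent scores, not real SLCI/KSSLCI data). Random responses give a weak,
# meaningless fit (values near zero, varying with the seed); identical paired
# item means give slope and r of exactly 1.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
n_items, n_people = 25, 400

# Pattern A: per-item means of purely random responses on each instrument.
slci_random = rng.uniform(0, 100, (n_people, n_items)).mean(axis=0)
ks_random = rng.uniform(0, 100, (n_people, n_items)).mean(axis=0)
noise = linregress(slci_random, ks_random)
print(f"random noise:  slope = {noise.slope:.2f}, r = {noise.rvalue:.2f}")

# Pattern B: perfect self-assessment, the same item means on both instruments.
item_means = rng.uniform(40, 90, n_items)
perfect = linregress(item_means, item_means)
print(f"perfect match: slope = {perfect.slope:.2f}, r = {perfect.rvalue:.2f}")
```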

Next, we look at the actual data obtained from 768 novices (freshmen and sophomores—Figure 2A). Novices’ self-assessed competence and actual competence have a significant positive correlation. The slope is 0.319 and r is .69, so the self-assessment measures explain about half of the variance (r²) in SLCI scores. Even novices do not appear to be “unskilled and unaware of it.” Developing experts (juniors, seniors, and graduate students; N = 831 in Figure 2B) produce a fit line with a slightly steeper slope of 0.326 and a stronger r of .77. Here, the self-assessment measures explain about 60% of the variance in the Inventory scores.

When we examine experts (109 professors in Figure 2C), the fit line steepens to a slope of 0.472, and a correlation of r = .83 explains nearly 70% of the variance in Inventory scores. The trend from novice to expert is clear.

Finally, Figure 2D shows the summative mean SLCI scores and KSSLCI ratings for the four undergraduate ranks plus graduate students and professors. The values of KSSLCI/SLCI increase in order of academic rank. The correlation (r) between the paired measures is close to unity, and the slope of 0.87 produces a pattern very close to that of perfect self-assessment (Figure 1B).

Figure 2

Figure 2: Correlations from novice to expert of item-by-item group means of each of the 25 items addressed on the KSSLCI and the SLCI. Panel A contains the data from 768 novices (freshmen and sophomores). B consists of 831 developing experts (juniors, seniors and graduate students). C comes from 109 experts (professors). Panel D employs all participants and plots the means of paired data by academic rank. We filtered out random guessing by eliminating data from participants with SLCI scores of 32% and lower.

Figure 2 supports several conclusions: that self-assessments are not random noise; that knowledge surveys reflect actual competence; that affective development occurs with cognitive development; and that a group’s ability to self-assess accurately seems indicative of the group’s general state of intellectual development.

Where might your students fall on the continuum of measures illustrated above? By using the same instruments we employ, your students can get measures of their science literacy and self-assessment accuracy, and you can get an estimate of your class’s present state of intellectual development. The work that led to this blog is under IRB oversight, and getting these measures is free. Contact enuhfer@earthlink.net for further information.


Using Metacognition to Support Graduating High School Seniors with a LD to Prepare and Transition Successfully to College (Part II)

by Mary L. Hebert, PhD
Campus Director, The Regional Center for Learning Disabilities,
Fairleigh Dickinson University

High school commencement ushers in images of caps and gowns and goodbyes to four years of familiar teachers, friends, routines, challenges, and successes. While the focus seems to be on completing a phase of one’s life, commencement actually means a beginning or a start. With high school now a completed chapter, the summer months will be spent preparing for the transition to college. ALL students entering college will face similar adjustments. Students with a history of a learning disability, however, may benefit from a purposeful, strategic, and more metacognitive plan for the transition.

Transition and Related Feelings

Students who have had a 504 plan or Individualized Education Plan (IEP) during their K-12 years may face concerns similar to those of other students, yet have a heightened sensitivity to things such as academic performance, managing the pace and independence of college life, leaving behind supports and resources that have been familiar and helpful, and wondering whether and where resources at college will be available and helpful. Like any first-year student, they will have concerns about making new friends, but these may be heightened if the student has had social challenges that accompanied their LD. Students with a history of LD will often express the challenge of finding a balance among work, study, time to relax, and time to be social. Findings by Hall and Webster (2008) indicate that college students with LD report self-doubt about being able to perform as well as their non-LD college peers. Encouraging active preparation that fosters self-awareness and builds strategies of approach will enrich the metacognitive preparation.

In this post, I will continue my series on how we can use metacognitive practices to support LD students during this transition time (see also Part I). Here I will focus on three key areas: academics, social interactions, and finding balance. Prompts in the form of questions are suggested for each area. Metacognition enriches self-awareness through prompts and reflection, fostering the high-level critical thinking and concepts one can apply to a situation and to how one functions.

I propose that metacognition can be applied before day one at college and hopefully assist with a more metacognitive approach to the transition prior to stepping onto campus.

Academics:

Most students ponder how college will be different from high school; students with learning disabilities frequently ponder this even more. College academics will be different. Typically, students experience the differences in coursework in terms of the pace and the degree of independence required in preparing for and mastering the material. Students can be encouraged to discuss, or better yet to list, their reflections on prompts that will increase self-awareness about the differences they anticipate and the strategies they might apply to manage those differences (i.e., encourage metacognition). Prompts that parents, teachers, tutors, and others familiar with the student can consider include:

  • How do you think classes will be different in college?
  • What strategies have you learned in high school that you will bring to college?
  • What areas do you still have a hard time with?
  • What resources will there be in college that can help you with these areas?
  • Have you looked on your college’s website or reached out for more information about the resources you will use for support?
  • Is there a program on your campus that specifically responds to the needs of students with LD, and do you intend to reach out to this resource?

Supporting a student in answering and reflecting on these prompts will promote metacognitive awareness and ultimately help create a plan for the academic tasks of college. It is the student who is least prepared for the differences between high school and college who may face the most difficulty during the transition. Preparation prevents perspiration and is key to the transition.

Social:

If there is one common denominator for transitioning first-year students, it is the adjustment to a new social arena on campus. No matter whom they have been friends with, or how many or few friends they have had, students will need to build a new social circle. Supporting incoming freshmen in thinking about and anticipating the changes and choices they will have to make helps them adjust and consider what will be important and a priority in their social life at college. In preparing to take on the tasks of social adjustment, the goal is to enhance awareness of the skills that will be needed to connect with new friends.

To prepare for the anticipated social adjustment, a person familiar with and supportive of the student can prompt him or her to respond to the following:

  • How have I been successful in my relationships with peers and authority figures in the past?
  • Where have I had challenges?
  • What two areas do I think need to change?
  • How will these improve how I manage socially?
  • What activities or interests do I have that may be areas I pursue in college clubs or organizations?
  • What resources does my new college have that I can use to help me in making social connections?

These and other prompts can channel past experience into helpful reflection, which will not only help a student organize and reflect on challenges in this arena, but also highlight successes and strengths so that these can become a part of a strategy or plan they can put in their college transition ‘toolbox.’

Balance:

Balance is key for us all and truly a never-ending endeavor; however, during the first year it is particularly challenging to establish. Students with LD often have a history of structured support in tackling academics, time management, sleep, recreation, etc. College life will usher in a new life of finding balance more independently. Time management and staying adequately organized are two of the most commonly discussed issues; they can be key factors in success as well as factors that interfere with it. Once again, encourage your student to reflect on some prompts that promote metacognitive reflection and a plan of approach. Consider the following:

  • What is your plan for keeping track of your course work and other commitments (social, clubs, appointments etc)? A traditional planner book? A digital planning system?
  • What efforts to stay organized have worked in the past? Why did they work?
  • What has not worked in the past? Why not?
  • How will you fit in sleep, wellness needs, recreation, and other commitments with school work?
  • What will be challenging in doing this?
  • What will be the red flags that you are having a hard time finding balance?
  • What will be your plan of action if you are having a hard time with the balance of college life?
  • What will be your go-to resources on campus and off campus to support you in finding balance?

In conclusion, supportive prompts and reflection will promote awareness, critical thinking, and purposeful planning for these issues in the transition to college. Doing so prior to day one of college is helpful, but it can also be continued as the student enters college and embraces the new realities of college life.

Understanding how one approaches academics is particularly important for a student with a learning disability; it will be key for college wellness and will help them navigate the transition. By applying metacognition, students can be encouraged not only to think about their thinking regarding academics, social development, and finding balance, but also to discern strategies to apply, increasing their perceived capacity to self-manage the challenges ahead. With these skills in hand, self-advocacy is heightened, which is a key element of success for college students with learning disabilities.

Hall, Cathy W. and Raymond E. Webster (2008). Metacognitive and Affective Factors of College Students With and Without Learning Disabilities. Journal of Postsecondary Education and Disability, 21(1).


Supporting Student Self-Assessment with Knowledge Surveys

by Dr. Lauren Scharff, U. S. Air Force Academy*

In my earlier post this year, “Know Cubed” – How do students know if they know what they need to know?, I introduced three challenges for accurate student self-assessment. I also introduced the idea of incorporating knowledge surveys as a tool to support student self-assessment (an aspect of metacognitive learning) and promote metacognitive instruction. This post shares my first foray into the use of knowledge surveys.

What exactly are knowledge surveys? They are collections of questions that support student self-assessment of their understanding of course material and related skills. Students complete the questions either at the beginning of the semester or prior to each unit of the course (pre), and then again immediately prior to exams (post-unit instruction). When answering the questions, students rate their ability to answer each question (similar to a confidence rating) rather than fully answering it. The type of learning expected is highlighted by including the Bloom’s taxonomy level at the end of each question. Completing knowledge surveys develops metacognitive awareness of learning and can help guide more efficient studying.

Example knowledge survey questions
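To give a sense of the format in code, here is a hypothetical sketch of how knowledge survey items and one student’s self-ratings might be represented and summarized. The questions, rating scale, and data structure are invented for illustration; they are not taken from my actual course survey or its delivery platform.

```python
# Hypothetical sketch of knowledge survey items and one student's
# self-ratings. The questions, rating scale, and structure are invented
# for illustration; they are not the actual course survey.
from dataclasses import dataclass

@dataclass
class SurveyItem:
    question: str
    bloom_level: str   # e.g., "Remember" or "Explain"
    self_rating: int   # 1 = cannot answer ... 3 = confident I can answer fully

unit_survey = [
    SurveyItem("Define the key term introduced in Lesson 1.", "Remember", 3),
    SurveyItem("Explain how the two processes from Lesson 2 interact.", "Explain", 1),
    SurveyItem("Explain why the Lesson 3 demonstration produced its result.", "Explain", 2),
]

# Group low-confidence items by Bloom's level to focus study time where the
# self-assessment signals a gap.
for level in sorted({item.bloom_level for item in unit_survey}):
    weak = [i.question for i in unit_survey
            if i.bloom_level == level and i.self_rating < 3]
    print(f"{level}: {len(weak)} item(s) to revisit -> {weak}")
```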

My motivation to include knowledge surveys in my course was a result of a presentation by Dr. Karl Wirth, who was invited to be the keynote speaker at the annual SoTL Forum we hold at my institution, the United States Air Force Academy. He shared compelling data and anecdotes about his incorporation of knowledge surveys into his geosciences course. His talk inspired several of us to try out knowledge surveys in our courses this spring.

So, after a semester, what do I think about knowledge surveys? How did my students respond?

In a nutshell, I am convinced that knowledge surveys enhanced student learning and promoted student metacognition about their learning. Their use provided additional opportunities to discuss the science of learning and helped focus learning efforts. But, there were also some important lessons learned that I will use to modify how I incorporate knowledge surveys in the future.

Evidence that knowledge surveys were beneficial:

My personal observations included the following, with increasing levels of each as the semester went on and students learned how to learn using the knowledge survey questions:

  • Students directly told me how much they liked and appreciated the knowledge survey questions. There is a lot of unfamiliar and challenging content in this upper-level course, so the knowledge survey questions served as an effective road map to help guide student learning efforts.
  • Students asked questions in class directly related to the knowledge survey questions (as well as other questions). Because I was clear about what I wanted them to learn, they were able to judge if they had solid understanding of those concepts and ask questions while we were discussing the topics.
  • Students came to office hours to ask questions, and were able to more clearly articulate what they did and did not understand prior to the exams when asking for further clarifications.
  • Students realized that they needed to study differently for the questions at different Bloom’s levels of learning. “Explain” questions required more than basic memorization of the terms related to those questions. I took class time to suggest and reinforce the use of more effective learning strategies and several students reported increasing success and the use of those strategies for other courses (yay!).
  • Overall, students became more accurate in assessing their understanding of the material prior to the exam. More specifically, when I compared the knowledge survey reports with actual exam performance, students progressively became more accurate across the semester. I think some of this increase in accuracy was due to the changes stated in points above.

Student feedback included the following:

  • End-of-semester feedback from students indicated that the vast majority of them thought the knowledge surveys supported their learning, with half of them giving the highest rating of “definitely supports learning, keep as is.”
  • Optional reflection feedback suggested development of learning skills related to the use of the knowledge surveys and perceived value for their use. The following quote was typical of many students:

At first, I was not sure how the knowledge surveys were going to help me. The first time I went through them I did not know many of the questions and I assumed they were things I was already supposed to know. However, after we went over their purpose in class my view of them changed. As I read through the readings, I focused on the portions that answered the knowledge survey questions. If I could not find an answer or felt like I did not accurately answer the question, I bolded that question and brought it up in class. Before the GR, I go back through a blank knowledge survey and try to answer each question by myself. I then use this to compare to the actual answers to see what I actually need to study. Before the first GR I did not do this. However, for the second GR I did and I did much better.

Other Observations and Lessons learned:

Although I am generally pleased with my first foray into incorporating knowledge surveys, I did learn some lessons and I will make some modifications next time.

  • The biggest lesson is that I need to take even more time to explain knowledge surveys, how students should use them to guide their learning, and how I use them as an instructor to tailor my teaching.

What did I do this past semester? I explained knowledge surveys on the syllabus and verbally at the beginning of the semester. I gave periodic general reminders and included a slide in each lesson’s PPT that listed the relevant knowledge survey questions. I gave points for completion of the knowledge surveys to increase the perception of their value. I also included instructions about how to use them at the start of each knowledge survey:

Knowledge survey instructions

Despite all these efforts, feedback and performance indicated that many students really didn’t understand the purpose of knowledge surveys or take them seriously until after the first exam (and some even later than that). What will I do in the future? In addition to the above, I will make more explicit connections during the lesson and as students engage in learning activities and demonstrations. I will ask students to share how they would explain certain concepts using the results of their activities and the other data that were presented during the lesson. The latter will provide explicit examples of what would (or would not) be considered a complete answer for the “explain” questions in contrast to the “remember” questions.

  • The biggest student feedback suggestion for modification of the knowledge surveys pertained to the “pre” knowledge surveys given at the start of each unit. Students reported they didn’t know most of the answers and felt like completing the pre knowledge surveys was less useful. As an instructor, I used those “pre” responses to get a pulse on their level of prior knowledge and to tailor my lessons. Thus, I need to better communicate my use of those “pre” results, because no one likes to take time to do what they perceive as “busy work.”
  • I also learned that students created a shared GoogleDoc where they would insert answers to the knowledge survey questions. I am all for students helping each other learn, and I encourage them to quiz each other so they can talk out the answers rather than simply re-reading their notes. However, it became apparent when students came in for office hours that the shared “answers” to the questions were not always correct and were sometimes incomplete. This was especially true for the higher-level questions. I personally was not a member of the shared document, so I did not check the answers in it. In the future, I will encourage students earlier and more explicitly to be aware of the type of learning being targeted and the type of responses needed for each level, and to critically evaluate the answers being entered into such a shared document.

In sum, as an avid supporter of metacognitive learning and metacognitive instruction, I believe that knowledge surveys are a great tool for supporting both student and faculty awareness of learning, the first step in metacognition. We then should use that awareness to make necessary adjustments to our efforts – the other half of a continuous cycle that leads to increased student success.

———————————————–

* Disclaimer: The views expressed in this document are those of the author and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.


Where Should I Start With Metacognition?

by Patrick Cunningham, Rose-Hulman Institute of Technology

Have you ever had a student say something like this to you? “I know the material, I just couldn’t show you on the exam.” How do you respond?

I have heard such comments from students, and I think they exemplify two significant deficiencies.

First, students are over-reliant on rehearsal learning strategies. Rehearsal is drill-and-practice or repetitive practice aimed at memorization and pattern matching. Such practices lead to surface learning and shallow processing. Students know facts and can reproduce solutions to familiar problems, but struggle when the problem looks different. Further, when faced with real-world situations they are often not even able to identify the need for the material let alone apply it. Only knowing material by rote is insufficient for fluency with it. For example, I can memorize German vocabulary and grammar rules, but engaging someone from Germany in a real conversation requires much more than just knowing words and grammar.

Second, students are inaccurate in their self-assessments of their learning, which can lead to false confidence and poor learning choices (Ehrlinger & Shain 2014). Related to this, I have developed a response to our hypothetical student. I ask, “How do you know you know the material?” In reply, students commonly point to looking over notes, looking over homework, reworking examples or homework problems, or working old exams – rehearsal strategies. I often follow up by asking how they assessed their ability to apply the material in new situations. This often brings a mixture of surprise and confusion. I then try to help them discover that while they are familiar with the concepts, they are not fluent with them. Students commonly confuse familiarity with understanding. Marilla Svinicki (2004) calls this the Illusion of Comprehension, and others have called it the illusion of fluency. Continuing the language example, I could more accurately test my knowledge of German by attempting and practicing conversations in German rather than just doing flashcards on vocabulary and grammar rules. Unless we employ concrete, demonstrable, and objective measures of our understanding, we are prone to inaccurate self-assessment and overconfidence. And, yes, we and our students are susceptible to these maladies. We can learn about and improve ourselves as we help our students.

Addressing these two deficiencies can be a good place to start with metacognition. Metacognition is the knowledge and regulation of our thinking processes. Our knowledge of strategies for building deeper understanding and our awareness of being susceptible to the illusion of comprehension are components of metacognitive knowledge. Our ability to regulate our thinking (learning) and apply appropriate learning strategies is critically dependent on accurate self-assessment of our level of understanding and our learning processes, specifically, in metacognitive monitoring and evaluation. So how can we support our students’ metacognitive development in these areas?

To help our students know about and use a broader range of learning strategies, we can introduce them to new strategies and give them opportunities to practice them. To learn more deeply, we need to help students move beyond rehearsal strategies. Deeper learning requires expanding and connecting the things we know, and is facilitated by elaborative and organizational learning strategies. Elaboration strategies aid the integration of knowledge into our knowledge frameworks by adding detail, summarizing, and creating examples and analogies. Organizational strategies impose structure on material and help us describe relationships among its elements (Dembo & Seli 2013).

We can help our students elaborate their knowledge by asking them to: 1) explain their solutions or mistakes they find in a provided solution; 2) generate and solve “what-if” scenarios based on example problems (such as, “what if it changed from rolling without slipping to rolling with slipping”); and 3) create and solve problems involving specific course concepts. We can help our students discover the structure of material by asking them to: 1) create concept maps or mind maps (though you may first need to help them learn what these are and practice creating them); 2) annotate their notes from a prior day or earlier in the period; and 3) reorganize and summarize their notes. Using these strategies in class builds students’ familiarity with them and improves the likelihood of students employing them on their own. Such strategies help students achieve deeper learning, knowing material better and making it more accessible and useable in different situations (i.e., more transferable). For example, a student who achieved deeper learning in a system dynamics course will be more likely to recognize the applicability of a specific dynamic model to understand and design a viscosity experiment in an experiment design class.

To help our students engage in more accurate self-assessment we can aid their discovery of being susceptible to inaccurate self-perceptions and give them opportunities to practice strategies that provide concrete, demonstrable, and objective measures of learning. We can be creative in helping students recognize their propensity for inaccuracy. I use a story about an awkward conversation I had about the location of a youth hostel while travelling in Germany as an undergraduate student. I spent several minutes with my pocket dictionary figuring out how to ask the question, “Wissen Sie wo die Jugendherberge ist?” When the kind stranger responded, I discovered I was nowhere near fluent in German. It takes more than vocabulary and grammar to be conversant in the German language!

We can help our students practice more accurate self-assessment by asking them to: 1) engage in brief recall and review sessions (checking the completeness and correctness of their recalled lists); 2) self-test without supports (tracking the time elapsed and the correctness of their solutions); 3) explain solutions (noticing the coherence, correctness, and fluency of their responses); and 4) create and solve problems based on specific concepts (again, noting the correctness of their solutions and the time elapsed). Each of these strategies creates observable and objective measures (examples noted in parentheses) capable of indicating level of understanding. When I have students do brief (1-2 minute) recall exercises in class, I have them note omissions and incorrect statements as they review their notes and compare with peers. These indicate concepts they do not know as well.

Our students are over-reliant on rehearsal learning strategies and struggle to accurately assess their learning. We can help our students transform their learning by engaging them with a broader suite of learning strategies and concrete and objective measures of learning. By starting here, we are helping our students develop transferable metacognitive skills and knowledge, capable of improving their learning now, in our class, and throughout their lives.

References

Dembo, M., & Seli, H. (2013). Motivation and learning strategies for college success: A focus on self-regulated learning (4th ed.). New York, NY: Routledge.

Ehrlinger, J., & Shain, E. A. (2014). How accuracy in students’ self perceptions relates to success in learning. In V. A. Benassi, C. E. Overson, & C. M. Hakala (Eds.), Applying science of learning in education: Infusing psychological science into the curriculum. Retrieved from the Society for the Teaching of Psychology web site: http://teachpsych.org/ebooks/asle2014/index.php

Svinicki, M. (2004). Learning and motivation in the postsecondary classroom. San Francisco, CA: John Wiley & Sons.


Developing Metacognition with Student Learning Portfolios

In IDEA Paper #44, The Learning Portfolio: A Powerful Idea for Significant Learning, Dr. John Zubizarreta shares models and guidance for incorporating learning portfolios. He also makes powerful arguments regarding the ability of portfolios to engage students in meaningful reflection about their learning, which in turn supports metacognitive development and life-long learning.

 


Glimmer to Glow: Creating and Growing the Improve with Metacognition Site

by Lauren Scharff, Ph.D., U. S. Air Force Academy *

It’s been three years since Improve with Metacognition (IwM) went live, but the glimmer of the idea started more than a year prior to that, and we still consider it a work in progress. The adventure started with a presentation on metacognition that Aaron Richmond and I gave at the Southwestern Psychological Association (SWPA) convention in 2013. We both had independently been working on projects related to metacognition, and decided to co-present in the teaching track of the conference. We had good attendance at the session and an enthusiastic response from the audience. I made the suggestion of forming some sort of online community in order to continue the exchange of ideas, and passed around a sign-up sheet at the end of the session.

I have to say that my initial idea of an online community was very limited in scope: some sort of online discussion space with the capability to share documents. I thought it would be super quick to set up. Well, the reality was not quite so easy (lol) and our ambitions for the site grew as we discussed it further, but with help from some friends we got it going just in time to unveil it at the SWPA 2014 convention. Along the way I pulled in our third co-creator, John Draeger, who helped shape the site and presented with us at the 2014 convention.

As Aaron mentioned in his reflection last week, during the past three years we have shared information about the site at a variety of conferences both within the United States and beyond. The response has always been positive, even if fewer people than we’d like take the next step of signing up for updates or writing guest contributions. One common line of questioning has been, “This is fantastic! I am interested in doing something similar on the topic of X. How did you get it going?”

We do hope that IwM can serve as a model for other collaboration sites, so here are a few things that stand out for me as I reflect on our ongoing efforts and the small glow we have going so far.

  • Partnerships are essential! John, Aaron, and I have some different skill sets and areas of expertise relevant to running the site, and our professional networks reach different groups. Further, with three of us running it, when life gets nuts for one of us, the others can pick up the slack. I can’t imagine trying to set up and maintain a site like IwM all on my own.
  • Practice metacognition! The three of us periodically join together in a Skype session to reflect on what seems to be working (or not), and share ideas for new features, collaboration projects, etc. We use that reflection to self-regulate our plans for the site (awareness plus self-regulation → metacognition). Sometimes we’ve had to back off on our initiatives and try new strategies because the initial effort wasn’t working as we’d hoped. A long-time saying I’m fond of is, “the only way to coast is downhill.” Any endeavor, even if wildly successful at first, will require some sort of ongoing effort to keep it from coasting downhill.
  • Be open and provide an environment that supports professional development! (And realize this requires time and effort.) We want to encourage broad involvement in the site and provide opportunities for a wide variety of people interested in metacognition to share their ideas and efforts. We also hope to have a site that is viewed as being legitimate and professional. This balancing act has been most apparent with respect to the blog posts, because not everyone has strong writing skills. And, we believe that even those with strong writing skills can benefit from feedback. Thus, we provide feedback on every submitted post, sometimes suggesting only minor tweaks and sometimes suggesting more substantial revisions. The co-creators even review each other’s drafts before they are posted. As anyone who provides feedback on writing assignments or reviews journal articles knows, this process is a labor of love. We learn a lot from our bloggers – they share new ideas and perspectives that stimulate our own thinking. But, providing the appropriate level of feedback so as to clearly guide the revisions without squashing enthusiasm is sometimes a challenge. Almost always, at least two of the co-creators review each blog submission, and we explicitly communicate with each other prior to sending the feedback, sometimes combined and sometimes separate. That way we can provide a check on the tone and amount of feedback we send. Happily, we have received lots of thanks from our contributors and we don’t have any cases where a submission was withdrawn following receipt of our feedback.

Upon further reflection, my overall point is that maintaining a quality blog, resource, and collaboration site requires more than just getting people to submit pieces and posting articles and other resources. We hadn’t fully realized the level of effort required when we started, and we have many new ideas that we still hope to implement. But, on so many levels all the efforts have been worthwhile. We believe we have a fantastic (and growing) collection of blogs and resources, and we have had several successful collaboration projects (with more in the works).

We welcome your suggestions, and if you have the passion and time to help us glow even brighter, consider joining us as either a collaboration-consultant or as a guest blogger.

Lauren

* Disclaimer: The views expressed in this document are those of the authors and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.


The Great, The Good, The Not-So-Good of Improve with Metacognition: An Exercise in Self-Reflection

By Aaron S. Richmond, Ph. D., Metropolitan State University of Denver

Recently, Lauren, John, and I reflected on and discussed our experiences with Improve with Metacognition (IwM). We laughed (no crying) and found (at least I did) that our experiences were rich and rewarding. As such, we decided that each of us would write a blog on our experience and self-reflection with IwM. Therefore, I’m up. When thinking about IwM, the theme that kept surfacing in my mind is that we are Great, Good, and on a few things—Not-So-Good.

The Great

Oh, how can I count the ways in which IwM is Great? Well, by counting. In my reflection on what we have accomplished, it became apparent that here at IwM, we have been highly productive in our short existence. Specifically, we have published over 200 blog posts, along with resources on metacognition measures, videos, instruction, curated research articles, and teaching metacognition (see our new call for Teaching with Metacognition). We have created a space for collaborators to gather and connect. We have engaged in our own research projects. We have had over 35 contributors from all over North America and a few from beyond, ranging from preeminent scholars in the fields of metacognition and SoTL to graduate students writing their first blog. Speaking for Lauren and John, I can only hope that the explosion in productivity and high-quality research and writing continues with IwM.

The Good

Ok, it is not just Good—this is just another thing that is great. IwM has produced some amazing blogs. I can’t review them all because this time I will keep to my word count, but I would like to highlight a few insightful blogs that resonated with me. First, Ed Nuhfer recently wrote a blog titled Collateral Metacognitive Damage (2017, February). The title is amazing in itself, but Ed extols the use of self-assessments, explains why one’s approach to and perspective on self-assessment matter most (be the little engine that could vs. the little engine that couldn’t), and provides a marvelous self-assessment tool (http://tinyurl.com/metacogselfassess). I have already shared this with my students and colleagues. Second, one of the topics I would never have thought of was Stephen Chew’s blog on Metacognition and Scaffolding Student Learning (2015, July). I have used scaffolding (and still do) throughout all of my courses; however, I never considered that by over-scaffolding I could reduce my students’ ability to calibrate (know when you know or don’t know something). That is, providing too much scaffolding may cause students to become highly overconfident and overestimate their knowledge and skill. Third, Chris Was wrote about A Mindfulness Perspective on Metacognition (2014, October). I have been begrudgingly and maybe curmudgeonly resistant to mindfulness. As such, I was skeptical even though I know how great Chris’ research is. Well, Chris convinced me of the value of mindfulness and its connection to metacognition. Chris said it best: “It seems to me that if metacognition is knowledge and control of one’s cognitive processes and training in mindfulness increases one’s ability to focus and control awareness in a moment-by-moment manner, then perhaps we should reconsider, and investigate the relationship between mindfulness and metacognition in education and learning.” There are literally dozens of other blogs that I have incorporated into both my teaching and research. The work done at IwM is not merely good, it is great!

The Not-So-Good

IwM has been a labor of love. Speaking for myself, the work that has been done is amazing, exhausting, invigorating, productive, and fulfilling. However, what I believe we have been “Not-So-Good” at is getting the word out. That is, considering that there are over 200 blogs, resources, curated research articles, collaborations, etc., I believe that one of the things we are struggling with is spreading the gospel of metacognition. Despite the fact that Lauren, John, and I have travelled across the globe (literally) promoting IwM at various conferences, too few people know about the good work being done. Moreover, notwithstanding that we have 258 email subscribers, I feel (passionately) that we can do better. I want other researchers and practitioners not only to benefit from the work we’ve done but also to contribute to new IwM blogs, resources, research, and collaboration.

As I do with all my blogs, I will leave you with an open-ended question: What can we do to spread the word of the Great and Good work here at IwM?

Please give me/us some strategies or go out and help spread the word for us.

References

Chew, S. (2015, July). Metacognition and scaffolding student learning. Retrieved from https://www.improvewithmetacognition.com/metacognition-and-scaffolding-student-learning/

Nuhfer, E. (2017, February). Collateral metacognitive damage. Retrieved from https://www.improvewithmetacognition.com/collateral-metacognitive-damage/

Was, C. (2015, October). A mindfulness perspective on metacognition. Retrieved from https://www.improvewithmetacognition.com/a-mindfulness-perspective-on-metacognition/


Hypercorrection: Overcoming overconfidence with metacognition

by Jason Lodge, Melbourne Centre for the Study of Higher Education, University of Melbourne

Confidence is generally seen as a positive attribute to have in 21st Century Western society. Confidence contributes to higher self-esteem and self-reported happiness. It apparently makes someone more attractive and leads to better career outcomes. With such strong evidence suggesting the benefits of confidence, it is no wonder that building confidence has become a major focus within many sectors, particularly in professional development and education.

Despite the evidence for the benefits of confidence, it has a dark side: overconfidence. There are many occasions where it is problematic to overestimate our skills or abilities. Learning is one of the most obvious examples. According to the (in)famous Dunning-Kruger effect, unskilled learners are often unaware that they are in fact unskilled. The issue here is that those who are low in knowledge of an area are often ignorant of how much they don’t know about the area.

Overconfidence is particularly problematic for students given how important it is for them to make relatively accurate estimates of how they are progressing. For example, if a student is overconfident about their progress, they may decide to stop reviewing or revising a topic prematurely. If students have difficulty accurately self-evaluating their learning, they can end up underprepared to use the knowledge, for example in an exam or when they need it in practice.

Being wrong can be good

One of the main problems with overconfidence is that students can fail to correct misconceptions or to realise that they are wrong. Being wrong or failing has long been seen as a negative educational outcome.

Recent research on productive failure (e.g. Kapur, 2015) has shown, however, that being wrong and coming to realise it is a powerful learning experience. As opposed to more traditional notions of error-free learning, researchers are now starting to understand how important it is for learners to make mistakes. One of the necessary conditions for errors to be effective learning experiences though is that students need to realise they are making them. This is a problem when students are overconfident because they fail to see themselves failing.

There is a silver lining to overconfidence when it comes to making mistakes though. Research on a process called hypercorrection demonstrates that when learners are highly confident but wrong, if the misconception can be corrected, they have a much more effective learning experience (Butterfield & Metcalfe, 2001). In other words, overconfident students who realise that they are wrong about something tend to be surprised and that surprise means they are more likely to learn from the experience.

How metacognition helps with overconfidence

While hypercorrection has potential for helping students overcome misconceptions and achieve conceptual change, it doesn’t happen automatically. One of the main prerequisites is that students need to have enough awareness to realise that they are wrong. The balance between confidence and overconfidence is therefore precarious. It is helpful for students to feel confident that they can manage to learn new concepts, particularly complex and difficult concepts. Confidence helps students to persist when learning becomes difficult and challenging. However, students can have this confidence without necessarily engaging in careful reflective processing. In other words, confidence is not necessarily related to students being able to accurately monitor their progress.

On the other hand though, it can be easy for students to feel confident in their knowledge of certain misconceptions. This is particularly so if the misconceptions are intuitive and based on real world experience. It is common to have misconceptions about physics and psychology for example because students have vast experience in the physical and social world. This experience gives them intuitive conceptions about the world that are reinforced over time. Some of these conceptions are wrong but their experience gives students high levels of confidence that they are right. Often careful observation or deliberate instructional design is required to shift students’ thinking about these conceptions.

Metacognition is critical in allowing students to monitor and detect when they are making errors or have incorrect conceptions. With misconceptions in particular, students can continue to believe false information if they don’t address the process by which they arrive at a conclusion. Often, overcoming a misconception requires dealing with the cognitive disequilibrium that comes from attempting to overwrite an intuitive conception of the world with a more sophisticated scientific conception.

For example, intuitively a heavy object like a bowling ball and a light object like a feather will fall at different rates, but when both are dropped simultaneously in a vacuum (with air resistance removed), they fall at the same rate. The observation causes disequilibrium between the intuitive notion and the more sophisticated understanding of force and gravity encapsulated by Newton’s laws. Generally, overcoming this kind of disequilibrium requires students to shift strategies or approaches to understanding the concept to redress the faulty logic they relied on to arrive at the initial misconception. So in this example, they need to develop a higher-level conception of gravity that requires shifting away from intuitive notions. Recognising the need for this shift only comes through metacognitive monitoring and effective error detection.
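
In rough terms, here is a quick sketch of why the rates match, combining Newton’s second law with the law of universal gravitation (G is the gravitational constant, M and R are Earth’s mass and radius, and m is the falling object’s mass): the falling object’s mass cancels out of its acceleration,

    \[ a \,=\, \frac{F}{m} \,=\, \frac{1}{m}\cdot\frac{GMm}{R^{2}} \,=\, \frac{GM}{R^{2}} \,\approx\, 9.8\ \text{m/s}^{2} \]

Because a does not depend on m, a bowling ball and a feather accelerate identically once air resistance is removed; the feather’s slower fall in everyday experience is an air-resistance effect, not a difference in gravity.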

So metacognition is often necessary for correcting misconceptions and is particularly effective when students are confident about what they think they know and have the realisation that they are wrong. Overconfidence can therefore be managed through enhanced metacognition.

The research on confidence and hypercorrection suggests that it is good for students to be confident about what they think they know, as long as they are prepared to recognise when they are wrong. This requires the ability to detect errors and, more broadly, to calibrate their perceived progress against their actual progress. While teachers can help with this to a degree through feedback and scaffolding, it is vital that students develop metacognition so that they can monitor when they are wrong or when they are not progressing as they should be. If they can, then there is every chance that the learning experience will be more powerful as a result.

References

Butterfield, B., & Metcalfe, J. (2001). Errors committed with high confidence are hypercorrected. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27(6), 1491–1494. DOI: 10.1037/0278-7393.27.6.1491

Kapur, M. (2015). Learning from productive failure. Learning: Research and Practice, 1(1), 51–65. DOI: 10.1080/23735082.2015.1002195


The Challenge of Deep Learning in the Age of LearnSmart Course Systems (Part 2)

A few months ago, I shared Part 1 of this post. In it I presented the claim that, “If there are ways for students to spend less time per course and still “be successful,” they will find the ways to do so. Unfortunately, their efficient choices may short-change their long-term, deep learning.” I linked this claim to some challenges that I foresaw with respect to two aspects of the online text chosen for the core course I was teaching: 1) the pre-highlighted LearnSmart text, and 2) the metacognition-focused LearnSmart quizzing feature. This feature required students to not only answer the quiz question, but also report their confidence in the correctness of that response. (See Part 1 for details to explain my concerns. Several other posts on this site also discuss confidence ratings as a metacognition tool. See references below.) My stated plan was to “regularly check in with the students, have class discussions aimed at bringing their choices about their learning behaviors into their conscious awareness, and positively reinforcing their positive self-regulation of deep-learning behaviors.” 

This post, Part 2, will share my reflections on how things turned out, along with a summary of some feedback from my students.

With respect to my actions, I did the following in order to increase student awareness of their learning choices and the impact of those choices. Twice early in the semester I took class time to explicitly discuss the possible learning shortcuts students might be tempted to take when reading the chapters (e.g. only reading the highlighted text) and when completing the LearnSmart pre-class quizzes (see Part 1 for details). I shared some alternate completion options that would likely enhance their learning and long-term retention of the material (e.g. reading the full text without highlights and using the online annotation features). Additionally, I took time to share other general learning / studying strategies that have been shown through research to support better learning. These ways of learning were repeatedly reinforced throughout the semester (and linked to content material when applicable, such as when we discussed human learning and memory).

Did these efforts impact student behaviors and choices of learning strategies? Although I cannot directly answer that question, I can share some insights based on some LearnSmart data, course performance, and reflections shared by the students.

With respect to the LearnSmart application that quizzed students at the end of each chapter, one type of data I was able to retrieve was the overall percent of time that student LearnSmart quiz question responses fell into the following correctness and confidence categories (a metacognition-related evaluation):

  1. Students answered correctly and indicated confidence that they would answer correctly
  2. Students answered correctly but indicated that they were not confident of the correctness of their response
  3. Students answered incorrectly and knew they didn’t know the answer
  4. Students answered incorrectly but reported confidence in giving the correct answer

I examined how the percentage of time student responses fell in each category correlated with two course performance measures (final exam grade and overall course grades). Category 1 (correct and confident) and Category 3 (incorrect and knew it) both showed essentially a zero relationship with performance. There was a small positive relationship between being correct but not certain (Category 2) and performance; Category 2 responses might prompt more attention to the topic and ultimately lead to better learning. The strongest correlations (in the negative direction) occurred for Category 4, which was the category about which I was most concerned with respect to student learning and metacognition. There are two reasons students might have responses in that category. They could be prioritizing time efficiency over learning by intentionally always indicating they were confident (so that if they got lucky and answered correctly, the question would count toward the required number that they had to answer both correctly and with confidence; if they indicated low confidence, then the question would not count toward the required number they had to complete for the chapter). Alternately, Category 4 responses could be due to students being mistaken about their own state of understanding, suggesting poor metacognitive awareness and a likelihood of performing poorly on exams despite “studying hard.” Although there was no way for me to determine which of these two causes was underlying the student responses in this category, the negative relationship clearly indicated that those who had more such responses performed worse on the comprehensive final exam and in the course at large.
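
For instructors who want to run a similar check on their own data, here is a minimal sketch of the kind of correlation analysis described above. It assumes a hypothetical per-student summary file; the file name and column names are illustrative, not an actual LearnSmart export, which would need to be reshaped into this form first.

    import pandas as pd
    from scipy.stats import pearsonr

    # Hypothetical per-student summary: one row per student with the percent of
    # quiz responses falling in each correctness/confidence category.
    df = pd.read_csv("learnsmart_summary.csv")
    # expected columns: cat1_pct, cat2_pct, cat3_pct, cat4_pct, final_exam

    categories = {
        "cat1_pct": "correct & confident",
        "cat2_pct": "correct & not confident",
        "cat3_pct": "incorrect & knew it",
        "cat4_pct": "incorrect & confident",
    }

    # Correlate each category's prevalence with final exam performance.
    for column, label in categories.items():
        r, p = pearsonr(df[column], df["final_exam"])
        print(f"{label:>25s}: r = {r:+.2f} (p = {p:.3f})")

A pattern like the one reported here would appear as near-zero correlations for Categories 1 and 3, a small positive correlation for Category 2, and a clearly negative correlation for Category 4.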

I also asked my students to share some verbal and written reflections regarding their choices of learning behaviors. These reflections didn’t explicitly address their reasons for indicating high or low confidence for the pre-class quizzes. However, they did address their choices with respect to reading only the highlighted versus the full chapter text. Despite the conversations at the beginning of the semester stressing that exam material included the full text and that their learning would be more complete if they read the full text, almost half the class reported only reading the highlighted text (or shifting from full to highlighted). These students indicated that their choice was primarily due to perceived time constraints and the fact that the pre-class LearnSmart quizzes focused on the highlighted material, so students could be successful on the pre-class assignment without reading the full text. More positively, a couple of students did shift to reading the full text because they saw the negative impact of only reading the highlighted text on their exam grades. Beyond the LearnSmart behaviors, several students reported increasing use (even in other courses) of the general learning / study strategies we discussed in class (e.g. working with a partner to discuss and quiz each other on the material), and some of them even shared these strategies with friends!

So, what are my take-aways?

Although this should surprise no one who has studied operant conditioning, the biggest take-away for me is that for almost half my students the immediate reinforcement of being able to more quickly complete the pre-class LearnSmart quiz was the most powerful driver of their behavior, despite explicit in-class discussion and their own acknowledgement that it hurt their performance on the later exams. When asked what they might do differently if they could redo the semester, several of these students indicated that they would tell themselves to read the full text. But, I have to wonder if this level of awareness would actually drive their self-regulatory behaviors due to the unavoidable perceptions of time constraints and the immediate reinforcement of “good” performance on the pre-class LearnSmart quizzes. Unfortunately, at this point, instructors do not have control over the questions asked in the LearnSmart quizzes, so that particular (unwanted) reinforcement factor is unavoidable if you use those quizzes. A second take-away is that explicit discussion of high-efficacy learning strategies can lead to their adoption. These strategies were relatively independent of the LearnSmart quiz requirement for the course, so there was no conflict with those behaviors. Although the reinforcement was less immediate, students reported positive results from using the strategies, which motivated them to keep using them and to share them with friends. Personally, I believe that the multiple times that we discussed these general learning strategies also helped because they increased student awareness of them and their efficacy (awareness being an important first step in metacognition).

————

Some prior blog posts related to Confidence Ratings and Metacognition

Effects of Strategy Training and Incentives on Students’ Performance, Confidence, and Calibration, by Aaron Richmond October 2014

Quantifying Metacognition — Some Numeracy behind Self-Assessment Measures, by Ed Nuhfer, January 2016

The Importance of Teaching Effective Self-Assessment, by Stephen Chew, Feb 2016

Unskilled and Unaware: A Metacognitive Bias, by John R. Schumacher, Eevin Akers, and Roman Taraban, April 2016


Using Metacognition to select and apply appropriate teaching strategies

by John Draeger (SUNY Buffalo State) & Lauren Scharff (U. S. Air Force Academy)

Metacognition was a recurring theme at the recent Speaking SoTL (Scholarship of Teaching and Learning) conference at High Point University. Invited speaker Saundra McGuire, for one, argued that metacognition is the key to teaching students how to learn. Stacy Lipowski, for another, argued for the importance of metacognitive self-monitoring through the regular testing of students. We argued for the importance of metacognitive instruction (i.e., the use of reflective awareness and self-regulation to make intentional and timely adjustments to teaching a specific individual or group of students) as a tool for selecting and implementing teaching strategies. This post will share a synopsis of our presentation from the conference.

We started with the assumption that many instructors would like to make use of evidence-based strategies to improve student learning, but they are often faced with the challenge of how to decide among the many available options. We suggested that metacognitive instruction provides a solution. Building blocks for metacognitive instruction include 1) consideration of student characteristics, context, and learning goals, 2) consideration of instructional strategies and how those align with the student characteristics, context, and learning goals, and 3) ongoing feedback, adjustment and refinement as the course progresses (Scharff & Draeger, 2015).

Suppose, for example, that you’re teaching a lower-level core course in your discipline with approximately 35 students where the course goals include the 1) acquisition of broad content and 2) application of this content to new contexts (e.g., current events, personal situations, other course content areas). Students enrolled in the course typically have a variety of backgrounds and ability levels. Moreover, they don’t always see the relevance of the course and they are not always motivated to complete assignments. As many of us know, these core courses are both a staple of undergraduate education and a challenge to teach.

Scholarly teachers (Richlin, 2001) consult the literature to find tools for addressing the challenges just described. Because of the recent growth of SoTL work, they will find many instructional strategies to choose from. Let’s consider four choices. First, Just-in-Time teaching strategies ask students to engage course material prior to class and relay those responses to their instructor (e.g., select problem sets or focused writing). Instructors then use student responses to tailor the lesson for the day (Novak, Patterson, & Gavrin, 1999; Simkins & Maier, 2004; Scharff, Rolf, Novotny, & Lee, 2013). In courses where Just-in-Time teaching strategies are used, students are more likely to read before class and take ownership over their own learning. Second, Team-Based Learning (TBL) strategies also engage students in some pre-class preparation, and then during class, students engage in active learning through a specific sequence of individual work, group work, and immediate feedback to close the learning loop (Michaelsen & Sweet, 2011). TBL has been shown to shift course goals from knowing to applying and create a more balanced responsibility for learning between faculty and students (with students taking on more responsibility). Third, concept maps provide visual representations of the connections (often hierarchical) among important concepts. They can help students visualize relationships among key course concepts (Davies, 2011), but they require some prior understanding of the concepts being mapped. Fourth, mind mapping also leads to visual representations of related concepts, but the process is more free-form and creative, and often requires less prior knowledge. It encourages exploration of relationships and is more similar to brainstorming.

Any of these four tools might be a good instructional choice for the course described above. But how is an instructor supposed to choose?

Drawing inspiration from Tanner (2012), who shared questions to prompt metacognitive learning strategies for students, we recommend that instructors ask themselves a series of questions aligned with each of our proposed building blocks to prompt their own metacognitive awareness and self-regulation (Scharff & Draeger, 2015). For example, instructors should consider the type of learning (both content and skills) they hope their students will achieve for a given course, as well as their own level of preparedness and the time and resources available for incorporating that particular type of teaching strategy.

In the course described above, any of the four instructional strategies might help with the broad acquisition of content, and depending upon how they are implemented, some of them might promote student application of the material to new contexts. For example, while concept maps can facilitate meaningful learning, their often hierarchical structure may not allow for the flexibility associated with making connections to personal context and current events. In contrast, the flexibility of mind mapping might serve well to promote generation of examples for application, but it would be less ideal to support content acquisition. Team-Based Learning can promote active learning and facilitate the application of knowledge to personal contexts and current events, but it requires the instructor to have high familiarity with the course and the ability to be very flexible during class as students are given greater responsibility (which may be problematic with lower-level students who are not motivated to be in the course). Just-in-Time Teaching can promote both content acquisition and application if both are addressed in the pre-class questions. During class, the instructor should show some flexibility by tailoring the lesson to best reach students based on their responses to the pre-class questions, but overall, the lesson is much more traditional in its organization and expectations for student engagement than with TBL. Under these circumstances, it might be that Just-in-Time strategies offer the best prospect for teaching broad content to students with varying backgrounds and ability levels.

While the mindful choice of instructional strategies is important, we believe that instructors should also remain mindful in-the-moment as they implement strategies. Questions they might ask themselves include:

  • What are you doing to “check in” with your learners to ensure progress towards daily and weekly course objectives?
  • What are signs of success (or not) of the use of the strategy?
  • How can you adjust the technique to better meet your students’ needs?
  • Are your students motivated and confident, or are they bored or overwhelmed and frustrated? Are your students being given enough time to practice new skills?
  • If learning is not where it needs to be or student affect is not supportive of learning, what are alternate strategies?
  • Are you prepared to shift to them? If not, then why not?

These prompts can help instructors adjust and refine their implementation of the chosen instructional strategy in a timely manner.

If, for example, Just-in-Time assignments reveal that students are understanding core concepts but having difficulty applying them, then the instructor could tweak the Just-in-Time assignments by more explicitly requiring application examples. These could then be discussed in class. Alternatively, the instructor might keep the Just-in-Time questions focused on content, but start to use mind mapping during class in order to promote a variety of examples of application. In either case, it is essential that instructors explicitly and intentionally consider whether the instructional choice is working as part of an ongoing cycle of awareness and self-regulation. Moreover, we believe that as instructors cultivate their ability to engage in metacognitive instruction, they will be better prepared to make in-the-moment adjustments during their lessons because they will be more “tuned-in” to the needs of individual learners and they will be more aware of available teaching strategies.

While not a magic bullet, we believe that metacognitive instruction can help instructors decide which instructional strategy best fits a particular pedagogical situation and it can help instructors adjust and refine those techniques as the need arises.

References

Davies, M. (2011). Concept mapping, mind mapping and argument mapping: what are the differences and do they matter? Higher education, 62(3), 279-301.

Michaelsen, L. K., & Sweet, M. (2011). Team-based learning. New directions for teaching and learning, (128), 41-51.

Novak, G., Patterson, E., Gavrin, A., & Christian, W. (1999). Just-in-time teaching: Blending active learning with web technology. Upper Saddle River, NJ: Prentice Hall.

Richlin, L. (2001). Scholarly teaching and the scholarship of teaching. New directions for teaching and learning, 2001(86), 57-68.

Scharff, L., & Draeger, J. (2015). Thinking about metacognitive instruction. National Teaching and Learning Forum, 24(5), 4-6.

Scharff, L., Rolf, J., Novotny, S., & Lee, R. (2011). Factors impacting completion of pre-class assignments (JiTT) in Physics, Math, and Behavioral Sciences. In C. Rust (Ed.), Improving student learning: Global theories and local practices: Institutional, disciplinary and cultural variations. Oxford Brookes University, UK.

Simkins, S., & Maier, M. (2009). Just-in-time teaching: Across the disciplines, across the academy. Stylus Publishing, LLC.

Tanner, K. D. (2012). Promoting student metacognition. CBE-Life Sciences Education, 11(2), 113-120.


Does Processing Fluency Really Matter for Metacognition in Actual Learning Situations? (Part Two)

By Michael J. Serra, Texas Tech University

Part II: Fluency in the Classroom

In the first part of this post, I discussed laboratory-based research demonstrating that learners judge their knowledge (e.g., memory or comprehension) to be better when information seems easy to process and worse when information seems difficult to process, even when eventual test performance is not predicted by such experiences. In this part, I question whether these outcomes are worth worrying about in everyday, real-life learning situations.

Are Fluency Manipulations Realistic?

Researchers who obtain effects of perceptual fluency on learners’ metacognitive self-evaluations in the laboratory suggest that similar effects might also obtain for students in real-life learning and study situations. In such cases, students might study inappropriately or inefficiently (e.g., under-studying when they experience a sense of fluency or over-studying when they experience a sense of disfluency). But to what extent should we be worried that any naturally-occurring differences in processing fluency might affect our students in actual learning situations?

Look at the accompanying figure. This figure presents examples of several ways in which researchers have manipulated visual processing fluency to demonstrate effects on participants’ judgments of their learning. When was the last time you saw a textbook printed in a blurry font, or featuring an upside-down passage, or involving a section where pink text was printed on a yellow background? When you present in-person lectures, do your PowerPoints feature any words typed in aLtErNaTiNg CaSe? (Or, in terms of auditory processing fluency, do you deliver half of the lesson in a low, garbled voice and half in a loud, booming voice?) You would probably – and purposefully – avoid such variations in processing fluency when presenting to or creating learning materials for your students. Yet, even in the laboratory with these exaggerated fluency manipulations, the effects of perceptual fluency on both learning and metacognitive monitoring are often small (i.e., small differences between conditions). Put differently, it takes a lot of effort and requires very specific, controlled conditions to obtain effects of fluency on learning or metacognitive monitoring in the laboratory.
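
If you are curious what such a manipulation looks like in practice, or want to build a quick classroom demonstration, a few lines of Python along these lines would generate alternating-case stimuli (the function name and sample sentence are just illustrative):

    # Convert text to aLtErNaTiNg CaSe, one of the perceptual-disfluency
    # manipulations used in laboratory studies of processing fluency.
    def alternating_case(text: str) -> str:
        out, upper = [], False
        for ch in text:
            if ch.isalpha():
                out.append(ch.upper() if upper else ch.lower())
                upper = not upper
            else:
                out.append(ch)
        return "".join(out)

    print(alternating_case("Processing fluency affects judgments of learning."))
    # prints: pRoCeSsInG fLuEnCy AfFeCtS jUdGmEnTs Of LeArNiNg.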

Will Fluency Effects Occur in the Classroom?

Careful examination of methods and findings from laboratory-based research suggests that such effects are unlikely to occur in real-life situations because of how fragile these effects are in the laboratory. For example, processing fluency only seems to affect learners’ metacognitive self-evaluations of their learning when they experience both fluent and disfluent information; experiencing only one level of fluency usually won’t produce such effects. For instance, participants only judge information presented in a large, easy-to-read font as better learned than information presented in a small, difficult-to-read font when they experience some of the information in one format and some in the other; when they only experience one format, the formatting does not affect their learning judgments (e.g., Magreehan et al., 2015; Yue et al., 2013). The levels of fluency – and, perhaps more importantly, disfluency – must also be fairly distinguishable from each other to have an effect on learners’ judgments. Consider, for example, the example formatting in the accompanying figure: learners must notice a clear difference in formatting and in their experience of fluency across the formats for the formatting to affect their judgments. Learners likely must also have limited time to process the disfluent information; if they have enough time to process the disfluent information, the effects on both learning and on metacognitive judgments disappear (cf. Yue et al., 2013; but see Magreehan et al., 2015). Perhaps most important, the effects of fluency on learning judgments are easiest to obtain in the laboratory when the learning materials are low in authenticity or do not have much natural variation in intrinsic difficulty. For example, participants will base their learning judgments on perceptual fluency when all of the items they are asked to learn are of equal difficulty, such as pairs of unrelated words (e.g., “CAT – FORK”, “KETTLE – MOUNTAIN”), but they ignore perceptual fluency once there is a clear difference in difficulty, such as when related word pairs (e.g., “FLAME – FIRE”, “UMBRELLA – RAIN”) are also part of the learning materials (cf. Magreehan et al., 2015).

Consider a real-life example: perhaps you photocopied a magazine article for your students to read, and the image quality of that photocopy was not great (i.e., a source of processing disfluency). We might be concerned that the poor image quality would lead students to incorrectly judge that they have not understood the article, when in fact they had been able to comprehend it quite well (despite the image quality). Given the evidence above, however, this instance of disfluency might not actually affect your students’ metacognitive judgments of their comprehension. Students in this situation are only being exposed to one level of fluency (i.e., just disfluent formatting), and the level of disfluency might not be that discordant from the norm (i.e., a blurry or dark photocopy might not be that abnormal). Further, students likely have ample time to overcome the disfluency while reading (i.e., assuming the assignment was to read the article as homework at their own pace), and the article likely contains a variety of information besides fluency that students can use for their learning judgments (e.g., students might use their level of background knowledge or familiarity with key terms in the article as more-predictive bases for judging their comprehension). So, despite the fact that the photocopied article might be visually disfluent – or at least might produce some experience of disfluency – it would not seem likely to affect your students’ judgments of their own comprehension.

In summary, at present it seems unlikely that the experience of perceptual processing fluency or disfluency will affect students’ metacognitive self-evaluations of their learning in actual learning or study situations. Teachers and designers of educational materials might of course strive by default to present all information to students clearly and in ways that are perceptually fluent, but it seems premature – and perhaps even unnecessary – for them to worry about rare instances where information is not perceptually fluent, especially if there are counteracting factors such as students having ample time to process the material, there only being one level of fluency, or students having other information upon which to base their judgments of learning.

Going Forward

The question of whether or not laboratory findings related to perceptual fluency will transfer to authentic learning situations certainly requires further empirical scrutiny. At present, however, the claim that highly-contrived effects of perceptual fluency on learners’ metacognitive judgments will also impair the efficacy of study behaviors in more naturalistic situations seems unfounded and unlikely.

Researchers might be wise to abandon the examination of highly-contrived fluency effects in the laboratory and instead examine more realistic variations in fluency in more natural learning situations to see if such conditions actually matter for students. For example, Carpenter and colleagues (Carpenter, et al., in press; Carpenter, et al., 2013) have been examining the effects of a factor they call instructor fluency – the ease or clarity with which information is presented – on learning and judgments of learning. Importantly, this factor is not perceptual fluency, as it does not involve purported variations in perceptual processing. Rather, instructor fluency invokes the sense of clarity that learners experience while processing a lesson. In experiments on this topic, students watched a short video-recorded lesson taught by either a confident and well-organized (“fluent”) instructor or a nervous and seemingly disorganized (“disfluent”) instructor, judged their learning from the video, and then completed a test over the information. Much as in research on perceptual fluency, participants judged that they learned more from the fluent instructor than from the disfluent one, even though test performance did not differ by condition.

These findings related to instructor fluency do not validate those on perceptual fluency. Rather, I would argue that they actually add further nails to the coffin of perceptual fluency. There are bigger problems out there than perceptual fluency that we could be worrying about in order to help our students learn and help them make accurate metacognitive judgments. Perhaps instructor fluency is one of those problems, and perhaps it isn’t. But it seems that perceptual fluency is not a problem we should be greatly concerned about in realistic learning situations.

References

Carpenter, S. K., Mickes, L., Rahman, S., & Fernandez, C. (in press). The effect of instructor fluency on students’ perceptions of instructors, confidence in learning, and actual learning. Journal of Experimental Psychology: Applied.

Carpenter, S. K., Wilford, M. M., Kornell, N., & Mullaney, K. M. (2013). Appearances can be deceiving: instructor fluency increases perceptions of learning without increasing actual learning. Psychonomic Bulletin & Review, 20, 1350-1356.

Magreehan, D. A., Serra, M. J., Schwartz, N. H., & Narciss, S. (2015, advance online publication). Further boundary conditions for the effects of perceptual disfluency on judgments of learning. Metacognition and Learning.

Yue, C. L., Castel, A. D., & Bjork, R. A. (2013). When disfluency is—and is not—a desirable difficulty: The influence of typeface clarity on metacognitive judgments and memory. Memory & Cognition, 41, 229-241.