Broaden your self-awareness through reflective journaling

by Mariah Kidd, B.S., GEOSCIENCES, 2022, Boise State University

This is the 4th post in the Guest Editor Series, Metacognition, Writing, and Well-Being, Edited by dawn shepherd, PhD, Ti Macklin, PhD, and Heidi Estrem, PhD

Introduction

The summer after I graduated high school was a turbulent time; plans changed, I felt lost and confused, and I needed a way to make sense of it all. I’ve never understood why, but at that time I felt a natural pull toward writing about my life. So I bought a journal and began scribbling down my thoughts and feelings. Over the last five years, I have used journaling as a tool to digest my experiences. Each time I write, I leave my journal feeling lighter and clearer than when I started because I took time to slow down and release the internal pressure of my mind. My journal slowly became a place where I could express myself freely without worrying about judgment from another person. This took time, however; it was difficult to be honest and non-judgmental with myself about my own feelings. To this day, I continue using my journal as a way to ponder, process, and plan how I want to show up in life.

Building Self-Awareness

Before I began journaling in 2017, I did not practice self-reflection. I needed a practice where my internal world could be reflected back to me in a way that I could understand. My journal is a mirror; it reflects everything about myself back to me. Once I begin writing, parts of myself that I didn’t know existed are revealed; something about writing allows my subconscious thoughts and feelings to emerge. Awareness of my subconscious thoughts and feelings shows me how my life is unknowingly controlled by impulsive reactions or assumptions I carry. This awareness provides an opportunity for me to consciously choose how to respond to situations rather than instinctually reacting in harmful ways.

An entry from my journal on July 18, 2022 is an example of my growing self-awareness:

“Distraction is everywhere. Especially in my mind – my thoughts are constantly trying to direct my attention elsewhere. This morning I noticed myself getting pulled into social media so I decided to start reading. While reading I got distracted more than once. After reading I felt the urge to check my phone again. So I picked up my journal… Now here we are.”

Consistent reflection allows patterns in my life to emerge – only then, once my patterns are revealed through my writing, am I able to make tangible change towards more aligned patterns and habits.

Tracking Growth

As a person who values personal development, re-reading and reflecting on my journals is a useful tool to see how I have grown over the years. Since I began journaling, I have filled 11 journals cover-to-cover with my life story through college and beyond. Last spring I re-read these journals in chronological order from my freshman year of college to where I currently am six months post-graduation.

Reading my journals showed me how subtle and slow the process of growth is. Just like nature, we grow slowly. Each day we have the opportunity to be 1% better than the day before, and over the years that 1% adds up to substantial change. However, change can be difficult to notice in your day-to-day life. This is where the beauty of journaling becomes crystal clear. Journals allow us to time-travel, to see how younger versions of ourselves moved through the world, and they can reveal meaningful changes that had previously gone unnoticed. Once I recognize where growth has already occurred, I feel inspired to take more aligned actions in pursuit of future growth.

Beyond personal growth, reflective practices during college revealed trackable growth as a student. In English classes, university foundations courses, and philosophy classes, I engaged in reflective writing that guided me into new ways of thinking about my academics. I had the opportunity to consider challenges I encountered in projects, acknowledge what I did well, and plan how I could improve in the future. A full college course load can quickly become difficult to navigate, but having reflective practices built into courses created the space to reflect, reground, and feel encouraged through my journey as a student. Reflection was my favorite part of the few classes that incorporated it, and I often wished every class had a reflection component.

Let Your Writing Evolve With You

Over the years, my journal has served many purposes depending on where I am in life. In the past it has been a place to release overwhelming emotions. At other times it captures special experiences whose fullness I want to remember for the rest of my life. When I’m feeling stagnant, I use my journal to organize my life, dial in my habits, and plan how I want to show up. Most commonly now, I use my journal to ask questions and dive deeper into my relationship with myself.

During college I used my journal to separate my personal life from my academic life. I created time and space to process my life outside of school so that I was able to fully show up to my academics without the distraction of unprocessed experiences. Through the years, I’ve realized how important it is to let the purpose of my journal evolve and change as I do because then it can support me at any point in life.

Finding Beauty

Adopting a consistent journaling practice has allowed me to find more meaning and value in my life experiences. I regularly incorporate gratitude into my journaling practice as a reminder that my life is richer and more beautiful than my mind would sometimes like me to believe. Five years ago, I could never have imagined how large a role journaling would play in helping me become a more aligned version of myself day by day. That alone leaves unlimited possibility for the role my journal may play in the coming years of my life. Reflecting and taking aligned action will remain a continual process of refinement and self-discovery.

Additional Resources:

Miller, Laura B. “Review of Journaling as a Teaching and Learning Strategy.” Teaching and Learning in Nursing, vol. 12, no. 1, 2017, pp. 39-42. ISSN 1557-3087.

Pastore, Caitlin. “Stress Management in College Students: Why Journaling Is the Most Effective Technique for This Demographic.” 2020.


Fostering Metacognition to Support Student Learning and Performance

This article by Julie Dangremond Stanton, Amanda J. Sebesta and John Dunlosky “outline the reasons metacognition is critical for learning and summarize relevant research … in … three main areas in which faculty can foster students’ metacognition: supporting student learning strategies (i.e., study skills), encouraging monitoring and control of learning, and promoting social metacognition during group work.” They then “distill insights from key papers into general recommendations for instruction, as well as a special list of four recommendations that instructors can implement in any course.”

CBE Life Sci Educ June 1, 2021 20:fe3

https://doi.org/10.1187/cbe.20-12-0289


Using Ungrading and Metacognition to Foster “Becoming a Learner”

by Matt Recla, PhD, Associate Director of University Foundations at Boise State

This is the 3rd post in the Guest Editor Series, Metacognition, Writing, and Well-Being, Edited by dawn shepherd, PhD, Ti Macklin, PhD, and Heidi Estrem, PhD

Becoming a Learner

When I started teaching a required first-year course years ago, faculty were encouraged to include Matthew Sanders’ small text, Becoming a Learner. Though it seemed a distraction from the “real” content of my course, I dutifully added the text. It makes a simple, compelling argument that students should strive to be active learners rather than passive students, exposing common misconceptions about a college education and suggesting helpful corrections. I paired the text with a short assignment asking students to craft three learning goals for the semester, including at least one for our course and at least one for their learning journey more broadly.

I was surprised by the overwhelmingly positive reactions from students. Though assigned at the beginning of the semester, in their reflections on the course months later students still made comments like the following: “I learned so much about myself and what to improve on.” “It really set the tone for the rest of the class.” “It really changed my perspective on how I view my college education.” A few even claimed it was the most valuable part of the course! Reflecting on their past learning experiences and considering concrete goals provided a tool to gain purchase on their educational journey.

I began to wonder, though, whether my teaching techniques and assignments throughout the rest of the course were in harmony with the message of becoming a learner. Sanders exhorts students to be creative and courageous in order to learn (14, 42). Was I helping students do that, or was I penalizing them if they took a risk? He encourages critical thinking and emphasizes the interconnectedness of learning (15, 35). Was I providing opportunities to make those connections, to reflect on the impact of their learning? These reflections led me to further opportunities for student metacognition. I made two additional changes that, in offering students a greater sense of empowerment in their education, I hope also contribute to their sense of well-being.

Ungrading

The first change was ungrading, which to my mind was the natural complement to a first-year required course that promotes taking charge of your education. (There are many different ways to ungrade; I was initially guided by Hacking Assessment, and have since benefitted from the edited volume, Ungrading.) I’ve landed for now on a system where students receive no grades until the end of the course. They receive significant feedback on each assignment (based on Mark Barnes’ SE2R feedback approach) from me or a teaching assistant and have unlimited opportunities to revise and resubmit their work. We meet individually with each one of our 100 students at midsemester to hear about their progress and tackle any ongoing challenges. We meet again at semester’s end, and students explain the grade they believe they’ve earned. At least nine times out of ten they assess themselves just as we (instructors) would. When there appear to be gaps in the student’s self-assessment, we have a slightly longer conversation to understand (and rarely, suggest possible corrections to) their rationale.

I have come to see ungrading as part of my own well-being as an educator, as it appropriately shares responsibility for a student’s grade with the student. Students are well-positioned to evaluate their performance if I trust them to do so and let them practice. There is a learning curve, and it can at first be frustrating for students who (like me as a student) are used to finding out “what the teacher wants.” If embraced, though, it encourages more authentic engagement with learning for most students. Their reflections suggest this augments a feeling of ownership of their education.

Metacognitive Reflection

The second change I adopted is to have students write or record a brief metacognitive reflection along with every major assignment. (My first and last assignments are themselves reflections on their experience, so I don’t assign a reflection on their reflection. That gets confusing for everyone!) The prompt for this brief addendum asks students to think about successes and challenges, both internal and external. (I’ve lost track of the original source for this idea, but I’m grateful!) I show these four areas in a quadrant and invite students to respond to at least one prompt in each area:

Internal Successes

●   What did I do to achieve success on this assignment?

●   What did I learn from this assessment (in terms of content, skills, and/or about myself)?

External Successes

●   What parts of the assignment worked well for me? Why?

●   Where do I think I did best on the assignment or what portion am I particularly proud of?

●   Which assignment standards did I meet or exceed? Why do I think so?

Internal Challenges

●   What challenges did I face while completing the assignment (outside the assignment itself)?

●   How did I overcome those challenges?

●   What do I plan to do differently next time as a result?

External Challenges

●   What parts of the assignment were most challenging for me to understand? Why?

●   How did I overcome those challenges?

●   Which assignment standards did I not meet? Why?

Students reflect honestly on their challenges and modestly on their successes. They may already do this internally as they complete their work, but taking the time to record it reinforces that intuitive reflection and reveals the interconnectedness of their learning. The reflections often provide helpful context for their work, which may be affected by any number of factors. In most cases I can affirm their self-assessment and suggest other small shifts as needed. The opportunity for intentional, transparent reflection has induced some “aha!” moments. I’ve seen many students follow through with changes in their time management for future assignments or double down on areas of skill uncovered in reflection, which, because it is self-generated rather than imposed, increases their sense of self-efficacy.

Teaching in a COVID (and post-COVID) world

Although I incorporated both of these practices before the global disruptions of the last couple of years, I’ve found that both ungrading and metacognitive reflection lend themselves well to teaching in a world unmoored by a pandemic. In the fall semester of 2020 we could see the impacts of a dramatic disruption in students’ learning as they transitioned from in-person to primarily or completely virtual instruction. Those impacts have become more pronounced each year since. The flexible design of my course adapts to the needs and abilities students bring when they enter, and it means that their grade isn’t ruined because they missed something due to unforeseen circumstances.

As they complete assignments and reflect on their progress, I can see them wrestle with the challenges of my course while simultaneously managing their other courses and the numerous obligations of adulthood. When they reflect at the end of the semester and assign themselves a grade, I can see how they comprehensively assess what this small piece of their growth as learners has added up to. I am privileged to work with students with a variety of different experiences and perspectives, and if my classroom provides a space where they can reflect on where they are and continue the lifelong process of becoming learners, I feel that I’ve boosted their well-being and not hindered their journey.

References:

Blum, S. D. (2020). Ungrading: Why Rating Students Undermines Learning (and What to Do Instead). West Virginia University Press.

Sackstein, S. (2015). Hacking assessment: 10 ways to go gradeless in a traditional grades school. Times 10 Publications.

Sanders, M. L. (2018). Becoming a learner: Realizing the opportunity of education. Macmillan Learning Curriculum Solutions.

 


Student Well-Being Through Reflection and Metacognition in a First-Year Writing Course

by Ti Macklin, PhD, Department of Writing Studies Lecturer; Lilly Crolius, graduate student (Texas A&M University-Commerce); Harland Recla, first-year writing student; and Natalie Plunkett, first-year writing student, Boise State University.

This is the 2nd post in the Guest Editor Series, Metacognition, Writing, and Well-Being, Edited by dawn shepherd, PhD, Ti Macklin, PhD, and Heidi Estrem, PhD

——–


In the summer of 2020, it was clear that business as usual was not going to work, either for preparing graduate teaching assistants (GTAs) to teach first-year writing (FYW) or for the FYW students entering Boise State University. Students would likely be coming to class still in pandemic isolation, and their needs would be unlike anything we had experienced as teachers. As FYW administrators, dawn shepherd and Heidi Estrem worked with Ti Macklin (an experienced instructor and teacher of the GTA pedagogy course) to develop a fully online course specifically designed to support both of these student populations.

This blog post examines the experiences of Ti Macklin, Lilly Crolius (graduate student and teaching assistant in the course), Harland Recla (FYW student), and Natalie Plunkett (FYW student).

Metacognition Through Reflection

For Ti, building the online FYW course, English 101 (ENGL 101), centered on metacognition as a means of supporting the well-being of all of the students involved in the course, both graduate and undergraduate. Her pedagogy focuses on the notion that improvement as writers comes from self-awareness, so reflection was built into every module of the course, with students working to answer the overarching question: how do we improve as writers instead of simply improving individual pieces of writing? The deliberately reflective elements of the course are highlighted below:

Four Unit Course Structure

●      Unit 1 – reflect on who they are as writers/what their relationship is with writing

●      Unit 2 – reflect on how they became the writers they are

●      Unit 3 – reflect on and identify the transferable skills they developed through the course

●      Unit 4 (the final portfolio) – reflect on their learning in the course, examine their growth as writers and students, and look ahead to the next FYW course

Weekly Self-Assessment Journal

Students reflect at the end of each module on what was most helpful, what they learned, what they’re struggling with, and what adjustments they might make for future modules.

Weekly Writing and Rhetoric Activity

Students are introduced to a new concept/term each week and, at the end of the lesson, are asked to reflect on how they might use this concept in the class and outside of the class.

Weekly Course Readings and Discussion Boards

Course readings are designed to encourage students to reflect on their own writing processes, literacy experiences, and experiences with transfer.

Final Portfolio Reflection

Students reflect on the culminating activity at the end of each unit and consider what changes they would make if they were to include this piece in the final portfolio.

All three students (graduate teaching assistant Lilly, Harland, and Natalie) were unaware of the concept of metacognition at the beginning of the semester. However, as the semester went on, they all realized that the focus on reflection was impacting both their writing and their well-being. Lilly noticed changes in the FYW students’ writing as the semester progressed. As they examined themselves and their abilities, encouraged by assignments that made room for creativity and personal interests, their writing showed evidence that they could see connections between writing for our class and other situations, both academic and non-academic, cementing metacognition as a transferable learning and life skill.

The FYW students’ experiences were similar to Lilly’s. Harland began to understand the concept of metacognition about mid-semester when he realized that dedicating a large amount of time to reflection wasn’t something he was accustomed to, so it began to stand out as the course went on. Natalie found that, because the instruction on metacognition was subtle, it took a few weeks for students to fully understand that they were consistently doing metacognitive work whether they realized it or not.

Harland and Natalie also recommend making metacognition even more explicit, even though it would mean adding more terminology to the course. Harland suggests that describing the purpose of metacognition in the course would demonstrate to students that metacognition can yield helpful adjustments in both learning and behavior, making the concept and its function more obvious. Natalie adds that pointing out the overtly metacognitive work students did at the end of each module, in addition to the subtle work throughout the module, would make this deeply reflective and challenging work seem much more manageable and possible.

Metacognition and Well-Being

When asked how their learning/thinking/writing processes changed as a result of ENGL 101, all three students indicated that their well-being was positively impacted. For example, for Harland, this style of learning shifted his life outside of class because he spent time reflecting upon the methods he used to think and learn. He specifically noticed that the metacognitive focus of the course boosted his well-being as it gave him a sense of control over the knowledge he absorbs.

Likewise, for Natalie, the focus on metacognition impacted her well-being by fostering and supporting her self-confidence in her writing skills and ideas. This boost largely came from when she realized that she was thinking of the concepts in the ENGL 101 course in her spare time and found herself applying them to other courses and areas of her life.

For Lilly, the experience as a graduate instructor within this class and learning about these ideas encouraged her to apply her learning in much more thoughtful ways. It made learning more engaging and highlighted how meaningful and valuable it could be, giving her clarity. Her job as a GTA became less stressful once she realized that there was a clear purpose for everything done in the English 101 class that she could use elsewhere.

All three students believe that their writing processes evolved significantly as a result of the course. Natalie went from rushing to finish assignments by the due date to soaking in what was being taught, feeling more fulfilled and confident in her learning. Harland echoes this and adds that he became more comfortable with his own process, which boosted his overall well-being.

As a student, Lilly found that reflection helped her to see that her writing was part of a bigger picture. No matter what is being written, she felt like there was a place for it in the world. Even if it never sees the light of day, it’s an opportunity for improvement and growth.

The Takeaway

In the midst of the isolation of the pandemic, people were turning inward and hiding from the world, which became a cycle of solitude and stagnation. The consistent reflective opportunities of this English 101 course introduced and amplified the notion of metacognition, thus pulling both the GTAs and the FYW students back into the world. By reflecting on themselves as writers, these students were able to connect to various places in their lives where they hadn’t previously made associations.

It is worth noting that this productive introspection took place in a class of 300 students in an asynchronous, fully online course. The students worked in semester-long groups of 10, each with an assigned GTA, in order to provide as much educational and human support to both the GTAs and the FYW students as possible. The reflection that Heidi, dawn, and I mention in the introduction to this blog series allowed us to rethink the size and shape of FYW classes while holding on to the essential elements of the course, like metacognition, that make a class a writing class. 

Student Readings on Reflection, Metacognition, & Transfer

Allen, Sarah. “The Inspired Writer vs. the Real Writer.” Writing Spaces: Readings on Writing, Volume 1, https://writingspaces.org/essays

Brandt, Deborah. “Sponsors of Literacy.” College Composition and Communication, vol. 49, no. 2, 1998, pp. 165–185. JSTOR, www.jstor.org/stable/358929. Accessed 13 July 2020.

Carillo, Ellen C. “Writing Knowledge Transfers Easily.” Bad Ideas About Writing, edited by Cheryl E. Ball and Drew M. Loewe, pp. 34-37.

Driscoll, Dana Lynn and Roger Powell. “States, Traits, and Dispositions: The Impact of Emotion on Writing Development and Writing Transfer Across College Courses and Beyond.” Composition Forum, vol. 34, 2016. Accessed 21 July 2020.

Rose, Mike. “Rigid Rules, Inflexible Plans, and the Stifling of Language: A Cognitivist Analysis of Writer’s Block.” College Composition and Communication, vol. 31, no. 4, 1980, pp. 389–401. JSTOR, www.jstor.org/stable/356589. Accessed 1 June 2020.

Rosenberg, Karen. “Reading Games: Strategies for Reading Scholarly Sources.” Writing Spaces: Readings on Writing, Volume 2, https://writingspaces.org/essays

Robertson, Liane, Kara Taczak, and Kathleen Blake Yancey. “Notes toward A Theory of Prior Knowledge and Its Role in College Composers’ Transfer of Knowledge and Practice.” Composition Forum, vol. 26, 2012. Accessed 21 July 2020.

Tomlinson, Barbara. “Cooking, Mining, Gardening, Hunting: Metaphorical Stories Writers Tell about Their Composing Processes.” Metaphor & Symbolic Activity, vol. 1, no. 1, Mar. 1986, p. 57.


Teaching and Learning Writing Together in a Pandemic

by dawn shepherd, PhD, Ti Macklin, PhD, and Heidi Estrem, PhD, Boise State University

This is the 1st post in the Guest Editor Series, Metacognition, Writing, and Well-Being, Edited by dawn shepherd, PhD, Ti Macklin, PhD, and Heidi Estrem, PhD

———–

It is not hyperbolic to say that “The Pause and The Pivot” of March 2020 has irrevocably changed the three of us (dawn, Ti, and Heidi). In particular, the pause, pivot, and subsequent rethinking of nearly every aspect of our professions has deeply affected how we approach our colleagues and the classroom. The three of us have extensive experience administering large first-year writing programs, as well as decades of teaching behind us. Still, the unprecedented changes brought about by the pandemic shook loose many of our previously held beliefs about quality writing instruction.

Throughout this intensive and extended pandemic period, the three of us have met regularly to commiserate, plan courses, brainstorm ways to support our first-year writing students and instructors, and develop new approaches to teaching. Our collegial, challenging, and deeply supportive professional conversations have enabled us to use the unsettled ground of this time period to prompt new growth for all of us. This professional growth has, in turn, enabled us to develop pragmatic and humane classrooms and relationships with colleagues. To be sure, we would have said our classrooms were humane prior to 2020, and they were – but we attend to the well-being of self, colleagues, and students now like we never have before. One of our richest strategies for calling attention to well-being is through metacognitive discussions that take place in our co-writing and collaborative pedagogical work.

Metacognition has long been recognized as a deeply valuable and critically important practice for first-year writing students and for learning about writing more generally (Hayes, Jones, Gorzelsky, and Driscoll 2018). Indeed, one of the most important aspects of a rich first-year writing course is not only content about writing and practice doing writing but also extensive reflective work on how, when, and why writing changes across contexts (see Gorzelsky, Driscoll, Jones and Hayes 2016; see also Moore and Anson 2016). It is in the thinking about writing that novice writers gain sensitivity to changing rhetorical demands. So, as the three of us have collaborated over the past three years, employing these reflective practices ourselves has been fundamentally important. As program directors (dawn and Heidi) and innovative course designers (dawn and Ti), and as colleagues and friends (all three of us), we constantly and critically approached all of our curricular and pedagogical practices through a lens of metacognition and with a steady eye on making decisions that promote well-being.

This has been layered, intensive, and exhausting work. It has also been one of the richest periods of growth and collaboration of our professional lives. In brief, here are some of the grounding principles we returned to and perspectives that enabled us to thrive in these times:

  • We can enable, enact, and model healthy decisions. As program directors, dawn and Heidi were keenly aware of the need to encourage healthier work-life choices and sought to make that encouragement explicit in crisis times. It was top of mind for us to encourage our colleagues – to give them permission – to scale back assignments, to cull their courses for anything that wasn’t essential, and to honor their need for breaks in fully online/remote semesters. Our approach to leadership has always been reflective, iterative, and in service to others. We also tend to work more than we should. This moment required us to enact healthy decisions related to our own workload and self-care, serving as a model for others as well. We quickly set up Google Drive folders for sharing ideas for moving online in late spring 2020 to encourage informal collaboration, and we sent regular emails throughout the pandemic designed both to acknowledge the deep challenges of teaching in this time and to offer hope and strategies for instructors.
  • We can change course. We all learned to be differently flexible in this time period, and meeting regularly to check in with each other helped us make visible things that were and weren’t working – and that might need to be adjusted. For example, the three of us were excited about a potential second course innovation for the spring 2022 semester. But as the fall unfolded, we realized together that it wasn’t the right semester for it. So, we adjusted. And let go.
  • We can learn to live and even thrive in an environment of productive discomfort. Nothing felt comfortable in 2020-2022. We know that learning is uncomfortable, and we strive to help our students remain resilient when things are hard. During this period we were also forced to face both productive discomfort and trauma, experiencing them in our own lives and witnessing them in the lives of colleagues and students. In teaching and learning environments, as in workplaces, we don’t always distinguish between the two. Discomfort can bring growth.

With these ideas in mind, we brought together a number of other colleagues who have also been thinking deeply about the interplay of writing, well-being, and cognition. In the next post, Ti Macklin and three students from her Fall 2021 first-year writing course examine their experiences with a metacognitively-focused English 101 course. Lilly Crolius (graduate student and teaching assistant in the course), Harland Recla (first-year writing student), and Natalie Plunkett (first-year writing student) provide insight into the student experience by discussing how reflection and a focus on transferable writing skills impacted their well-being.

The third post, written by Matt Recla, Associate Director of University Foundations at Boise State, discusses how reflective practices and assessment improved his students’ sense of self-efficacy and well-being. He specifically details how incorporating “ungrading” and metacognitive reflection practices into his required first-year course provides students with a framework to see themselves as life-long learners.

The series ends with a final post from a former Boise State University undergraduate student, Mariah Kidd, who explains how reflective journaling helped her to track her growth as a writer throughout her undergraduate career.

Works Cited

Hayes, Carol, Ed Jones, Gwen Gorzelsky, and Dana Driscoll. “Adapting Writing About Writing: Curricular Implications of Cross-Institutional Data from the Writing Transfer Project.” WPA: Writing Program Administration, vol. 41, no. 2, Spring 2018, pp. 65-88.

Gorzelsky, Gwen, Dana Lynn Driscoll, Joe Paszak, Ed Jones, and Carol Hayes, “Cultivating Constructive

Metacognition: A New Taxonomy for Writing Studies,” in Critical Transitions: Writing and the Question of Transfer, eds Jessie Moore and Chris Anson, Utah State University Press, 2016.

Moore, Jessie and Chris Anson, Critical Transitions: Writing and the Question of Transfer, eds Jessie Moore and Chris Anson, Utah State University Press, 2016.

 


Using Metacognition to Scaffold the Development of a Growth Mindset

by Lauren Scharff, PhD, U. S. Air Force Academy,*
Steven Fleisher, PhD, California State University,
Michael Roberts, PhD, DePauw University

Conceptually, it seems simple: inform students about the positive power of having a growth mindset, and they will shift to having one.

If only it were that easy!

Black silhouette of a human head with colored neurons inside it
Image by Gordon Johnson from Pixabay

In reality, even if we (humans) cognitively know something is “good” for us, we may struggle to change our ways of thinking, behaving, and reacting emotionally because those have become habits. However, rather than throw up our hands and give up because change is challenging, in this blog we will model a growth mindset by offering a new strategy to facilitate the transition to a growth mindset. The strategy involves metacognitive reflection, specifically the use of awareness-oriented and self-regulation-oriented questions for both students and instructors.

Mindset Overview

To get us all on the same page, let’s first examine “mindset,” a term coined by Carol Dweck (2006). This concept proposes that individuals internalize ways of thinking about their abilities related to intelligence, learning, and academics (or any other skill). These beliefs become internalized based on years of living and hearing commentary about skills (e.g., She’s a born leader! or, You’re so smart! or, They are natural math whizzes!). These internalized beliefs subsequently affect our responses and performance related to those skills.

According to Dweck and others, people fall along a continuum (Figure 1) that ranges from having a fixed mindset (“My skills are innate and can’t be developed”) to having a growth mindset (“My skills can be developed”). Depending on a person’s beliefs about a particular skill, they will respond in predictable ways when the skill requires effort, when it seems challenging, when effort affects performance, and when feedback informs performance. The two-part mindset blog posts in Ed Nuhfer’s guest series (Part 1 and Part 2, 2022) provide evidence that the feedback component is especially influential.

diagram showing the opposite nature of fixed and growth mindsets with respect to how people view effort, challenge, failure, and feedback

Figure 1. Fixed – growth mindset tendencies. (From https://trainugly.com/portfolio/growth-mindset/)

Metacognition to Support Change

As the opening to this blog pointed out, simply explaining the concept of mindset and the benefits of growth mindset to students is not typically enough to lead students to actually adopt a growth mindset. This lack of change is likely even if students say they see the benefits and want to shift to a greater growth mindset. Thus, we need a process to scaffold the change.

We believe that metacognition offers a process by which to do this. Metacognition not only helps us examine our beliefs, but also guides our subsequent behaviors. More specifically, we believe metacognition involves two key processes: 1) awareness, often gleaned through reflection, and 2) self-regulation, during which a person uses that awareness to adjust their behaviors as needed in order to achieve their targeted goal.

Much research (e.g., Isaacson & Fujita, 2006) has already documented the benefits of students being metacognitive about their learning processes. However, we haven’t seen any other work focus on being metacognitive about one’s mindset.

Further, we know that efforts to develop skills are often more successful when they are more narrowly targeted on specific aspects of a broader construct (e.g., Heft & Scharff, 2017). Thus, rather than encouraging students to simply adopt a general “growth mindset,” or be metacognitive about their general mindset for a task, it would be more productive to target how they think about and respond to the specific component aspects of mindset for that task (e.g., challenge, feedback, failure).

Promoting a Growth Mindset Via Metacognition

Below we offer some example metacognitive reflection questions for students and for instructors that focus on awareness and self-regulation related to the feedback component of mindset. For the full set of questions that target all of the mindset components, please go to our full Mindset Metacognition Questions Resource.

We chose to highlight the component of feedback due to Nuhfer et al.’s findings reported in the 2022 guest series. By targeting specific aspects of mindset, such as feedback, students might more effectively overcome patterns of thinking that keep them stuck in a fixed mindset.

We also include metacognitive reflection questions for instructors because they are instrumental in establishing a classroom environment that either supports or inhibits growth mindset in students. Instructors’ roles are important – recent research has demonstrated that instructor mindset about student learning abilities can impact student motivation, belongingness, engagement, and grades (Muenks et al., 2020). Yeager et al. (2022) additionally showed that mindset interventions for students had more impact when the instructors also displayed growth mindsets. Thus, we suggest that instructors examine their own behaviors and how those behaviors might discourage or encourage a growth mindset in their students.

Student Questions Related to Feedback

  • (Self-assessment/awareness) How am I thinking about and responding to feedback that implies I need to make changes or improve?
  • (Self-assessment/awareness) How am I interacting with the instructor in response to feedback? (emotional regulation; comfort versus frustration)
  • (Self-regulation) How do I plan to respond to feedback I have / will receive?
  • (Self-regulation) How might I reasonably seek feedback from peers or the instructor when more is needed?

Instructor Questions Related to Feedback

  • (Self-assessment/awareness) Are students using my feedback? Are there aspects of content or tone of feedback that may be interacting with students’ mindsets?
  • (Self-assessment/awareness) Am I appropriately focusing my feedback on student performance (e.g., meeting standards) rather than on students themselves (e.g. their dispositions or aptitudes)?
  • (Self-regulation) When a student approaches me with a question, what do I signal via my demeanor? Am I demonstrating that engaging with feedback can be a positive experience?
  • (Self-regulation) What formative assessments might I develop to provide students feedback about their progress and learn to constructively use that feedback to support their growth?

Take-aways and Future Directions

We believe the interconnections between mindset and metacognition can go beyond the use of metacognition to examine aspects of one’s mindset. Students can be metacognitive about the learning process itself, which can interact with mindset by providing realizations that adapting one’s learning strategies can promote success. The belief that one can try new strategies and become more successful is a hallmark of growth mindset.

We hope that you utilize the questions above for yourself and your students. Given the lack of research in this area, your efforts could make a contribution to the larger understanding of how to effectively promote growth mindset in students. (If you investigate, let us know, and we would welcome a blog post so you could share your results.) At the very least, such efforts might help students overcome patterns of thinking that keep them stuck in a fixed mindset, and it might help them more effectively cope with the inevitable challenges that they will face, both in and beyond the academic realm.

References

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Random House.

Heft, I. & Scharff, L. (July 2017). Aligning best practices to develop targeted critical thinking skills and habits. Journal of the Scholarship of Teaching and Learning, Vol 17(3), pp. 48-67. http://josotl.indiana.edu/article/view/22600 

Isaacson, R.M. & Fujita, F. (2006). Metacognitive knowledge monitoring and self-regulated learning: Academic success and reflections on learning. Journal of the Scholarship of Teaching and Learning, Vol 6(1), 39-55. Retrieved from https://eric.ed.gov/?id=EJ854910

Muenks, K., Canning, E. A., LaCosse, J., Green, D. J., Zirkel, S., Garcia, J. A., & Murphy, M. C. (2020). Does my professor think my ability can change? Students’ perceptions of their STEM professors’ mindset beliefs predict their psychological vulnerability, engagement, and performance in class. Journal of Experimental Psychology: General, 149(11), 2119-2144. http://dx.doi.org/10.1037/xge0000763

Yeager, D.S., Carroll, J.M., Buontempo, J., Cimpian, A., Woody, S., Crosnoe, R., Muller, C., Murray, J., Mhatre, P., Kersting, N., Hulleman, C., Kudym, M., Murphy, M., Duckworth, A.L., Walton, G.M., & Dweck, C.S. (2022). Teacher mindsets help explain where a growth-mindset intervention does and doesn’t work. Psychological Science, 33(1), 18-32. https://journals.sagepub.com/doi/abs/10.1177/09567976211028984

* The views expressed in this article, book, or presentation are those of the author and do not necessarily reflect the official policy or position of the United States Air Force Academy, the Air Force, the Department of Defense, or the U.S. Government.


Guest Edited Series on Self-Assessment: Synthesis

by Ed Nuhfer, California State Universities (retired)

Self-assessment is a metacognitive skill that employs both cognitive competence and affective feelings. After over two decades of scholars’ misunderstanding, misrepresenting, and deprecating self-assessment’s value, recognition of self-assessment as valid, measurable, valuable, and connected to a variety of other beneficial behavioral and educational qualities is finally happening. The opportune time to educate for strengthening that ability is now. We synthesize this series into four concepts to address when teaching self-assessment.

Image of a face silhouette watching a schematic of a man interfacing with mathematical symbols and a human brain
Image by Gerd Altmann from Pixabay

Teach the nature of self-assessment

Until recently, decades of peer-reviewed research popularized a misunderstanding of self-assessment as described by the Dunning-Kruger effect. The effect portrayed the natural human condition as one in which most people overestimate their abilities and lack the ability to recognize that they do so, the least competent are the most egregious offenders, and only the most competent can self-assess accurately.

From its founding to the present, that portrayal relied on mathematics that statisticians and mathematicians now recognize as specious. Behavioral scientists can no longer argue for “the effect” by invoking the unorthodox quantitative reasoning used to propose it. Any salvaging of “the effect” requires different mathematical arguments to support it.

Quantitative approaches confirm that a few percent of the populace are “unskilled and unaware of it,” as described by “the effect.” However, these same approaches affirm that most adults, even when untrained for self-assessment accuracy, are generally capable of recognizing their competence or lack thereof. Further, they overestimate and underestimate with about the same frequency.

Like the development of higher-order or “critical” thinking, the capacity for self-assessment accuracy develops slowly with practice, more slowly than required to learn specific content, and through more practice than a single course can provide. Proficiency in higher-order thinking and self-assessment accuracy seem best achieved through prolonged experiences in several courses.

During pre-college years, a deficit of relevant experiences produced by conditions of lesser privilege disadvantages many new college entrants relative to those raised in privilege. However, both the Dunning-Kruger studies and our own (https://books.aosis.co.za/index.php/ob/catalog/book/279 Chapter 6) confirm that self-assessment accuracy is indeed learnable. Those undeveloped in self-assessment accuracy can become much more proficient through mentoring and practice.

Teach the importance of self-assessment

As a nation that must act to address severe threats to well-being, such as healthcare, homelessness, and climate change, we have rarely been so incapacitated by polarization and bias. Two early entries on bias in this guest-edited series explained bias as a ubiquitous survival mechanism in which individuals relinquish self-assessment to engage in modern forms of tribalism that marginalize others in our workplaces, institutions, and societal cultures. Marginalizing others prevents holding the needed consensus-building conversations between diverse groups that bring creative solutions and needed action.

Relinquishing metacognitive self-assessment to engage in bias obscures perceiving the impacts and consequences of what one does. Developing the skill to exercise self-assessment and use evidence, even under peer pressure not to do so, seems a way to retain one’s perception and ability to act wisely.

Teach the consequences of devaluing self-assessment

The credibility “the effect” garnered as “peer-reviewed fact” helped rationalize public tolerance of bias and support for hierarchies of privilege. A quick Google® search of the “Dunning-Kruger effect” reveals widespread misuse to devalue and taunt diverse groups of people as ignorant, unskilled, and inept at recognizing their deficiency.

Underestimating and disrespecting other people’s abilities is not simply innumerate and dismal; it cripples learning. Subscribing to the misconception disposes the general populace to distrust themselves and others who merit trust, and to dismiss implementing or even respecting effective practices developed by others presumed to be inferiors. It discourages reasoning from evidence and promotes unfounded deference to “authority.” Devaluing self-assessment encourages individuals to relinquish their autonomy to self-assess, which weakens their ability to resist being polarized by demagogues into embracing bias.

Teach self-assessment accuracy

As faculty, we have frequently heard the proclamation “Students can’t self-assess.” Sadly, we have yet to hear that statement confronted by, “So, what are we going to do about it?”

Opportunities exist to design learning experiences that develop self-assessment accuracy in every course and subject area. Knowledge surveys, assignments with required self-assessments, and post-evaluation tools like exam wrappers offer straightforward ways to design instruction to develop this accuracy.

Given the current emphasis on the active learning structures of groups and teams, teachers easily mistake these for the sole domains of active learning and deprecate studying alone. Interactive engagements are generally superior to the conventional structure of lecture-based classes for cognitive mastery of content and skills. However, these structures seldom empower learners to develop affect or to recognize the personal feelings of knowing that come with genuine understanding. Those feelings differ from feelings that rest on shallow knowledge, which often launch the survival mechanism of bias at critically inopportune times.

Interactive engagement for developing cognitive expertise differs from the active engagement in self-assessment needed to empower individuals to direct their lifelong learning. When students use quiet reflection time alone to practice self-assessment, enlisting their understanding of content to deepen their knowing of self, this too is active learning. The ability to distinguish the feeling of deep understanding requires repeated practice in such reflection. We contend that active learning design that attends to both cognition and affect is superior to design that attends to only one of these.

To us, John Draeger was particularly spot-on in his IwM entry, recognizing that instilling cognitive knowledge alone is insufficient as an approach for educating students or stakeholders within higher education institutions. Achievement of successful outcomes depends on educating for proficiency in both cognitive expertise and metacognition. In becoming proficient in controlling bias, “thinking about thinking” must include attention to affect to recognize the reactive feelings of dislike that often arise when confronting the unfamiliar. These reactive feelings are probably unhelpful to the further engagement required to achieve understanding.

The ideal educational environment seems one in which stakeholders experience the happiness that comes from valuing one another during their journey to increase content expertise while extending the knowing of self.


Knowledge Surveys Part 2 — Twenty Years of Learning Guiding More Creative Uses

by Ed Nuhfer, California State Universities (retired)
Karl Wirth, Macalester College
Christopher Cogan, Memorial University
McKensie Kay Phillips, University of Wyoming
Matthew Rowe, University of Oklahoma

Early adopters of knowledge surveys (KSs) recognized the dual benefits of the instrument to support and assess student learning produced by a course or program. Here, we focus on a third benefit: developing students’ metacognitive awareness through self-assessment accuracy.

Communicating self-assessed competence

Initially, we just authored test and quiz questions as the KS items. After the importance of the affective domain became more accepted, we began stressing affect’s role in learning and self-assessment by writing each knowledge survey item with an overt affective self-assessment root such as “I can…” or “I am able to…” followed by a cognitive content outcome challenge. When explaining the knowledge survey to students, we focus their attention on the importance of these affective roots for when they rate their self-assessed competence and write their own items later.

We retain the original three-item response scale expressing relative competence as no competence, partial competence, and high competence. Research reveals three-item scales to be as valid and reliable as longer ones, and we prefer the shorter scale because it promotes addressing KS items well. Once participants comprehend the meaning of the three choices and realize that the choices are identical for every item, they can focus on each item and rate their authentic feeling about meeting the cognitive challenge without being distracted by more complex response options.

photo of woman facing a black board with the words "trust yourself"
Image by Gerd Altmann from Pixabay

We find that the most crucial illumination of a student’s self-assessment dilemma, “How do I know when I can rate that I can do this well?”, is “When I know that I can teach another person how to meet this challenge.”

Backward design

We favor backward design to construct topical sections within a knowledge survey by starting with the primary concept students must master when finally understanding that topic. Then, we work backward to build successive items that support that understanding by constantly considering, “What do students need to know to address the item above?” and filling in the detail needed. Sometimes we do this down to the definitions of terms needed to address the preceding items.

Such building of more detail and structure than we sensed might be necessary, especially for introductory-level undergraduates, is not “handing out the test questions in advance.” Instead, this KS structure uses examples to show how seemingly disconnected observations and facts, once connected, yield the unifying meaning of a “concept.” Conceptual thinking enables transferability and creativity when habits of mind develop that dare to attempt “outrageous connections.”

The feeling of knowing and awareness of metadisciplinary learning

Students learn that convergent challenges that demand right versus wrong answers feel different from divergent challenges that require reasonable versus unreasonable responses. Consider learning “What is the composition of pyrite?” and “Calculate the area of a triangle with a height of 50 meters and a base of 10 meters.” Then, contrast the feeling required to learn, “What is a concept?” or “What is science?”
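The triangle item, for instance, is convergent, with exactly one right answer (reading the 50 meters as the triangle’s height):

```latex
% Area of a triangle with base b = 10 m and height h = 50 m
A = \tfrac{1}{2}\,b\,h = \tfrac{1}{2}(10\,\mathrm{m})(50\,\mathrm{m}) = 250\,\mathrm{m}^2
```

A learner either produces 250 square meters or does not; the “feeling of knowing” for such items differs sharply from that for divergent questions.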

The “What is science?” query is especially poignant. Teaching specialty content in units of courses and their accompanying college textbooks essentially bypasses teaching the significant metadisciplinary ways of knowing of science, humanities, social science, technology, arts, and numeracy. Instructors like Matt Rowe design courses to overcome that bypassing and strive to focus on this crucial conceptual understanding (see video section at times 25.01 – 29.05).

Knowledge surveys written to overtly provoke metadisciplinary awareness aid in designing and delivering such courses. For example, ten metadisciplinary KS items for a 300-item general geology KS appeared at its start, two of which follow.

  1. I can describe the basic methods of science (methods of repeated experimentation, historical science, and modeling) and provide one example of each method’s application in geological science.
  2. I can provide two examples of testable hypothesis statements and one example of an untestable hypothesis.

Students learned that they would develop the understanding needed to address all ten items throughout the course. The presence of the items in the KS ensured that the instructor did not forget to support that understanding. For ideas about varied metadisciplinary outcomes, examine this poster.

Illuminating temporal qualities

Because knowledge surveys establish baseline data and collect detailed information through an entire course or program, they are practical tools from which students and instructors can gain an understanding of qualities they seldom consider. Temporal qualities include magnitudes (How great?), rates (How quickly?), duration (How long?), order (What sequence?), frequency (How often?), and patterns (What kind?).

More specifically, knowledge surveys reveal magnitudes (How great were the changes in learning?), rates (How quickly did we cover material relative to how well we learned it?), duration (How long was needed to gain an understanding of specific content?), order (What learning should precede other learning?), and patterns (Does all understanding come slowly and gradually, or does some arrive as punctuated “Aha!” moments?).

Knowledge survey patterns reveal how easily we underestimate the effort needed to teach for significant learning change. A typical pattern in item-by-item arrays of pre- and post-course knowledge surveys is a high correlation between the two. Instructors may find it challenging to turn the troughs of lowest confidence in pre-course knowledge surveys into peaks of high confidence in post-course surveys. Success requires attention to frequency (repetition with take-home drills), duration (extending assignments that address difficult content with more time), order (optimizing the sequence of learning material), and likely switching to more active learning modalities, including having students author their own drills, quizzes, and KS items.
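As a rough sketch of the item-by-item comparison described above (the ratings below are invented for illustration, not real survey data), one could compute the pre/post correlation and locate the remaining confidence troughs:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length rating arrays."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5  # root sum of squared deviations
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical item-by-item ratings on the 3-point KS scale
# (0 = no competence, 1 = partial competence, 2 = high competence).
pre = [0, 1, 0, 2, 1, 0, 1, 2]
post = [1, 2, 1, 2, 2, 1, 2, 2]

r = pearson(pre, post)
# Items still rated lowest after the course mark content needing redesign.
troughs = [i for i, p in enumerate(post) if p == min(post)]
print(round(r, 2), troughs)
```

Items that remain in the lowest confidence band after the course point to content whose frequency, duration, or order of practice may need rethinking.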

Studies in progress by author McKensie Phillips show that students were more confident with the material at the end of the semester than at the end of each individual unit. This observation held even for early units, where researchers expected confidence to decrease given the time elapsed between the end of the unit and the post-semester KS. The results indicate that knowledge mastery is cumulative: students intertwine material from unit to unit and practice metacognition by re-engaging with the KS to deepen understanding over time.

Student-authored knowledge surveys

Introducing students to KS authoring must start with a class knowledge survey authored by the instructor so that they have an example and can see the kinds of thinking used to construct a KS. Author Chris Cogan routinely tasks teams of 4-5 students with summarizing the content at the end of the hour (or week) by writing their own survey items for it. Typically, this requires about 10 minutes at the end of class. The instructor compiles the student drafts, looks for potential misconceptions, and posts the edited summary version back to the class.

Beginning students’ items often tend to be brief, too vague to answer, or too focused on the lowest Bloom levels. However, weekly feedback from the instructor has an impact: students become more able to write helpful survey items and – more importantly – better at acquiring knowledge from the class sessions. Authoring items begins to improve thinking, self-assessment, and justified confidence.

Recalibrating for self-assessment accuracy

Students with large miscalibrations in self-assessment accuracy should wonder, “What can I do about this?” Pre-exam knowledge survey data enables sophisticated post-exam reflection through exam wrappers (Lovett, 2013). With the responses to their pre-exam knowledge survey and the graded exam in hand, students can do a “deep dive” into the two artifacts to understand what to change.

Instructors can coach students to gain awareness of what their KS responses indicate about their mastery of the content. If large discrepancies between the responses to the knowledge survey and the graded exam exist, instructors query for some introspection on how these arose. Did students use their KS results to inform their actions (e.g., additional study) before the exam? Did different topics or sections of the exam produce different degrees of miscalibration? Were there discrepancies in self-assessed accuracy by Bloom levels?
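A minimal sketch of how such discrepancies might be tabulated during an exam wrapper (all topic names and scores below are hypothetical, and real wrappers are as easily done on paper or in a spreadsheet):

```python
# Hypothetical pre-exam KS self-ratings and exam scores, both rescaled
# to 0-100 and grouped by topic. A large positive gap signals
# overconfidence; a large negative gap signals underconfidence.
ks_rating = {"minerals": 90, "plate_tectonics": 60, "geologic_time": 85}
exam_score = {"minerals": 55, "plate_tectonics": 65, "geologic_time": 80}

gaps = {topic: ks_rating[topic] - exam_score[topic] for topic in ks_rating}
flagged = sorted(t for t, g in gaps.items() if abs(g) > 15)  # large miscalibration

print(gaps)     # per-topic miscalibration
print(flagged)  # topics warranting a concrete change in study behavior
```

Each flagged topic becomes a natural prompt for articulating one specific change in study behavior before the next exam.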

Most importantly, after conducting the exam wrapper analysis, students with significant miscalibration errors should each articulate one thing they will do differently to improve performance. Reminding students to revisit their post-exam analysis well before the next exam is helpful. IwM editor Lauren Scharff noted that her knowledge surveys and tests reveal that most psychology students gradually improved their self-assessment accuracy across the semester and more consistently used the surveys as an ongoing learning tool rather than just a last-minute knowledge check.

Takeaways

We construct and use surveys differently than when we began two decades ago. For readers, we provide a downloadable example of a contemporary knowledge survey that covers this guest-edited blog series and an active Google® Forms online version.

We have learned that mentoring for metacognition can measurably increase students’ self-assessment accuracy as it supports growing their knowledge, skills, and capacity for higher-order thinking. Knowledge surveys offer a powerful tool for instructors who aim to direct students toward understanding the meaning of becoming educated, becoming learning experts, and understanding themselves through metacognitive self-assessment. There remains much to learn.

 


Knowledge Surveys Part 1 — Benefits of Knowledge Surveys to Student Learning and Development

by Karl Wirth, Macalester College,
Ed Nuhfer, California State Universities (retired)
Christopher Cogan, Memorial University
McKensie Kay Phillips, University of Wyoming

Introduction

Knowledge surveys (KSs) present challenges like exam questions or assignments, but respondents do not answer them. Instead, they express their felt ability to address the challenges with their present knowledge. Knowledge surveys focus on self-assessment, a special kind of metacognition.

Overall, metacognition is a self-imposed internal dialogue that is a distinguishing feature of “expert learners,” regardless of discipline (e.g., Ertmer & Newby, 1996). Because not all students begin college equally aware of and capable of thinking about their learning, instructors must direct students to keep them in constant contact with their metacognition. Paul Pintrich, a pioneer in metacognition, stressed that “instruction about metacognition must be explicit.” Knowledge surveys enable what Ertmer & Newby and Pintrich advocate in any class on any subject.

road sign with words "data" pointing to words "information" pointing to word "knowledge" with the word "learning above
Image by Gerd Altmann from Pixabay

Knowledge surveys began in 1992 during a conversation about annual reviews between the guest editor and a faculty member who stated: “They never ask about what I teach.” Upon hearing this, the guest editor created a 200-item form to survey student ratings of their mastery of detailed content for his geology course at the start and end of the class. The items were simply an array of test and quiz questions, ordered in the sequence students would encounter them during the course. The students responded to each item on a 3-point scale at the start and end of the course.

The information from this first knowledge survey proved so valuable that the guest editor described it in 1996 in a geology journal as a formative assessment. As a result, geoscience faculty elsewhere took the lead in researching knowledge surveys and describing more benefits.

In 2003, U.S. Air Force Academy’s physics professor Delores Knipp and the guest editor published the first peer-reviewed paper (Nuhfer and Knipp, 2003) for multiple disciplines. If new to knowledge surveys, click the hotlink to that paper now and read at least the first page to gain a conceptual understanding of the instrument.

Self-assessment, Metacognition, and Knowledge Surveys

Becoming educated is a process of understanding self and the phenomena that one experiences. Knowledge surveys structure practices in understanding both. 

Our series’ earlier entries revealed the measurable influence of self-assessment on dispositions such as self-efficacy, mindset, and intellectual and ethical development that prove indispensable to the lifelong process of becoming educated. The entries on bias and privilege revealed that the privilege of having the kind of education that renders the unconscious conscious may determine the collective quality of a society and how well we treat one another within it.

Knowledge surveys prompt self-assessment reflections while learning every aspect of the content. Over a baccalaureate education, cumulative, repetitive practice can significantly improve understanding of one’s present knowledge and one’s self-assessment accuracy.

Improving Learning

Knowledge surveys’ original purpose was to improve student learning (e.g., Nuhfer & Knipp, 2003; Wirth et al., 2016, 2021). Providing students with a knowledge survey at the beginning of a course or unit of instruction offered an interactive roadmap for an entire course that overtly disclosed the instructor’s intentions for learning to students.

Early on, users recognized that knowledge surveys might offer a measure of the changes in learning produced by a unit of instruction. Demonstrating the validity of such self-assessed competence measures was crucial, and it was finally achieved in 2016 and 2017.

Deeper Reading

Students quickly learned the value of prioritizing knowledge by engaging with the knowledge survey before and during reading. The structure of the KSs enabled reading with the purpose of illuminating known learning objectives. The structure also primed students to understand concepts by using the reading to clarify the connectedness between knowledge survey items.

Rather than just sitting down to “complete a reading,” students began reading assignments with appropriate goals and strategies, a characteristic of “expert readers” (Paris et al., 1996). When they encountered difficult concepts, they exerted increased effort to understand the topics identified as essential to the concept. Further, knowledge surveys facilitated mentoring: when students did not understand the material, they proved more likely to follow up with a colleague or instructor to complete their understanding.

Facilitating Acquiring Self-Regulation

Well-constructed knowledge surveys are detailed products of instructor planning and thinking. They communicate instructor priorities and coordinate the entire class to focus on specific material in unison. The near disappearance from classroom conversations of student comments like “I didn’t know that would be on the exam” is a benefit that cannot be appreciated enough.

Replacing scattered class-wide guessing of what to study allowed a collective focus on “How will we learn this material?” That reframing led to adopting learning strategies that expert learners employ when they have achieved self-regulation. Students increasingly consulted with each other or the instructor when they sensed or realized their current response to a knowledge survey item was probably inadequate. 

Levels and Degrees of Understanding

In preparing a knowledge survey for a course, the instructor carefully writes each survey item and learning objective so that learning addresses the desired mastery at the intended Bloom level (Krathwohl, 2002). Making students aware of Bloom levels, and reinforcing that awareness throughout a course, clarifies the deep understanding required to teach the content at the intended Bloom level to another person. Whereas it may be sufficient to remember or comprehend some content, demonstrating higher cognitive processes by explaining to another how to apply, synthesize, or evaluate the central concepts and content of a course feels different because it is different.

Knowledge surveys can address all Bloom levels and provide the practices needed to enable the paired understanding of knowing and “feeling of knowing” like no other instrument. Including the higher Bloom levels, combined with the explicitly stated advanced degree of understanding as the level of “teaching” or “explaining” to others, builds self-assessment skills and fosters the development of well-justified self-confidence. A student with such awareness can better focus efforts on strengthening the knowledge in which they recognize their weakness.

Building Skills with Feedback

The blog entries by Fleisher et al. in this series stressed the value of feedback in developing healthy self-assessments. Knowledge survey items that address the same learning outcomes as quizzes, exams, assignments, and projects promote instructional alignment. Such alignment allows explicit feedback from the demonstrated competence measures to calibrate the accuracy of self-assessments of understanding. Over time, knowledge surveys confer awareness that appropriate feedback builds both content mastery and better self-assessment skills.

A robust implementation directs students to complete the relevant portions of a knowledge survey after studying for an exam but before taking it. After the teacher grades the exams, students receive their self-assessed (knowledge survey score) and demonstrated (graded exam score) competence in a single package. From this information, the instructor can direct students to compare their two scores and to receive mentoring from the instructor when there is a large discrepancy (>10 points) between the two scores. 
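The comparison step described above can be sketched as a short script. The names, data, and the 10-point threshold below are illustrative placeholders, not artifacts from the authors' actual implementation:

```python
# Sketch of the post-exam comparison step: pair each student's
# knowledge-survey (self-assessed) score with the graded exam
# (demonstrated) score, and flag discrepancies over 10 points for
# instructor mentoring. All data here are hypothetical.

def flag_for_mentoring(scores, threshold=10):
    """Return (student, gap) pairs where |self-assessed - demonstrated| > threshold."""
    flagged = []
    for name, (ks_score, exam_score) in scores.items():
        gap = ks_score - exam_score  # positive = overestimate, negative = underestimate
        if abs(gap) > threshold:
            flagged.append((name, gap))
    return flagged

scores = {
    "Student A": (85, 88),   # well calibrated: gap of -3, not flagged
    "Student B": (95, 70),   # overestimates by 25 points
    "Student C": (60, 78),   # underestimates by 18 points
}
print(flag_for_mentoring(scores))   # → [('Student B', 25), ('Student C', -18)]
```

The sign of the gap matters for mentoring: consistent positive gaps across several exams suggest habitual overconfidence, while consistent negative gaps suggest underconfidence.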

Generally, a significant discrepancy from a single knowledge survey-exam pair comparison is not as meaningful as longer-term trends illuminated by cumulative data. Instructors who use KSs skillfully mentor students to become familiar with their trends and tendencies. When student knowledge survey responses consistently over- or under-estimate their mastery of the content, the paired data reveal this tendency to the student and instructor and open the opportunity for conversations about the student’s habitually favored learning strategies.

A variant implementation adds an easy opportunity for self-assessment feedback. Here, instructors assign students to estimate their score on an assignment or exam at the start of engaging the project and after completing the test or assignment prior to submission. These paired pre-post self-assessments help students to focus on their feelings of knowing and to further adjust toward greater self-assessment accuracy.

Takeaways

Knowledge surveys are unique in their utility for supporting students’ mastery of disciplinary knowledge, developing their affect toward accurate feelings of knowing, and improving their skills as expert learners. Extensive data show that instructors’ skillful construction of knowledge surveys as part of class design elicits deeper thinking and produces higher quality classes. After construction, class use facilitates mutual monitoring of progress and success by students and instructors. In addition to supporting student learning of disciplinary content, knowledge surveys keep students in constant contact with their metacognition and develop their capacity for lifelong learning.

In Part 2, we report our more recent investigations of (1) more robust knowledge survey design, (2) the temporal qualities of becoming educated, (3) student authoring of knowledge surveys, and (4) mentoring students with large miscalibrations in self-assessed competence toward greater self-assessment accuracy.


Metacognition and Mindset for Growth and Success: Part 2 – Documenting Self-Assessment and Mindset as Connected

by Steven Fleisher, California State University
Michael Roberts, DePauw University
Michelle Mason, University of Wyoming
Lauren Scharff, U. S. Air Force Academy
Ed Nuhfer, Guest Editor, California State University (retired)

Self-assessment measures and mindset categorization both employ self-reported metacognitive responses that produce noisy data. Interpreting noisy data poses difficulties and generates peer-reviewed papers with conflicting results. Some published peer-reviewed works question the legitimacy and value of self-assessment and mindset.

Yeager and Dweck (2020) communicate frustration when other scholars deprecate mindset and claim that the mindset under which students pursue education makes no difference. Indeed, that seems similar to arguing that students’ enjoyment of education and attitudes toward it make no difference in its quality.

We empathize with that frustration when we recall our own, from seeing in class after class that our students were not “unskilled and unaware of it” and reporting those observations while a dominant consensus that “students can’t self-assess” proliferated. The fallout from our advocacy in our workplaces (mentioned in Part 2 of the entries on privilege) came with opinions that, since “the empiricists have spoken,” there was no reason we should study self-assessment further. Nevertheless, we found good reason to do so. Some of our findings might serve as an analogy for demonstrating the value of mindsets despite the criticisms leveled against them.

How self-assessment research became a study of mindset

In the summer of 2019, the guest editor and the first author of this entry taught two summer workshops on metacognition and learning at CSU Channel Islands to nearly 60 Bridge students about to begin their college experience. We employed a knowledge survey for the weeklong program, and the students also took the paired-measures Science Literacy Concept Inventory (SLCI). Students had the option of furnishing an email address if they wanted a feedback letter. About 20% declined feedback, and their mean score was 14 points lower (significant at the 99.9% confidence level) than those who requested feedback.

In revisiting our national database, we found that every campus revealed a similar significant split in performance. It mattered not whether the institution was open admissions or highly selective; the mean score of the majority who requested feedback (about 75%) was always significantly higher than those who declined feedback. We wondered if the responses served as an unconventional diagnosis of Dweck’s mindset preference.

Conventional mindset diagnosis employs a battery of agree-disagree queries to determine mindset inclination. Co-author Michael Roberts suggested we add a few mindset items to the SLCI, and Steven Fleisher selected three items from Dweck’s survey battery. After a few hundred student participants revealed only a marginal relationship between mindset diagnosed by these items and SLCI scores, Steve increased our items to five.

Who operates in fixed, and who operates in growth mindsets?

The personal act of choosing to receive or avoid feedback to a concept inventory offers a delineator to classify mindset preference that differs from the usual method of doing so through a survey of agree-disagree queries. We compare here the mindset preferences of 1734 undergraduates from ten institutions using (a) feedback choice and (b) the five agree-disagree mindset survey items that are now part of Version 7.1a of the SLCI. That version has been in use for about two years.

We start by comparing the two groups’ demonstrable competence measured by the SLCI. Both methods of sorting participants into fixed or growth mindset preferences confirmed a highly significant (99.9% confidence) greater cognitive competence in the growth mindset disposition (Figure 1A). As shown in the Figure, feedback choice created two groups of fixed and growth mindsets whose mean SLCI competency scores differed by 12 percentage points (ppts). In contrast, the agree-disagree survey items defined the two groups’ means as separated by only 4 ppts. However, the two methods split the student populace differently, with the feedback choice determining that about 20% of the students operated in the fixed mindset. In contrast, the agree-disagree items approach determined that nearly 50% were operating in that mindset.

We next compare the mean self-assessment accuracy of the two mindsets. In a graph, it is easy to compare mean skills between groups by comparing the scatter shown by one standard deviation (1 Sigma) above and below the means of each group (Figure 1B). The group members’ scatter in overestimating or underestimating their actual scores reveals a group’s developed capacity for self-assessment accuracy. Groups of novices show a larger scatter in their group’s miscalibrations than do groups of those with better self-assessment skills (see Figure 3 of resource at this link).

Graphs showing how fixed and growth mindsets relate to SLCI scores, differing based on how mindset is categorized.

Figure 1. A. Comparisons of competence (SLCI scores) of 1734 undergraduates between growth mindset participants (color-coded blue) and fixed mindset participants (color-coded red) as deduced by two methods: (left) agree-disagree survey items and (right) acceptance of, or opting out of, receiving feedback. “B” displays the spreads of one standard deviation (1 Sigma) in self-assessment miscalibration for the growth (blue) and fixed mindset (red) groups as deduced by the two methods. The thin black line marks a perfect self-assessment rating of 0, above which lie overconfident estimates and below which lie underconfident estimates. The smaller the standard deviation, revealed by the height of the rectangles in 1B, the better the group’s ability to self-assess accurately. The differences shown in A of 4 and 12 ppts and in B of 2.3 and 3.5 ppts are differences between means.

On average, students classified as operating in a growth mindset have better-calibrated self-assessment skills (less spread of over- and underconfidence) than those operating in a fixed mindset by either classification method (Figure 1B). However, the difference between fixed and growth was greater and more statistically significant when mindset was classified by feedback choice (99% confidence) rather than by the agree-disagree questions (95% confidence).

Overall, Figure 1 supports Dweck and others advocating for the value of a growth mindset as an asset to learning. We urge contextual awareness by referring readers to Figure 1 of Part 1 of this two-part thematic blog on self-assessment and mindset. We have demonstrated that choosing to receive or decline feedback is a powerful indicator of cognitive competence and at least a moderate indicator of metacognitive self-assessment skills. Still, classifying people into mindset categories by feedback choice addresses only one of the four tendencies of mindset shown in that Figure. Nevertheless, employing a more focused delineator of mindset preference (e.g., choice of feedback) may help to resolve the contradictory findings reported between mindset type and learning achievement.

At this point, we have developed the connections between self-assessment, mindset, and feedback we believe are most valuable to the readers of the IwM blog. Going deeper is primarily of value to those researching mindset. For them, we include an online link to an Appendix to this Part 2 after the References, and the guest editor offers access to SLCI Version 7.1a to researchers who would like to use it in parallel with their investigations.

Takeaways and future direction

Studies of self-assessment and mindset inform one another. The discovery of one’s mindset and gaining self-assessment accuracy require knowing self, and knowing self requires metacognitive reflection. Content learning provides the opportunity for developing the understanding of self by practicing for self-assessment accuracy and acquiring the feeling of knowing while struggling to master the content. Learning content without using it to know self squanders immense opportunities.

The authors of this entry have nearly completed a separate stand-alone article for a follow-up in IwM that focuses on using metacognitive reflection by instructors and students to develop a growth mindset.

References

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Random House.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487


Metacognition and Mindset for Growth and Success: APPENDIX to Part 2 – Documenting Self-Assessment and Mindset as Connected

by Ed Nuhfer, Guest Editor, California State University (retired)
Steven Fleisher, California State University
Michael Roberts, DePauw University
Michelle Mason, University of Wyoming
Lauren Scharff, U. S. Air Force Academy

This Appendix stresses numeracy and employs a dataset of 1734 participants from ten institutions to produce measures of cognitive competence, self-assessed competence, self-assessment accuracy, and mindset categorization. The database is sufficient to address essential issues introduced in our blogs.

Finding replicable relationships in noisy data requires aggregating participants into groups, drawn from a database collected with instruments proven to produce high-reliability measures (see Figure 10 at this link). If we assemble groups, say of 50, as shown in Figure 1B, we can attenuate the random noise in individuals’ responses (Fig. 1A) and produce a clearer picture of the signal hidden within the noise (Fig. 1B).
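A brief simulation with synthetic data (not the SLCI dataset) illustrates why grouping works: the scatter of group means shrinks by roughly the square root of the group size relative to individual scatter, so the underlying signal stands out:

```python
# Illustration, with synthetic data, of how aggregating individuals into
# groups of 50 attenuates random noise. The scatter of group means is
# roughly 1/sqrt(50) of the individual scatter.
import random
import statistics

random.seed(42)
true_signal = 70  # hypothetical true mean competence score
# 9000 simulated individuals, each with random error of sd = 15 points
individuals = [true_signal + random.gauss(0, 15) for _ in range(9000)]

# Aggregate into 180 groups of 50 and take each group's mean.
group_means = [statistics.mean(individuals[i:i + 50])
               for i in range(0, len(individuals), 50)]

print(round(statistics.stdev(individuals), 1))  # individual scatter, near 15
print(round(statistics.stdev(group_means), 1))  # group scatter, near 15/sqrt(50) ≈ 2.1
```

The same logic underlies both the groups of 50 behind Figure 1B and the groups of 100 used later in this Appendix.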

graphs showing postdicted self-assessment and SLCI a) individual data and b) group data

Figure 1. Raw person-by-person data on over 9800 participants (Fig. 1A) show a highly significant correlation between measures of actual competence from SLCI scores and postdicted self-assessed competence ratings. Aggregating the data into over 180 groups of 50 (Fig. 1B) reduces random noise and clarifies the relationship.

Random noise is not simply an inconvenience. In certain graphic types, random noise generates patterns that do not intuitively appear random, and researchers easily interpret these noise patterns as products of a human behavior signal. The “Dunning-Kruger effect” appears to be built on many researchers doing exactly that for over twenty years.

Preventing the confusion of noise with signal requires knowing what randomness looks like. Researchers can achieve this by ensuring that the surveys and test instruments used in any behavioral science study have high reliability and then constructing a simulated dataset by completing these instruments with random-number responses. The simulated population should equal that of the participants in the research study, and graphing the simulated data should employ the same graphic types researchers intend to use to present the participants’ data in a publication.
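A minimal version of that randomness baseline might look like the following sketch. The responses are purely synthetic random numbers, and the population size merely matches the study described in this Appendix; with pure noise, the correlation statistic should hover near zero, and anything a chosen graphic appears to show is pattern-from-noise:

```python
# Sketch of a randomness-baseline check: fill the instruments with
# random-number responses for a simulated population the same size as the
# real study, then examine the result with the same statistic or graphic
# planned for the participants' data. Here we compute a correlation.
import random

random.seed(0)
N = 1734  # match the real study's population size
sim_test = [random.uniform(0, 100) for _ in range(N)]  # random "test scores"
sim_self = [random.uniform(0, 100) for _ in range(N)]  # random "self-ratings"

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(sim_test, sim_self), 3))  # near 0 for pure noise
```

Any apparent structure that survives this all-noise baseline in the chosen graphic is a warning that the graphic type itself can manufacture patterns.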

The 1734 participants addressed in Parts 1 and 2 of this blog’s theme pair on mindset are part of the larger dataset represented in Figure 1. The number is smaller than 9800 because we only recently added mindset questions. 

The blog entry containing the link to this Appendix showed the two methods of classifying mindset to be consistent in designating a growth mindset as associated with higher scores on cognitive measures and more accurate self-assessments. However, that finding does not directly test how the two classification methods relate to one another. The fact, noted in the blog, that the two methods classified people differently gave reason to anticipate that they might not prove directly statistically related.

We need to employ groups to attenuate noise, and ideally, we want large groups with good prospects of a spread of values. We first picked the groups associated with furnishing information about privilege (Table 1) because these are groups large enough to attenuate random noise. Further, the groups displayed highly significant statistical spreads when we looked at self-assessed and demonstrable competence within these categories. Note well: we are not trying to study privilege aspects here. Our objective, for now, is to understand the relationship between mindset defined by agree-disagree items and mindset defined by requests for feedback.

We have aggregated our data in Table 1 from four parameters to yield eight paired measures and are ready to test for relationships. Because we already know the relationship between self-assessed competence and demonstrated competence, we can verify whether our existing dataset of 1734 participants, presented as eight paired-measures groups, is sufficient to deduce the relationship we already know. Looking at self-assessment thus serves as a calibration that helps answer, “How good is our dataset likely to be for distinguishing the unknown relationships we seek about mindset?”


Table 1. Mindset and self-assessment indicators by large groups. The table reveals each group’s mindset composition derived from both survey items and feedback and the populace size of each group.

Figure 2 shows that our dataset in Table 1 proved adequate for capturing the known significant relationship between self-assessed competence and demonstrated competence (Fig. 2A). The fit-line slope and intercept in Figure 2A reproduce the relationship established from much larger amounts of data (Fig. 1B). However, the dataset did not confirm a significant relationship between the results generated by the two methods of categorizing people into mindsets (Fig. 2B).

In Figure 2B, there is little spread; the plotted points cluster so tightly that, although the correlation comes close to significance, we are apprehensive that the linear relationship would replicate in a future study of a different populace. Because we chose categories with a large populace and large spreads, more data entered into these categories probably would not change the relationships in Figure 2A or 2B. More data might bump the correlation in Figure 2B into significance, but this could be more a consequence of the spread of the categories chosen for Table 1 than a product of a tight direct relationship between the two methods employed to categorize mindset. We can resolve this by doing something analogous to producing the graph in Figure 1B above.


Figure 2. Relationships between self-assessed competence and demonstrated competence (A) and growth mindset diagnosed by survey items and requests for feedback (B). The data graphed is from Table 1.

We next place the same participants from Table 1 into different groups, thereby removing the spread advantages conferred by the groups in Table 1. We randomize the participants to get a good mix of the populace from the ten schools, sort the randomized data by class rank to be consistent with the process used to produce Figure 1B, and aggregate them into groups of 100 (Table 2).

Table 2. 1700 students are randomized into groups of 100, and the means are shown for four categories for each group.


The results employing different participant groupings appear in Figure 3. Figure 3A confirms that the different groupings in Table 2 attenuate the spread introduced by the groups in Table 1.

Figure 3. The data graphed is from Table 2. Relationships between self-assessed competence and demonstrated competence appear in (A). In (B), plotting classified by agree-disagree survey items versus mindset classified by requesting or opting out of feedback fails to replicate the pattern shown in Figure 2 B


In Figure 3A, the matched pairs of self-assessed and demonstrable competence continue to reproduce a consistent line fit that, despite a diminished correlation, still attains significance, as in Figures 1B and 2A.

In contrast, the ability to show replication between the two methods for categorizing mindsets has completely broken down. Figure 3B shows a very different relationship from that displayed in 2B. The direct relationship between the two methods of categorizing mindset proves not replicable across different groupings.

For readers who may wish to try different groupings, we provide the raw dataset used for this Appendix, which can be downloaded from https://profcamp.tripod.com/iwmmindsetblogdata.xls.

Takeaways

The two methods of categorizing mindset, in general, designate growth mindset as associated with higher scores on tests of cognitive competence and, to a lesser extent, better self-assessment accuracy. However, the two methods do not show a direct relationship with each other. This indicates the two are addressing different dimensions of the multidimensional character of “mindsets.”


Metacognition and Mindset for Growth and Success: Part 1 – Understanding the Metacognitive Connections between Self-Assessment and Mindset

by Steven Fleisher, California State University
Michael Roberts, DePauw University
Michelle Mason, University of Wyoming
Lauren Scharff, U. S. Air Force Academy
Ed Nuhfer, Guest Editor, California State University (retired)

When I first entered graduate school, I was flourishing. I was a flower in full bloom. My roots were strong with confidence, the supportive light from my advisor gave me motivation, and my funding situation made me finally understand the meaning of “make it rain.” But somewhere along the way, my advisor’s support became only criticism; where there was once warmth, there was now a chill, and the only light I received came from bolts of vindictive denigration. I felt myself slowly beginning to wilt. So, finally, when he told me I did not have what it takes to thrive in academia, that I wasn’t cut out for graduate school, I believed him… and I withered away.                                                                              (actual co-author experience)

schematic of person with band aid and flowers growing who is facing other people
Image by Moondance from Pixabay

After reading the entirety of this two-part blog entry, return and read the shared experience above once more. You should find that you have an increased ability to see the connections there between seven elements: (1) affect, (2) cognitive development, (3) metacognition, (4) self-assessment, (5) feedback, (6) privilege, and (7) mindset. 

The study of self-assessment as a valid component of learning, educating, and understanding opens up fascinating areas of scholarship for new exploration. This entry draws on the same paired-measures research described in the previous blog entries of this series. Here we explain how measuring self-assessment informs understanding of mindset and feedback. Few studies connect self-assessment with mindset, and almost none rest on a sizeable validated data set. 

Mindset, self-assessment, and privilege

Mindset theory proposes that individuals lean toward one of two mindsets (Dweck, 2006) that differ based on internalized beliefs about intelligence, learning, and academics. According to Dweck and others, people fall along a continuum. At one end, a fixed mindset is defined by a core belief that one’s intelligence and thinking abilities are fixed and that effort cannot change them. At the other end, a growth mindset carries the belief that, through effort, people can expand and improve their abilities to think and perform (Figure 1).

Indeed, a growth mindset has support in the stages of intellectual, ethical, and affective development discovered by Bloom & Krathwohl and William Perry mentioned earlier in this series. However, mindset theory has evolved into making broader claims, advocating that a growth mindset also enhances performance in high-stakes functions such as leadership, teaching, and athletics.

diagram showing the opposite nature of fixed and growth mindset with respect to how people view effort, challenge, failure and feedback. From https://trainugly.com/portfolio/growth-mindset/

Figure 1. Fixed – growth mindset tendencies. (From https://trainugly.com/portfolio/growth-mindset/)

Do people choose their mindset, or do their experiences place them in their positions on the mindset continuum? Our Introduction to this series disclosed that people’s experiences of differing degrees of privilege influence their positioning along the self-assessment accuracy continuum, and self-assessment has some commonalities with mindset. However, a focused, evidence-based study of privilege’s influence on mindset inclination seems lacking.

Our Introduction to this series indicated that people do not choose their positions along the self-assessment continuum. People’s cumulative experiences place them there. Their positions result from their individual developmental histories, where degrees of privilege influence the placement through how many experiences an individual has that are relevant and helpful to building self-assessment accuracy. The same seems likely for determining positions along the mindset continuum.

Acting to improve equity in educational success

Because development during the pre-college years occurs primarily by chance rather than by design, people are rarely conscious of how everyday experiences form their dispositions. College students are unlikely even to know their positions on either continuum unless they receive a diagnostic measure of their self-assessment accuracy or their tendency toward a growth or a fixed mindset. Few receive either diagnosis anywhere during their education.

Adopting a more robust growth mindset and acquiring better self-assessment accuracy first require recognizing that these dispositions exist. After that, devoting systematic effort to consciously enlisting metacognition while learning disciplinary content seems essential. Changing the dispositions takes longer than just learning some factual content. However, the time required to see measurable progress can be significantly reduced by a mentor or coach who directs metacognitive reflection and provides feedback.

Teaching self-assessment to lower-division undergraduates by providing numerous relevant experiences and prompt feedback is a way to alleviate some of the inequity produced by differential privilege in pre-college years. The reason to do this early is to allow students time in upper-level courses to ultimately achieve healthy self-efficacy and graduate with the capacity for lifelong learning. A similar reason exists for teaching students the value of affect and growth mindset by providing awareness, coaching, and feedback. Dweck describes how achieving a growth mindset can mitigate the adverse effects of inequity in privilege.

Recognizing good feedback

Dweck places high value on feedback for achieving a growth mindset. Figure 1 in our guest series’ Introduction also emphasizes the importance of feedback in developing self-assessment accuracy and self-efficacy during college.

Depending on their beliefs about their ability to address a challenge, people respond in predictable ways when a skill requires effort, when a task seems challenging, when effort affects performance, and when feedback informs performance. Those with a fixed mindset expect that feedback will point out imperfections, which they take as indicative of their fixed ability rather than as guidance for growing their ability. To them, feedback shames them for their imperfections, and it hurts. They see learning environments as places of stressful competition between their own and others’ fixed abilities. Affirmations of success rest in grades rather than in growing intellectual ability.

Those with a growth mindset value feedback as illuminating the opportunities for advancing quickly in mastery during learning. Sharing feedback with peers in their learning community is a way to gain pleasurable support from a network that encourages additional effort. There is little doubt which mindset promotes the most enjoyment, happiness, and lasting friendships and generates the least stress during the extended learning process of higher education.

Dweck further stresses the importance of distinguishing feedback that is helpful from feedback that is damaging. Our lead paragraph above revealed a devastating experience that would influence any person to fear feedback and seek to avoid it. A formative influence that disposes us to accept or reject feedback likely lies in the nature of feedback that we received in the past. A tour through traits of Dweck’s mindsets suggests many areas where self-perceptions can form through just a single meaningful feedback event. 

Australia’s John Hattie has devoted his career to improving education, and feedback is his specialty. Hattie concluded that feedback is “…the most powerful single moderator that enhances achievement” and noted in this University of Auckland newsletter that it is “…arguably the most critical and powerful aspect of teaching and learning.”

Hattie and Timperley (2007) synthesized many years of studies to determine what constitutes feedback helpful to achievement. In summary, valuable feedback focuses on the work process, but feedback that is not useful focuses on the student as a person or their abilities and communicates evaluative statements about the learner rather than the work. Hattie and Dweck independently arrived at the same surprising conclusion: even praise directed at the person, rather than focusing on the effort and process that led to the specific performance, reinforces a fixed mindset and is detrimental to achievement.

Professors seldom receive mentoring on how to provide feedback that would promote growth mindsets. Likewise, few students receive mentoring on how to use peer feedback in constructive ways to enhance one another’s learning. 

Takeaways

Scholars visualize both mindset and self-assessment as linear continua, each with a disposition at either end: growth and fixed mindsets, and perfectly accurate and wildly inaccurate self-assessments. In this Part 1, we suggest that self-assessment and mindset have surprisingly close connections that scholars have scarcely explored.

Increasing metacognitive awareness seems key to tapping the benefits of skillful self-assessment, mindset, and feedback and allowing effective use of the opportunities they offer. Feedback seems critical in developing self-assessment accuracy and learning through the benefits of a growth mindset. We further suggest that gaining benefit from feedback is a learnable skill that can influence the success of individuals and communities. (See Using Metacognition to Scaffold the Development of a Growth Mindset, Nov 2022.)

In Part 2, we share findings from our paired measures data that partially explain the inconsistent results that researchers have obtained between mindset and learning achievement. Our work supports the validity of mindset and its relationship to cognitive competence. It allows us to make recommendations for faculty and students to apply this understanding to their advantage.

 

References

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Random House.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487

Heft, I., & Scharff, L. (2017). Aligning best practices to develop targeted critical thinking skills and habits. Journal of the Scholarship of Teaching and Learning, 17(3), 48–67. http://josotl.indiana.edu/article/view/22600

Isaacson, R. M., & Fujita, F. (2006). Metacognitive knowledge monitoring and self-regulated learning: Academic success and reflections on learning. Journal of the Scholarship of Teaching and Learning, 6(1), 39–55. Retrieved from https://eric.ed.gov/?id=EJ854910

Yeager, D. S., & Dweck, C. S. (2020). What can be learned from growth mindset controversies? American Psychologist, 75(9), 1269–1284. https://doi.org/10.1037/amp0000794

 


Metacognitive Self-assessment in Privilege and Equity – Part 2: Majority Privilege in Scientific Thinking

Ed Nuhfer, California State University (Retired)
Rachel Watson, University of Wyoming
Cinzia Cervato, Iowa State University
Ami Wangeline, Laramie County Community College

Being in the majority carries the privilege of setting the norms for acceptable beliefs. Minority status for any group invites marginalization by the majority simply because the group appears different from the familiar majority. Here, we explore why this survival mechanism (bias) also operates when a majority perceives an idea as different from, and potentially threatening to, established norms.

Young adult learners achieve comfort in ways of thinking and explaining the world from their experiences obtained during acculturation. Our Introduction stressed how these experiences differ in the majority and minority cultures and produce measurable effects. Education disrupts established states of comfort by introducing ideas that force reexaminations that contradict earlier beliefs established from experiences.

Even college training that promotes only growing cognitive expertise is disruptive; more important, research verifies that the disruptions are felt. While discovering the stages of intellectual development, William Perry Jr. found that, for some learners, the feelings experienced during transitions toward certain higher stages of thinking were so discomforting that the students ceased trying to learn and withdrew. Currently, about a third of first-year college students drop out before their sophomore year.

Educating for self-assessment accuracy to gain control over bias

We believe that the same survival mechanisms that promote prejudice and suppress empathizing with and understanding of different demographic groups also cripple understanding in encounters with unfamiliar or contrarian ideas. In moments that introduce ideas disruptive to beliefs or norms, unfamiliar ideas become analogous to unfamiliar groups: easily marginalized and thoughtlessly devalued in snap judgments. Practice in self-assessment when new learning surprises us should be valuable for gaining control over the mechanism that triggers our own polarizing bias.

Image: a maze on a black background, each branch showing words such as “response,” “meaning,” “bias,” and “memory” (credit: John Hain, Pixabay).

Earlier (Part 2 entry on bias), we recommended teaching students to frequently self-assess, “What am I feeling that I want to be true, and why do I have that feeling?” That assignment ensures that students encounter disruptive surprises mindfully by becoming aware of affective feelings involved in triggering their bias. Awareness gives the greater control over self needed to prevent being captured by a reflex to reject unfamiliar ideas out of hand or to marginalize those who are different.

Routinely employing self-assessment in teaching provides the prolonged, relevant practice with feedback required for understanding self. Educating for self-assessment accuracy shifts the goal from training students to “know stuff” to educating students to know how they can think, in order to understand both “stuff” and self.

When a first encounter with something or someone produces apprehension, those who have gained self-assessment accuracy through practice can exercise more control over their learning: they recognize the feeling that accompanies the incipient activation of bias in reaction to discomfort. Such self-awareness allows a pause to reflect on whether enlisting this vestigial survival mechanism serves understanding, and it can prevent bias from terminating our learning or inducing us to speak or act in ways that do not serve understanding.

Affect, metacognition, and self-assessment: minority views of contrarian scholars

We address three areas of scholarship relevant to this guest-edited series to show how brain survival mechanisms act to marginalize ideas that contradict an established majority consensus.

Our first example area involves the marginalization of the importance of affect by the majority of behavioral scientists. Antonio Damasio (1999, p. 39) briefly described this collective marginalization:

There would have been good reason to expect that, as the new century started, the expanding brain sciences would make emotion part of their agenda…. But that…never came to pass. …Twentieth Century science…moved emotion back into the brain, but relegated it to the lower neural strata associated with ancestors whom no one worshipped. In the end, not only was emotion not rational, even studying it was probably not rational.

A past entry in Improve with Metacognition (IwM) also noted the chilling prejudice against valuing affect during the 20th Century. Benjamin Bloom’s Taxonomy of the Affective Domain (Krathwohl et al. 1964) received an underwhelming reception from educators who had given unprecedented accolades to the team’s earlier volume on Taxonomy of the Cognitive Domain (Bloom, 1956). Also noted in that entry was William G. Perry’s purposeful avoidance of referring to affect in his landmark book on intellectual and ethical development (Perry, 1999). The Taxonomy of the Affective Domain also describes a developmental model that maps onto the Perry model of development much better than Bloom’s Taxonomy of the Cognitive Domain.

Our second example involves resistance against valuing metacognition. Dunlosky and Metcalfe (2009) traced this resistance to French philosopher Auguste Comte (1798–1857), who held that an observer trying to observe self was engaged in an impossible task, like an eye trying to see itself by looking inward. In the 20th Century, the behaviorist school of psychology gave new life to Comte’s views by professing that individuals’ ability to do metacognition, if such an ability existed, held little value. According to Dunlosky and Metcalfe (2009, p. 20), the behaviorists held “…a stranglehold on psychology for nearly 40 years…” until the mid-1970s, when the work of John Flavell (see Flavell, 1979) made the term and concept of metacognition acceptable in academic circles.

Our third example area involves people’s ability to self-assess. “The Dunning-Kruger effect” holds that most people habitually overestimate their competence, with those least competent holding the most overly inflated views of their abilities and those with real expertise revealing more humility by consistently underestimating their abilities by modest amounts. Belief in “the effect” permeated many disciplines and became popular among the general public. As of this writing, a Google search brought up 1.5 million hits for the “Dunning Kruger effect.” It still constitutes the majority view of American behavioral scientists about human self-assessment, even after recent work revealed that the original mathematical arguments for “the effect” were untenable. 

Living a scholars’ minority experience

Considering prejudice against people and bias against new ideas as manifestations of a common, innate survival mechanism obviates fragmenting them into separate problems addressed through unrelated educational approaches. Perceiving that all biases are related makes evident that the tendency to marginalize a new idea will certainly marginalize the proponents of that idea.

Seeing all bias as related through a common mechanism supports using metacognition, particularly self-assessment, to gain personal awareness and control over the thoughts and feelings produced as the survival mechanism starts to trigger them. Thus, every discomforting learning experience in every subject offers an opportunity for self-assessment practice to gain conscious control over the instinct to react with bias.

Some of this blog series’ authors experienced firsthand the need for higher education professionals to acquire such control. When publishing our early primary research in the early 1990s, we were naively unaware of the majority consensus, had not yet considered bias as a survival reaction, and had not anticipated marginalization. Suggesting frequent self-assessments as worthwhile teaching practice in the peer-reviewed literature brought reactions that jolted us from complacency into a new awareness.

Scholars around the nation, several of them authors of this blog series, read the guest editor’s early work, introduced self-assessment in their classes, and launched self-assessment research of their own. Soon after, disparagement followed at the departmental, college, and university levels, and even at professional meetings. Some of it damaged careers and work environments.

The bias imparted by marginalization led us to doubt ourselves. For a time, our feelings were like those of the non-binary gender group presented in Figure 1 of Part 1 on privilege: we “knew our stuff,” but our feelings of competence in our knowledge lagged. Thanks to feedback from the peer reviewers of Numeracy, we now live with less self-doubt. Those of us who weathered the storm emerged with greater empathy for minority status and minority feelings and a greater valuing of self-assessment.

Self-assessment, a type of metacognition employing affect, seems to be undergoing a paradigm change that recapitulates the history of affect and metacognition. Our Numeracy articles have achieved over 10,000 downloads, and psychologists in Europe, Asia, and Australia now openly question “the effect” (Magnus & Peresetsky, 2021; Kramer et al., 2022; Hofer et al., 2022; Gignac, 2022) in psychology journals. The Office for Science and Society at McGill University in Canada reached out to the lay public (Jarry, 2020) to warn that new findings require reevaluating “the effect.” We recently discovered that paired measures could even unearth unanticipated stress indicators among students (view the section from 21:38 to 24:58) during the turbulent times of COVID and civil disruption.

Takeaways

Accepting the teaching of self-assessment as good educational practice, and self-assessment measures as valid assessments, opens avenues for research that are indeed rational to study. Once one perceives bias as having a common source, developing self-assessment accuracy seems a way to gain control over the personal bias that triggers hostility against people and ideas that are not threatening, just different.

“Accept the person you are speaking with as someone who has done amazing things” is an outstanding practice stressed at the University of Wyoming’s LAMP program. Consciously setting one’s cognition and affect to that practice erases all opportunities for marking anyone or their ideas for inferiority.

References

Bloom, B.S. (Ed.). (1956). Taxonomy of educational objectives, handbook 1: Cognitive domain. New York, NY: Longman.

Damasio, A. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: a new area of cognitive-developmental inquiry. American Psychologist 34, 906-911.

Gignac, G. E. (2022). The association between objective and subjective financial literacy: Failure to observe the Dunning-Kruger effect. Personality and Individual Differences, 184, 111224. https://doi.org/10.1016/j.paid.2021.111224

Hofer, G., Mraulak, V., Grinschgl, S., & Neubauer, A.C. (2022). Less-Intelligent and Unaware? Accuracy and Dunning–Kruger Effects for Self-Estimates of Different Aspects of Intelligence. Journal of Intelligence, 10(1). https://doi.org/10.3390/jintelligence10010010

Kramer, R. S. S., Gous, G., Mireku, M. O., & Ward, R. (2022). Metacognition during unfamiliar face matching. British Journal of Psychology, 00, 1– 22. https://doi.org/10.1111/bjop.12553

Krathwohl, D.R., Bloom, B.S. and Masia, B.B. (1964) Taxonomy of Educational Objectives: The Affective Domain. New York: McKay.

Magnus, J. R., & Peresetsky, A. (2021). A statistical explanation of the Dunning-Kruger effect. Tinbergen Institute Discussion Paper 2021-092/III. http://dx.doi.org/10.2139/ssrn.3951845

Nicholas-Moon, K. (2018). Examining science literacy levels and self-assessment ability of University of Wyoming students in surveyed science courses using the Science Literacy Concept Inventory with expanded inclusive demographics (Master’s thesis, University of Wyoming).

Perry, W. G. Jr. (1999). Forms of Ethical and Intellectual Development in the College Years. San Francisco, CA: Jossey-Bass (a reprint of the original 1968 work with minor updating).

Tarricone, P. (2011). The Taxonomy of Metacognition (1st ed.). Psychology Press. 288p. https://doi.org/10.4324/9780203830529


Metacognitive Self-assessment in Privilege and Equity – Part 1 Conceptualizing Privilege and its Consequences

by Rachel Watson, University of Wyoming
Ed Nuhfer, California State University (Retired)
Cinzia Cervato, Iowa State University
Ami Wangeline, Laramie County Community College

Demographics of metacognition and privilege

The Introduction to this series asserted that lives of privilege in the K-12 years confer relevant experiences advantageous for acquiring the competence required for lifelong learning and for entry into professions that require college degrees. Healthy self-efficacy is necessary to succeed in college, and such self-efficacy comes only after acquiring self-assessment accuracy through practice that attunes the feelings of competence to demonstrable competence. We concur with Tarricone (2011) in her recognition of affect as an essential component of the self-assessment (or awareness) component of metacognition: the “‘feeling of knowing’ that accompanies problem-solving, the ability to distinguish ideas about which we are confident….”

A surprising finding from our paired measures is how closely the mean self-assessments of performance of groups of people track with their actual mean performances. According to the prevailing consensus of psychologists, mean self-assessments of knowledge are supposed to confirm that people, on average, overestimate their demonstrable knowledge. According to a few educators, self-reported knowledge is supposed to be just random noise with no meaningful relationship to demonstrable knowledge. Data published in 2016 and 2017 in Numeracy from two reliable, well-aligned instruments revealed that such is not the case. Our reports in Numeracy shared earlier on this blog (see Figures 2 and 3 at this link) confirm that people, on average, self-assess reasonably well. 

In 2019, employing the paired measures, we found that particular groups’ average competence varied measurably and that their average self-assessed competence closely tracked their demonstrable competence. In brief, different demographic groups, on average, not only performed differently but also felt differently about their performance, and their feelings were accurate.
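The group-level tracking described above can be illustrated with a toy computation. The numbers below are invented for illustration and are not the authors’ Numeracy data; each pair holds a demonstrated score and a self-assessed rating, both in percent.

```python
# Hypothetical paired measures for two groups of learners. Each pair is
# (demonstrated competence score, self-assessed competence rating), in %.
# Values are illustrative only, not the authors' Numeracy data.
groups = {
    "Group A": [(82, 80), (74, 70), (68, 72), (90, 86)],
    "Group B": [(61, 58), (55, 60), (70, 66), (64, 59)],
}

for name, pairs in groups.items():
    mean_score = sum(s for s, _ in pairs) / len(pairs)
    mean_rating = sum(r for _, r in pairs) / len(pairs)
    # The paired-measures pattern described above: group means track
    # closely even when individual students err in either direction.
    print(f"{name}: mean score {mean_score:.1f}%, "
          f"mean self-assessment {mean_rating:.1f}%")
```

Individual students in this sketch over- and underestimate by several points, yet each group’s mean rating sits within about two points of its mean score, which is the pattern the paired measures revealed.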

Conceptualizing privilege and its consequences

Multiple systems (structural, attitudinal, institutional, economic, racial, cultural, etc.) produce privilege, and all individuals and groups experience privilege and disadvantage in some aspects of their lives. We visualize each system as a hierarchical continuum: at one end lie those systematically marginalized/minoritized; at the other lie those afforded the most advantages. Because people live and work within multiple systems, each person likely operates at different positions along different continua.

Those favored by privilege are often unaware of their part in maintaining a hierarchy that exerts its toll on those of lesser privilege. In studying the effects of differing statuses of privilege, we found that instruments pairing measures of cognitive competence with self-assessments of that competence offer richer assessments than competency scores alone: they also inform us about how students feel and how accurately they self-assess. Students’ histories of privilege seem to influence how effectively they can initially do the kinds of metacognition conducive to furthering intellectual development when they enter college.

Sometimes a group’s hierarchy results from a lopsided division into some criterion-based majority/minority split. There, advantages, benefits, status, and even acceptance, deference, and respect often become inequitably and systematically conferred by identity on the majority group but not on the underrepresented minority groups. 

Being a minority can invite being marked as “inferior,” with an unwarranted majority negative bias toward the minority, presuming the latter have inferior cognitive competence and even lower capacity for feeling than the majority. Perpetual exposure to such bias can influence the minority group to doubt themselves and unjustifiably underestimate their competence and capacity to perform. By employing paired measures, Wirth et al. (2021, p. 152 Figs.6.7 & 6.8) found recently that undergraduate women, who are the less represented binary gender in science, consistently underestimated their actual abilities relative to men (the majority) in science literacy.

We found that in the majority ethnic group (white Caucasians), both binary genders, on average, significantly outperformed their counterparts in the minority group (all other self-identified ethnicities combined) in both the competence scores of science literacy and the mean self-assessed competency ratings (Figure 1). 


Figure 1. Graph of gender performance on measures of self-assessed competence ratings and demonstrated competence scores across ethnic majority/minority categories. The graph represents ten years of paired-measures data collection, but we began collecting non-binary gender data only within the last year, so that group is sparsely represented. Horizontal colored lines, coded to the legend’s colored circles, mark the positions of the means of scores and ratings in percent at the 95% confidence level.

Notably, in Figure 1, the non-binary gender groups, majority or minority, were the strongest academic group of the three gender categories based on SLCI scores. Still, relative to their performance, the non-binary groups felt that they performed less well than they actually did.  

On a different SLCI dataset with a survey item on sexual preference rather than gender, researcher Kali Nicholas Moon (2018) found the same degree of diminished self-assessed competence relative to demonstrated competence for the small LGBT group (see Fig. 7 p. 24 of this link). Simply being a minority may predispose a group to doubt their competence, even if they “know their stuff” better than most.

The mean differences in performance shown in Figure 1 are immense. For perspective, pre-post measures across a GE college science course or two rarely produce mean differences of more than a couple of percentage points on the SLCI. In both majority and minority groups, females, on average, underestimated their performance, whereas males overestimated theirs.

If a group receives constant messages that their thinking may be inferior, it is hardly surprising that they internalize feelings of inferiority that are damaging. Our evidence above from several groups verifies such a tendency. We showed that lower feelings of competence parallel significant deficit performance on a test of understanding science, an area relevant to achieving intellectual growth and meeting academic aspirations. Whether this signifies a general tendency of underconfidence in minority groups for meeting their aspirations in other areas remains undetermined.

Perpetuating privilege in higher education

Academe nurtures many hierarchies. Across institutions, “Best Colleges” rating lists perpetuate a myth that institutions making the list are, in all ways and for all students, “better than” those not on it. Some state institutions actively promote a “flagship reputation,” implying that the state’s other schools are “inferior.” Being in a community of peers that reinforces such hierarchical valuing confers damaging messages of inferiority on those attending the “inferior” institutions, much as an ethnic majority casts negative messages at the minority.

Within institutions, different disciplines are valued differently, and people experience differential privileges across the departments and programs that educate for those disciplines. The consequences here, in stress, alienation, and physical endangerment, are minor compared with those experienced by socially marginalized/minoritized groups. Nevertheless, advocating for any change in an established hierarchy in any community is perceived by some as disruptive and can bring the consequences of diminished privilege. National communities of academic research often prove no exception.

Takeaways

Hierarchies usually define privilege, and majority groups often support hierarchies detrimental to the well-being of minority groups. Although test scores are the prevalent measures of learning mastery, paired measures of cognitive competence and self-assessed competence provide additional information about students’ affective feelings about content mastery and their developing capacity for accurate self-assessment. This information helps reveal inequity across groups and monitors how well students can employ the higher education environment to advance their understanding of specialty content and of self. Paired measures confirm that groups of varied privilege fare differently in employing that environment to meet their aspirations.


Understanding Bias in the Disciplines: Part 2 – the Physical and Quantitative Sciences 

by Ed Nuhfer, California State University (Retired)
Eric Gaze, Bowdoin College
Paul Walter, St Edwards University
Simone Mcknight (Simone Erchov), Global Systems Technology

In Part 1, we summarized psychologists’ current understanding of bias. In Part 2, we connect conceptual reasoning and metacognition and show how bias challenges clear reasoning even in “objective” fields like science and math.

Science as conceptual

College catalogs’ explanations of general education (GE) requirements almost universally indicate that the desired learning outcome of the required introductory science course is to produce a conceptual understanding of the nature of science and how it operates. Focusing only on learning disciplinary content in GE courses squeezes out stakeholders’ awareness that a unifying outcome even exists. 

Wherever a GE metadisciplinary requirement (for example, science) specifies a choice of a course from among the metadiscipline’s different content disciplines (for example, biology, chemistry, physics, geology), each course must communicate an understanding of the way of knowing established in the metadiscipline. That outcome is what the various content disciplines share in common. A student can then understand how different courses emphasizing different content can effectively teach the same GE outcome.

The guest editor led a team of ten investigators from four institutions and separate science disciplines (biology, chemistry, environmental science, geology, geography, and physics). Their original proposal was to investigate ways to improve the learning in the GE science courses. While articulating what they held in common as professing the metadiscipline of “science,” the investigators soon recognized that the GE courses they took as students had focused on disciplinary content but scarcely used that content to develop an understanding of science as a way of knowing. After confronting the issue of teaching with such a unifying emphasis, they later turned to the problem of assessing success in producing this different kind of understanding.

Upon discovering no suitable off-the-shelf assessment instrument to meet this need, they constructed the Science Literacy Concept Inventory (SLCI). This instrument later made possible this guest-edited series and the confirmation of knowledge surveys as valid assessments of student learning.

Concept inventories test understanding of the concepts that form the supporting framework for larger overarching blocks of knowledge or thematic ways of thinking or doing. The SLCI tests nine concepts specific to science and three more related to the practice of science and to connecting science’s way of knowing with contributions from other requisite GE metadisciplines.

Self-assessment’s essential role in becoming educated

Self-assessment is partly cognitive (the knowledge one has) and partly affective (what one feels about the sufficiency of that knowledge to address a present challenge). Self-assessment accuracy confirms how well a person can align both when confronting a challenge.

Developing good self-assessment accuracy begins with an awareness that having a deeper understanding starts to feel different from merely having surface knowledge needed to pass a multiple-choice test. The ability to accurately feel when deep learning has occurred reveals to the individual when sufficient preparation for a challenge has, in fact, been achieved. We can increase learners’ capacity for metacognition by requiring frequent self-assessments that give them the practice needed to develop self-assessment accuracy. No place needs teaching such metacognition more than the introductory GE courses.
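One simple way to operationalize the alignment between felt and demonstrated competence is as a signed gap between a self-assessed rating and the earned score. The sketch below is our illustration of that idea, with hypothetical numbers; it is not the instrument or metric the authors use.

```python
# A minimal sketch of one paired measure, assuming a learner rates their
# expected performance (in %) before taking the test. The function name
# and the example values are our illustration, not an established metric.
def self_assessment_gap(self_rating: float, actual_score: float) -> float:
    """Signed gap in percentage points: positive means overestimation,
    negative means underestimation, near zero means accurate self-assessment."""
    return self_rating - actual_score

gap = self_assessment_gap(self_rating=75.0, actual_score=68.0)
print(f"gap: {gap:+.1f} points")  # prints "gap: +7.0 points"
```

Tracking such gaps over many assignments, rather than one, is what gives learners the repeated feedback the paragraph above describes.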

Regarding our example of science, the 25 items on the SLCI that test understanding of the twelve concepts derive from actual cases and events in science. Their connection to bias lies in a pattern: when things go wrong in doing or learning science, some concept is unconsciously being ignored or violated, and the violation is often traceable to a bias that hijacked the ability to use available evidence.

We often say: “Metacognition is thinking about thinking.” When encountering science, we seek to teach students to “think about” (1) “What am I feeling that I want to be true and why do I have that feeling?” and (2) “When I encounter a scientific topic in popular media, can I articulate what concept of science’s way of knowing was involved in creating the knowledge addressed in the article?”

Examples of bias in physical science

“Misconceptions research” constitutes a block of science education scholarship. Schools do not teach the misconceptions. Instead, people develop preferred explanations for the physical world from conversations that mostly occur in pre-college years. One such explanation addresses why summers are warm and winters are cold. The explanation that Earth is closer to the sun in summer is common and acquired by hearing it as a child. The explanation is affectively comfortable because it is easy, with the ease coming from repeatedly using the neural network that contains the explanation to explain the seasonal temperatures we experience. We eventually come to believe that it is true. However, it is not true. It is a misconception.

When a misconception becomes ingrained in our brain neurology over many years of repeated use, we cannot easily break our habit of invoking the neural network that holds the misconception until we can bypass it by constructing a new network that holds the correct explanation. Still, the latter will not yield a network that is more comfortable to invoke until usage sufficiently ingrains it. Our bias tendency is to invoke the most ingrained explanation because doing so is easy.

Even when individuals learn better, they often revert to invoking the older, ingrained misconception. After physicists developed the Force Concept Inventory (FCI) to assess students’ understanding of conceptual relationships about force and motion, they discovered that GE physics courses only temporarily dislodged students’ misconceptions. Many students soon reverted to invoking their previous misconceptions. The same investigators revolutionized physics education by confirming that active learning instruction better promoted overcoming misconceptions than did traditional lecturing.

The pedagogy that succeeds seemingly activates a more extensive neural network (through interactive discussing, individual and team work on problem challenges, writing, visualizing through drawing, etc.) than was activated to initially install the misconception (learning it through a brief encounter).

Biases that add wanting to believe something as true or untrue are especially difficult to dislodge. An example of the power of bias with emotional attachment comes from geoscience.

Nearly all school children in America today are familiar with the plate tectonics model, moving continents, and ephemeral ocean basins. Yet few realize that the central ideas of plate tectonics were once scorned as “Germanic pseudoscience” in the United States. That happened because a few prominent American geoscientists so much wanted their established explanations to be true that affect hijacked these experts’ ability to perceive the evidence. These geoscientists also exercised enough influence in the U.S. to keep plate tectonics out of American introductory-level textbooks, and American universities introduced plate tectonics in introductory GE courses only years later than Europe did.

Example of Bias in Quantitative Reasoning

People usually cite mathematics as the most dispassionate discipline and the least likely for bias to corrupt. However, researcher Dan Kahan and colleagues demonstrated that bias also disrupts people's ability to use quantitative data and think clearly.

Researchers asked participants to resolve whether a skin cream effectively treated a skin rash. Participants received data for subjects who did or did not use the skin cream. Among users, the rash got better in 223 cases and got worse in 75 cases. Of subjects who did not use the skin cream, the rash got better in 107 cases and worse in 21 cases.

Participants then used the data to select between two conclusions: (A) people who used the cream were more likely to get better, or (B) people who used the cream were more likely to get worse. More than half of the participants (59%) selected the answer the data did not support. This query was primarily a numeracy test: deducing the meaning of the numbers.
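The numeracy at stake is a comparison of rates rather than raw counts. A few lines of arithmetic, using the figures given above, make the trap explicit:

```python
# Outcome counts from the skin cream scenario described above.
improved_with, worsened_with = 223, 75        # subjects who used the cream
improved_without, worsened_without = 107, 21  # subjects who did not

# The trap is comparing raw counts (223 vs. 107). Answering correctly
# requires comparing the *rate* of improvement within each group.
rate_with = improved_with / (improved_with + worsened_with)
rate_without = improved_without / (improved_without + worsened_without)

print(f"improved with cream:    {rate_with:.1%}")    # 74.8%
print(f"improved without cream: {rate_without:.1%}")  # 83.6%
```

Because the improvement rate is higher among non-users, the data support option (B): relative to those who did not use it, people who used the cream were more likely to get worse.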

Then, using the same numbers, the researchers added affective bait. They replaced the skin cream query with a query about the effects of gun control on crime in two cities. One city allowed concealed gun carry, and another banned concealed gun carry. Participants had to decide whether the data showed that concealed carry bans increased or decreased crime.

Self-identified conservative Republicans and liberal Democrats responded with the desire to believe acquired from their party affiliations. Their responses were even more erroneous than those of the skin cream participants. Republicans greatly overestimated increased crime from gun bans, but no more than Democrats overestimated decreased crime from gun bans (Figure 1). When operating from "my-side" bias planted by either party, citizens significantly lost their ability to think critically and to use numerical evidence. This was true whether the self-identified partisans had low or high numeracy skills.

Graph comparing responses from those with low and high numeracy skills. Those with high numeracy always show better accuracy (smaller variance around the mean). When the topic was non-partisan, the means for the low- and high-numeracy groups were roughly the same and showed little bias in the direction of error. When the topic was partisan, those with lower skill showed strong bias, and those with higher skill still showed some bias.

Figure 1. Effect of bias on interpreting simple quantitative information (from Kahan et al. 2013, Fig. 8). Numerical data needed to answer whether a cream effectively treated a rash triggered low bias responses. When researchers employed the same data to determine whether gun control effectively changed crime, polarizing emotions triggered by partisanship significantly subverted the use of evidence toward what one wanted to believe.

Takeaway

Decisions and conclusions that appear to be based solely on objective data rarely are. Increasing metacognitive capacity produces awareness of the prevalence of bias.


Understanding Bias in the Disciplines: Part 1 – the Behavioral Sciences 

by Simone Mcknight (Simone Erchov), Global Systems Technology
Ed Nuhfer, California State University (Retired)
Eric Gaze, Bowdoin College
Paul Walter, St. Edward's University

Bias as conceptual

Bias arises from human brain mechanisms that process information in ways that make decision-making quicker and more efficient at the cognitive/neural level. Bias is an innate human survival mechanism, and we all employ it.

Bias is a widely known and commonly understood psychological construct. The common understanding of bias is “an inclination or predisposition for or against something.” People recognize bias by its outcome—the preference to accept specific explanations or attributions as true.

In everyday conversation, discussions about bias concern the preferences and notions people hold on various topics. For example, people recognize that biases may influence the development of prejudice (e.g., ageism, sexism, racism, tribalism, nationalism) and of political or religious beliefs.

the words "Bias in the Behavioral Sciences" on a yellow background

A deeper look reveals that some of these preferences are unconscious. Nevertheless, they derive from a related process called cognitive bias, a propensity to use preferential reasoning to assess objective data in a biased way. This entry introduces the concept of bias, provides an example from the behavioral sciences, and explains why metacognition can be a valuable tool to counteract bias. In Part 2, which follows this entry, we provide further examples from hard science, field science, and mathematics.

Where bias comes from

Biases develop from the mechanisms by which the human brain processes information as efficiently as possible. These unconscious and automatic mechanisms make decision-making more efficient at the cognitive/neural level. Most mechanisms that help the human brain make fast decisions are credited to adaptive survival. Like other survival mechanisms, bias loses value and can become a detriment in a modern civilized world where threats to our survival are infrequent. Cognitive biases are subconscious errors in thinking that lead to misinterpreting information from the environment. These errors, in turn, impact the rationality and accuracy of decisions and judgments.

When we frame unconscious bias within the context of cognitive bias and survival, it is easier to understand how all of us have inclinations to employ bias and why any discipline that humans manage is subject to bias. Knowing this makes it easier to account for the frequent biases affecting the understanding and interpreting of diverse kinds of data.

People easily believe that bias only exists in “subjective” disciplines or contexts where opinions and beliefs seem to guide decisions and behavior. However, bias manifests in how humans process information at the cognitive level. Although it is easier to understand bias as a subjective tendency, the typical way we process information means that bias can pervade all of our cognition.

Intuitively, disciplines relying on tangible evidence, logical arguments, and natural laws of the physical universe would seem factually based and less influenced by feelings and opinion. After all, “objective disciplines” do not predicate their findings on beliefs about what “should be.” Instead, they measure tangible entities and gather data. However, even in the “hard science” disciplines, the development of a research question, the data collected, and the interpretations of data are vulnerable to bias. Tangible entities such as matter and energy are subject to biases as simple as differences in perception of the measured readings on the same instrument. In the behavioral sciences, where investigative findings are not constrained by natural law, bias can be even harder to detect. Thus, all scientists carry bias into their practice of science, and students carry bias into their learning of it.

Metacognition can help counter our tendencies toward bias because it involves bringing relevant information about a process (e.g., conducting research, learning, or teaching) into awareness and then using that awareness to guide subsequent behaviors.

Consequences of bias

Bias impacts individual understanding of the world, the self, and how the self navigates the world – our schemas. These perceptions may impact elements of identity or characterological elements that influence the likelihood of behaving in one way versus another.

Bias should be assumed as a potentially influential factor in any human endeavor. Sometimes bias develops for an explanation after hearing it in childhood and then invoking that explanation for years. Even after seeing the evidence against that bias, our initial explanations are difficult to replace with ones better supported by evidence because we remain anchored to that initial knowledge. Adding a personal emotional attachment to an erroneous explanation makes replacing it even more difficult. Scientists can have emotional attachments to particular explanations of phenomena, especially their own explanations. Then, it becomes easy to selectively block out or undervalue evidence that modifies or contradicts the favored explanation (also known as confirmation bias).

Self-assessment, an example of long-standing bias in behavioral science

As noted in the introduction, this blog series focuses on our team’s work related to self-assessment. Our findings countered results from scores of researchers who replicated and verified the testing done in a seminal paper by Kruger and Dunning (1999). Their research asserted that most people were overconfident about their abilities, and the least competent people had the most overly optimistic perceptions of their competence. Researchers later named the phenomenon the “Dunning-Kruger effect,” and the public frequently deployed “the effect” as a label to disparage targeted groups as incompetent. “The effect” held attraction because it seemed logical that people who lacked competence also lacked the skills needed to recognize their deficits. Quite simply, people wanted to believe it, and replication created a consensus with high confidence in concluding that people, in general, cannot accurately self-assess.

While a few researchers did warn about likely weaknesses in the seminal paper, most behavioral scientists selectively ignored the warnings and repeatedly employed the original methodology. This trend of replication continued in peer-reviewed behavioral science publications through at least 2021.

Fortunately, the robust information storage and retrieval system that characterizes the metadiscipline of science (which is a characteristic distinguishing science from technology as ways of knowing) makes it possible to challenge a bias established in one discipline by researchers from another. Through publications and open-access databases, the arguments that challenge an established bias then become available. In this case, the validity of “the effect” resided mainly in mathematical arguments and not, as presumed, arguments that resided solely within the expertise of behavioral scientists.

No mathematics journal had ever hosted a review of the numeracy of the arguments that established and perpetuated belief in "the effect." However, mathematics journals offered the benefit of reviewers who specialized in quantitative reasoning and were not emotionally attached to any consensus established in behavioral science journals. These reviewers agreed that the long-standing arguments supporting the Dunning-Kruger effect were mathematically flawed.

In 2016 and 2017, Numeracy published two articles from our group that detailed the mathematical arguments that established the Dunning-Kruger effect conclusions and why these arguments are untenable. When examined by methods the mathematics reviewers verified as valid, our data indicated that people were generally good at self-assessing their competence and confirmed that there were no marked tendencies toward overconfidence. Experts and novices proved as likely to underestimate their abilities as to overestimate them. Further, the percentage of those who egregiously overestimated their abilities was small, in the range of about 5% to 6% of participants. However, our findings confirmed a vital conclusion of Kruger and Dunning (1999): experts self-assess better than novices (variance decreases as expertise increases), and self-assessment accuracy is attainable through training and practice.

By 2021, the information released in Numeracy began to penetrate the behavioral science journals. This blog series, our earlier posts on this site, and archived presentations to various audiences (e.g., the National Numeracy Network, the Geological Society of America) further broadened awareness of our findings.

Interim takeaways

Humans construct their learning from mentally processing life experiences. During such processing, we simultaneously construct some misconceptions and biases. The habit of drawing on a misconception or bias to explain phenomena ingrains it and makes it difficult to replace with correct reasoning. Affective attachments to any bias make overcoming the bias extremely challenging, even for the most accomplished scholars.

It is essential to realize that we can reduce bias by employing metacognition to recognize bias that originates within us as individuals and bias that originates from or is encouraged by groups. In the case above, we explained the bias within the behavioral science disciplines by showing how repeatedly mistaking mathematical artifacts for products of human behavior produced a consensus that held the understanding of self-assessment captive for over two decades.

Metacognitive self-assessment seems necessary for initially knowing self and later for recognizing one’s own personal biases. Self-assessment accuracy is valuable in using available evidence well and reducing the opportunity for bias to hijack our ability to reason. Developing better self-assessment accuracy appears to be a very worthy objective of becoming educated.


Introduction: Why self-assessment matters and how we determined its validity 

By Ed Nuhfer, Guest Editor, California State University (retired)

There are few exercises of thinking more metacognitive than self-assessment. For over twenty years, behavioral scientists accepted that the "Dunning-Kruger effect," which portrays most people as "unskilled and unaware of it," correctly described the general nature of human self-assessment. Only people with significant expertise in a topic were supposedly capable of self-assessing accurately, while those with the least expertise held highly overinflated views of their abilities.

The authors of this guest series have engaged in a collaborative effort to understand self-assessment for over a decade. They documented how the “Dunning-Kruger effect,” from its start, rested on specious mathematical arguments. Unlike what the “effect” asserts, most people do not hold overly inflated views of their competence, regardless of their level of expertise. We summarized some of our peer-reviewed work in earlier articles in “Improve with Metacognition (IwM).” These are discoverable by using “Dunning-Kruger effect” in IwM’s search window. 

Confirming that people, in general, are capable of self-assessing their competence affirms the validity of self-assessment measures. The measures inform efforts in guiding students to improve their self-assessment accuracy. 

This introduction presents commonalities that unify the series’ entries to follow. In the entries, we hotlink the references available as open-source within the blogs’ text and place all other references cited at the end. 

Why self-assessment matters

After an educator becomes aware of metacognition's importance, teaching practice should evolve beyond finding the best pedagogical techniques for teaching content and assessing student learning. The "place beyond" focuses on teaching students to develop a personal association with content as a basis for understanding self and exercising higher-order thinking. Capturing, in a written teaching/learning philosophy, how content expertise and understanding of self develop together expedites achieving both. Self-assessment may be the most valuable of all the varieties of metacognition that we employ to deepen our understanding.

Visualization is conducive to connecting the essential themes in this series of blogs that stress becoming better educated through self-assessment. Figure 1 depicts the role and value of self-assessment from birth, at the top of the figure, to becoming a competent, autonomous lifelong learner by graduation from college, at the bottom.

diagram illustrating components that come together to promote life-long learning: choices & effort through experiences; self-assessment; self-assessment accuracy; self-efficacy; self-regulation

Figure 1. Relationship of self-assessment to developing self-regulation in learning. 

Let us walk through this figure, beginning with early life Stage #1 at the top. This stage occurs throughout the K-12 years, when our home, local communities, and schools provide the opportunities for choices and efforts that lead to experiences that prepare us to learn. In studies of Stage 1, John A. Ross made the vital distinction between self-assessment (estimating immediate competence to meet a challenge) and self-efficacy (perceiving one’s personal capacity to acquire competence through future learning). Developing healthy self-efficacy requires considerable practice in self-assessment to develop consistent self-assessment accuracy.

Stage 1 is a time that confers much inequity of privilege. Growing up in a home with a college-educated parent, attending schools that support rich opportunities taught in one’s native language, and living in a community of peers from homes of the well-educated provide choices, opportunities, and experiences relevant to preparing for higher education. Over 17 or 18 years, these relevant self-assessments sum to significant advantages for those living in privilege when they enter college. 

However, these early-stage self-assessments occur by chance. The one-directional black arrows through Stage 2 communicate that nearly all of these self-assessments occur without any intentional feedback from a mentor to deliberately improve self-assessment accuracy. Sadly, this absence of feedback continues for nearly all students in college-level learning too. As a result, higher education largely fails to mitigate the inequities of privilege established before college.

The red two-directional arrows at Stage 3 begin what the guest editor and authors of this series advocate as a very different kind of educating from that commonly practiced in American educational institutions. We believe education could and should provide self-assessments by design, hundreds in each course, all followed by prompt feedback, using disciplinary content to intentionally improve self-assessment accuracy. Prompt feedback begins to allow the internal calibration needed to improve self-assessment accuracy (Stage #4).

One reason to deliberately incorporate self-assessment practice and feedback is to educate for social justice. Our work indicates that strengthening students' self-assessment accuracy can enable the healthy self-efficacy needed to succeed in the kinds of thinking and professions that require a college education, and thus make up for the years of accumulated, relevant self-assessments missing from the backgrounds of the less privileged.

By encouraging attention to self-assessment accuracy, we seek to develop students' felt awareness of surface learning changing toward the higher competence of deep understanding (Stage #5). Awareness of the feeling that accompanies deep understanding enables better judgment of when one has adequately prepared for a test or produced an assignment of high enough quality to submit.

People attain Stage #6, self-regulation, when they understand how they learn, can articulate it, and can begin to coach others on learning through effort, use of available resources, and accurate self-assessment. At that stage, a person has not only developed the capacity for lifelong learning but has also developed the capacity to spread good habits of mind by mentoring others. Thus the arrows on each side of Figure 1 lead back to the top and signify both the reflection needed to realize how one's privileges were relevant to one's learning success and the cycling of that awareness to a younger generation in home, school, and community.

A critical point to recognize is that programs that do not develop students’ self-assessment accuracy are less likely to produce graduates with healthy self-efficacy or the capacity for lifelong learning than programs that do. We should not just be training people to grow in content skills and expertise but also educating them to grow in knowing themselves. The authors of this series have engaged for years in designing and doing such educating.

The common basis of investigations

The aspirations expressed above have a basis in hard data from assessing the science literacy of over 30,000 students and “paired measures” on about 9,000 students with peer-reviewed validated instruments. These paired measures allowed us to compare self-assessed competence ratings on a task and actual performance measures of competence on that same task. 

Knowledge surveys serve as the primary tool through which we can give “…self-assessments by design, hundreds in each course all followed by prompt feedback.” Well-designed knowledge surveys develop each concept with detailed challenges that align well with the assessment of actual mastery of the concept. Ratings (measures of self-assessed competence) expressed on knowledge surveys, and scores (measures of demonstrated competence) expressed on tests and assignments are scaled from 0 to 100 percentage points and are directly comparable.

When the difference between the paired measures is zero, there is zero error in self-assessment. When the difference (self-assessed minus demonstrated) is a positive number, the participant tends toward overconfidence. When the difference is negative, the participant has a tendency toward under-confidence.
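Because ratings and scores share the same 0 to 100 scale, the error computation described above is just a signed difference. A minimal sketch (the paired numbers below are purely illustrative, not data from our studies):

```python
# Each pair: (self-assessed rating, demonstrated score), both expressed
# on a 0-100 percentage-point scale, so they are directly comparable.
paired_measures = [(85, 78), (60, 60), (42, 55)]

def self_assessment_error(rating, score):
    """Signed error: positive -> overconfident, negative -> underconfident,
    zero -> perfectly accurate self-assessment."""
    return rating - score

for rating, score in paired_measures:
    err = self_assessment_error(rating, score)
    label = ("overconfident" if err > 0
             else "underconfident" if err < 0
             else "accurate")
    print(f"rating {rating}, score {score}: error {err:+d} ({label})")
```

Averaging these signed errors across many paired measures reveals a group's overall tendency, while the spread of the errors reflects self-assessment accuracy.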

In the studies that established the validity of self-assessment, our demonstrated competence data came mainly from a validated instrument, the Science Literacy Concept Inventory (SLCI). Our self-assessed competence data came from knowledge surveys and global single queries tightly aligned with the SLCI. Our team members incorporate self-created knowledge surveys of course content into their higher education courses. Knowledge surveys have proven to be powerful research tools and classroom tools for developing self-assessment accuracy.

Summary overview of this blog series

IwM is one of the few places where the connection between bias and metacognition has directly been addressed (e.g., see a fine entry by Dana Melone). The initial two entries of this series will address metacognitive self-assessment’s relation to the concept of bias. 

Later contributions to this series consider privilege and understanding the roles of affect, self-assessment, and metacognition when educating to mitigate the disadvantages of lesser privilege. Other entries will explore the connection between self-assessment, participant use of feedback, mindset, and metacognition’s role in supporting the development of a growth mindset. Near the end of this series, we will address knowledge surveys, the instruments that incorporate the disciplinary content of any college course to improve learning and develop self-assessment accuracy. 

We will conclude with a final wrap-up entry of this series to aid readers’ awareness that what students should “think about” when they “think about thinking” ought to provide a map for reaching a deeper understanding of what it means to become educated and to acquire the capacity for lifelong learning.


Short-takes: Metacognition Ah-Ha Moments

Spring 2022: We asked folks, "What is your favorite metacognition 'ah ha' moment that a student shared with you or that you've experienced in the classroom?" Here are the responses:

  • Randy Laist, “Stray Cats and Invisible Prejudices: A Metacognition Short-Take”
  • Tara Beziat, “Is the driver really that far away?”
  • Jennifer McCabe, “The Envelope, Please”
  • Ritamarie Hensley, “Teaching is teaching at all levels and people are people at all ages”
  • Antonio Gutierrez de Blume, “Culture influences metacognitive phenomena”
  • Kathleen Murray, “Metacognitive Thinking: A Mechanism for Change”
  • Lindsay Byer, “Metacognition: Bringing Our Different Skills to the DEI Table”
  • Gina R. Evers, “Vulnerability as a Connector”
  • Ludmila Smirnova, “The Power of “Ah-ha Moments” in Collaboration”
  • Marie-Therese C. Sulit, “Metacognition with Faith, Trust, and Joy in the Collaborative Process”
  • Sonya Abbye Taylor, “What the World Needs Now: The Power of Collaboration”


Stray Cats and Invisible Prejudices: A Metacognition Short-Take

by Randy Laist, University of Bridgeport

We had reached the point in our freshman writing class where the students described the topics they were choosing for their research papers.  Samara had a brilliant idea to conduct research inspired by the stray cats she had seen on campus.  After praising the originality and perceptiveness that led her to propose this line of inquiry, I casually editorialized regarding what I thought everyone knew about stray cats: that they were “super-predators” that menaced birds and squirrels, laying waste to backyard ecologies.  The other students followed my comments with their own observations and questions about stray cats, but I noticed that Samara seemed somehow crestfallen and remote.

The following week, we discussed the progress we had been making with our research, and, when it was Samara’s turn, she presented a catalog of evidence demonstrating that the stereotype of cats as super-predators is in fact completely erroneous.  Stray cats, she explained, rarely eat birds and actually play an important role in controlling rodent populations, especially in cities.  I couldn’t help but feel that she was directing this argument at me personally.  

The perspective Samara supported through her research completely revolutionized what I thought I knew about stray cats, but, more fundamentally, her research confronted me with the invisibility of my own prejudices. So much of our lives and our thoughts consists of background "truths" that we acquire somehow at some point and that become an inert part of our mental architecture. They remain unquestioned, yet they exert a constant influence on our thoughts and behavior in ways we rarely recognize. Samara's independent thinking, the product of both her humanitarian principles and her academic inquiry, made me realize that I had been willing to believe terrible things about cats, even though I am myself a loving cat owner. The experience made me feel more compassionate toward stray cats, more wary of my own perceived truths, and more appreciative of the role that academic research, observation-based inquiry, and interpersonal dialogue can play in challenging human beings to reprogram themselves for greater humility and empathy.

 

Is the driver really that far away?

by Tara Beziat, Auburn University at Montgomery

I had two female students who were great friends and happened to be in the same class. One day while they were driving home from class, one wondered aloud whether a sign on a truck was correct. It noted that the driver was some number of feet away. As they drove, they contemplated whether the information on the truck was accurate. Something just didn't seem right. They started to calculate the length of the truck. They realized they were using concepts we had learned in class that day to figure this "problem" out. They realized they were using critical thinking skills because they were objectively evaluating the information presented. And as they recognized what they were doing, they were also being metacognitive, thinking about their thinking as they drove down the road.

 

The Envelope, Please

by Jennifer McCabe, Goucher College

In my Human Learning and Memory course, my favorite metacognition “ah ha” moment comes during the final class period, when I return the sealed envelopes containing students’ first-day-of-class memories. On the first day, they record details about everything they have done in the past 24 hours, then mark routine (R) or novel (N) next to each event. They seal the written memories in an envelope, on which they write their names, and submit. (I promise to not open them.) During the final class, they try to remember as much as they can about that first day. Once they get started writing, I brandish the envelopes, at which point there are more than a few reactions along the lines of, “I totally forgot we did that!” and “I have no idea what I wrote!” As they are distributed, I suggest that just seeing the envelope can be an effective retrieval cue. Finally, the big metacognitive moment – opening the envelopes to discover what they wrote that first day! The reactions at this point are mostly along the lines of disbelief that they wrote so much more than they can remember now. This leads to a discussion of transience, and how they now have very personal evidence of how much we forget about our own lives. We also discuss episodic versus semantic elements of their memories, and whether routine or novel elements tended to be remembered best. I conclude with anti-transience strategies, such as keeping a journal or using social media to document life events.

 

Teaching is teaching at all levels and people are people at all ages

by Ritamarie Hensley, Simmons University

After teaching at the high school level for many years, I knew students didn't read the directions, finish assignments on time, or know how to write a coherent paragraph. Of course, these characteristics didn't describe all of my students, but I often encountered these issues from year to year. So when I began teaching doctoral students for the first time, I assumed I would not need to worry about them turning in late papers, writing illogical paragraphs, or ignoring my instructions.

Hence the aha moment. 

Students are students and no matter what their age or level of education, they need me. They need me to teach the basics of writing a coherent paragraph. They need me to remind them when a paper is due. And, they need me to point out the directions… again. But, that’s okay. It’s what I do. Teaching is teaching at all levels and people are people at all ages. Whether it’s earning a high school diploma or a terminal degree, people are busy and human, so they need their instructors to help them attain their goals, even if that means reminding them of the directions one more time. 

 

Culture influences metacognitive phenomena

by Antonio Gutierrez de Blume, Georgia Southern University

What is “metacognition”? What does it mean to experience or become aware of one’s own thoughts? Since John Flavell coined the term in 1979, metacognition has been colloquially understood as thinking about one’s own thinking or taking one’s own thoughts as the object of cognition. However, this definition is far from complete because, among other things, it assumes, without actual evidence, that metacognition is a universal construct that is consistently experienced by all people. Nevertheless, I began to think more deeply on this matter and decided it was time to empirically examine the homogeneity or heterogeneity of metacognition across cultures and languages of the world rather than make wholesale assumptions.

In a recent study with 366 individuals across four countries (China, Colombia, Spain, US) and three languages (Chinese, English, Spanish), a group of colleagues and I investigated the influence of culture on metacognitive awareness (subjective) and objective metacognitive monitoring accuracy. We discovered, much to our surprise, that metacognition is not a homogenous concept or experience. In other words, metacognitive phenomena are understood and experienced differently as a function of the cultural norms and expectations in which one is reared. Imagine that! Future qualitative research will hopefully help us understand why and how culture influences metacognitive phenomena, including among indigenous populations of the world.

If you had asked me 5 years ago where my research on metacognition would take me, I would have never believed it would be down this path. However, I am excited about the prospect of continuing down the path of intercultural/multicultural investigation of metacognition!

 

Metacognitive Thinking: A Mechanism for Change

by Kathleen Murray, Mount Saint Mary College

Chick (2013) defines metacognition as “the processes used to plan, monitor, and assess one’s understanding and performance,” pushing us to reflect on and evaluate our own personal strengths and limitations. Metacognition allows us to recognize the limitations of our knowledge and encourages us to figure out how to expand our ways of thinking and extend our abilities. There is no better example of this principle in action than Mount Saint Mary College’s DEI Dream Team: a group of individuals from all different walks of life united in our common desire to expand our perspectives and make a difference in the community. Inspired by Tell Me Who You Are, our Dream Team created a series of workshops designed to promote reflection and educate both faculty and staff on the importance of diversity, equity, and inclusion.

Our moniker, “Dream Team,” was, in fact, coined after an “ah-ha” moment when I first recognized the unique nature of this collaboration. These team members pushed me to expand my way of thinking and facilitated my growth as both an individual and a future educator. As I sit here and reflect on the success of these workshops, I am overwhelmed by the power of metacognitive thinking and inspired by the difference that our group could make in the campus community. After experiencing this wonderful collaboration rooted in parity, I now understand that metacognition is more than a pedagogical process or professional concept. It is a powerful tool that can be used to inspire activism and change the world.

Reference:

Chick, N. (2013). Metacognition. Vanderbilt University Center for Teaching. Retrieved January 22, 2022, from https://cft.vanderbilt.edu/guides-sub-pages/metacognition/.

 

Metacognition: Bringing Our Different Skills to the DEI Table 

by Lindsay Byer, Mount Saint Mary College

I remember being in one of our virtual planning meetings when my “Ah-ha” moment clicked. I was surrounded by women who were, and still are, inspiring in their own right. There are six of us: two undergraduate students, including myself, two Education professors, one professor from Arts and Letters, and the Director of the Writing Center. Each of us brought different skills to the metaphorical table along with different perspectives on how to best achieve our goal of spreading awareness of diversity, equity, and inclusion on our campus. I was nervous, to say the least, but I quickly realized I was being heard! My opinions carried the same weight as everyone else’s in that meeting. This was an atypical experience for me in working with faculty.

I kicked off the forum series with a workshop focusing on the individual’s identity, and it was met with collective positivity. Knowing the importance of creating a safe space for all participants, I used vignettes from Tell Me Who You Are to introduce terms, like intersectionality, and I closed with the opportunity to create “I am” poems. At some point during our planning process, and after Katie coined it, we referred to ourselves as the Dream Team, which couldn’t have suited us more. We were able to collaborate with respect toward one another, support one another, and allow one another to lead. I learned so much from working with these women, and I am proud to say I am a member of the Dream Team.

 

Vulnerability as a Connector

by Gina R. Evers, Mount Saint Mary College

I have worked in academia my whole career, and I am cognizant of the invisible lines of power that have traditionally defined relationships among students, professors, and administrators. In our collaboration, however, we were able to create genuine, equal relationships across these boundaries. And it seemed to happen without intentional effort. How? What allowed this “committee” to disregard our institutionally prescribed guardrails with ease?

When facilitating “Writing Through a Racial Reckoning,” I included a read-aloud of Ibram X. Kendi’s Antiracist Baby. Dr. Kendi encourages us to “confess” after realizing we may have done something racist. I remember the burn of shame I felt the first time I encountered this methodology. But as an educator, I knew that if I was experiencing such a strong reaction, it was likely some of my students were, too. I shared and acknowledged that talking about mistakes and questions moves us forward while silence upholds racist structures. By making myself vulnerable, I was able to create a brave space for workshop participants and invite folks to share on genuine and equal terms.

Our “committee,” composed of people occupying different institutional roles with inherently varying degrees of power, was able to achieve parity in our collaboration because we were united in DEI work. Discussions of our common text — sharing our vulnerabilities around the content and genuinely grappling with matters of identity — connected us as equal members of a team, poised to rise together.

 

The Power of “Ah-ha Moments” in Collaboration

by Ludmila Smirnova, Mount Saint Mary College

One of the most memorable experiences and pedagogical discoveries of the last year was the work of our Dream Team on a college-wide initiative around issues of diversity, equity, and inclusion. The first “ah-ha moment” was when a spontaneous group of diverse but like-minded people gathered to collaborate on the design and implementation of the project. It was amazing to interact with teacher candidates and faculty from other divisions and offices outside of the traditional college classroom. What united us was our devotion to the importance of DEI on campus and the desire to bring this project to a successful conclusion. The college chose an excellent book, Tell Me Who You Are, for a common read, and the team voluntarily met to discuss and implement a series of workshops to engage college students and faculty in activities that centered the content of the book. That’s the power of metacognition—being aware of how to plan events, choosing strategies for implementing the ideas, and reflecting on them collectively. Our students experienced “ah-ha moments” a number of times throughout this project. It was captivating to observe how mesmerized our students were while communicating with their faculty, and how grateful they were to see that their voices counted and that they were able to contribute their ideas to the project as equal partners! But the most astonishing outcome of this project was our team collaboration on the book chapter that allowed us to reflect on this experience in a series of personal vignettes.

 

Metacognition with Faith, Trust, and Joy in the Collaborative Process

by Marie-Therese C. Sulit, Mount Saint Mary College

As I choose a favorite metacognition moment, two simultaneous experiences capture that sense of “ah-ha!” for me: when we first came together as a planning team and when we met for our penultimate writing workshop for our book chapter. In Tell Me Who You Are, I found the plethora of vignettes so well rendered, with storytelling so ripe and rich, to address our DEI initiative that I envisioned a series of “Talk Story” and “Talk Pedagogy” workshops. Our Dream Team came together to answer this call, creating and delivering a forum series for our campus. While not easy, it certainly seemed so, as we almost intuitively balanced and counter-balanced one another through every step of this process: preparation, execution, reflection. This collaboration then brought a book chapter to life, requiring a series of writing workshops every Friday afternoon throughout the fall semester. In contradistinction to the forum series, the nature of and expectations for publication moved us in different directions with a deadline after the Thanksgiving holiday. All of us, in our respective roles, were likewise closing our semesters. So important to us, this commitment stretched us in ways that we had not been stretched: it was all in the timing. However, with this all-female cast, the mutual reciprocity of our collaboration–the teaching/learning, problem-solving, and shared decision-making–remained the same and worked especially well through our commitment to metacognition. In so doing, we also affirmed our sense of faith, trust, and joy in the process and with one another.

 

What the World Needs Now: The Power of Collaboration

by Sonya Abbye Taylor, Mount Saint Mary College

I saw pride in my students’ expressions when they completed a beautifully executed collaborative course project. They agreed each had contributed to the success of the project: they were joyful! I was brought back to a moment the semester before, when I felt that same joy and awe at an Open Mic session, the culminating event of a collaborative venture I worked on with students and faculty. We created a series of workshops addressing diversity, equity, and inclusion. While the workshops were excellent, it was the collaborative process that struck me as most meaningful, just as it was for those students. Wide in age range, from different ethnicities, backgrounds, and frames of reference, our diverse group—two undergraduates, two Education faculty, one Arts and Letters faculty member, and the Director of the Writing Center—functioned as true collaborators. We had parity, as everyone’s ideas weighed equally; we listened actively and empathetically to one another; and we communicated effectively. Committed to a common goal, and having achieved it beyond our expectations, I knew at that “ah ha” moment something very special had happened. I knew that, without having been taught the aforementioned skills essential for effective collaboration, we had been successful. I realized the power of collaboration and bemoaned the lack of opportunities to practice collaboration in schools. When we teach collaboration, we teach skills for life. I learn and contribute when I collaborate; I want my students to do the same. People, willing and able to collaborate, can change the world.



Metacognitive Discourse—Final Course Presentations that Foster Campus Conversations about Learning

by Gina Burkart, Ed.D., Learning Specialist, Clarke University

Prior to the pandemic and now since returning to campus, there has been a shift in students’ use of group study and their ability to learn and work in groups. When I began my position as Learning Specialist 10 years ago, it was not uncommon to find 30 students at group study sessions at 9 pm. Now, one group study session remains, and only 2-3 students might attend (unless the sessions are required by athletic coaches for their teams). Colleagues have also shared in conversations that they find it problematic that students avoid interacting with one another in the classroom and are not able to work and learn in physical groups. Further, in my learning resource center’s year-end reports, data have shown a steady decline in group study attendance and a steady increase in students relying on support from me, the Learning Specialist. They want to work one-on-one with adults. In conversations with students and in online discussion blogs, students have shared that a lack of inter- and intrapersonal communication skills affects their ability to work with their peers. In simple terms—overuse of electronic communication before and during the pandemic has left them unable to communicate and interact with their classmates. This is problematic for a variety of reasons. In terms of learning, the pedagogy is clear—learning is social (Bandura, 1977).

An Assignment to Reinforce Social Learning and Metacognition

In response, to reinforce social learning and metacognition, this semester I changed the final assessment for the College Study Strategy course to a final presentation that embedded metacognition and social discourse. The College Study Strategy course is metacognitive in nature in that it begins by having students reflect on their prior learning experiences, assess themselves and their skills, and set goals for the semester. It is a 1-credit course open to any student below 90 credits and can be retaken. However, in the second semester, it is almost entirely filled with students placed on academic probation or warning who are required to take the course. The curriculum includes theorists such as Marzano (2001), Bandura (1994), Duckworth (2013), Dweck (2014), and Covey (2004) and requires students to begin applying new motivation, emotional intelligence, learning, reading, time management, study, note-taking, and test-taking strategies to their courses. In the past, students created a portfolio that demonstrated the use of their new strategies and presented their growth to me in midterm and final conferences. This year, I wanted them to share their growth with more than me—I wanted them to share it with the entire community.

By changing the final project to be more outward-facing, the assignment would still be metacognitive in nature—requiring students to reflect on past learning, show how they adjusted their learning and applied new methods and strategies, share in conversation how they made those adjustments, and finally explain how they will continue to apply strategies and grow in the future with their new knowledge. Again, it would require students to share with more than me. They would need to envision a larger audience and its needs—the entire campus community (administrators, students, athletic coaches, staff, professors, recruits)—and create a presentation that could be adjusted to the audience. They would practice inter- and intrapersonal skills as they adjusted their presentation over the course of two hours while they remained at a station in the library, prepared to share their presentation as members of the campus community approached. This also allowed the campus community to benefit from the students’ new knowledge and growth over the semester. And, being on a small scale, it re-introduced students to the art of in-person, face-to-face conversation and the value of seeking information from one another—something that has been eroding due to heavy use of electronic communication and the isolated learning that occurred during the pandemic.

Students were introduced to this assignment in week one of the semester. They were told that in week 6 they would choose any topic from the course curriculum that they felt they needed to focus on more intently based on their semester goals. After choosing their topic (e.g., motivation, reading, procrastination, time management, studying, growth mindset), they would research a different article each week related to that topic (weeks 6-12) and apply the new critical reading strategy taught in class to create journal entries that would be used to prepare content for the final presentation. In week 14 or 15, they would present at a table in the library (poster-session style), during a two-hour block of their choosing, to the campus community about their topic. The presentation needed to include some type of visual, and the content needed to include all of the following metacognitive information about the topic:

  • past struggles
  • reasons for choosing the topic
  • strategies learned in class
  • information learned in their research
  • recommendations for other struggling students
  • strategies for continued growth

Positive Impact and Take-Aways

While students were nervous and hesitant prior to the presentations, during and after the presentations they admitted to having fun sharing their growth and learning. Staff, faculty, and students were appreciative of the presentations and made a point of attending. Some future students/recruits even attended as they were touring. Not surprisingly, most students chose to present about motivation, time management, and procrastination. A few students chose to present about growth mindset, Bloom’s Taxonomy as a study strategy, and reading. A surprising take-away was that, in the metacognitive process of the presentation, many students connected improved reading strategies to increased motivation and a reduction in procrastination.

While observing the presentations, I was encouraged to see students learn to adapt their presentations as people approached. Since they were stationed at a table for two hours, they needed to present the material many times to different types of audiences—and they had to field questions. As they presented and re-presented, they learned how to interact and present differently based on the needs of the audience. This adaptation required metacognition and rhetorical analysis, and it built inter- and intrapersonal communication skills. The assignment also came at a good time in the semester, as students were authentically seeking many of these strategies and skills to prepare for finals, conclude the semester, and look ahead to the next one. Many of the presenters had friends, team members, coaches, and faculty come to hear their presentations (as I had advertised the presentations to the campus in advance). In conclusion, metacognitive presentations that engage the entire campus community in discourse about learning may be a helpful step toward rebuilding learning communities post-pandemic. Next semester, I will continue this assignment. I am also working on embedding group reading labs into targeted courses to improve learning and motivation and reduce procrastination in the classroom.

 


Using metacognition to move from talking the equity talk, to walking the equity walk

Conversations around equity, diversity, and inclusion are gaining traction on college campuses in the United States. In many cases, these conversations are overdue, so a willingness to even have the talk represents progress. But how can campuses move from talking the equity talk to walking the equity walk? How can the buzz be transformed into a breakthrough? This post argues that a metacognitive approach is essential to taking steps in more equitable directions.

Becoming more equitable is a process. As with any process, metacognition encourages us to consider what’s working, what’s not, and how we might make adjustments to improve how we are living that process. If college campuses genuinely want to travel down more equitable roads, then they need to articulate their equity goals, map their route, and remove obstacles preventing them from reaching that destination. And if along the way, campuses find that their plans aren’t working, then metacognition can point the way towards a course correction.

A guide and the need for collective metacognition


In From Equity Talk to Equity Walk: Expanding Practitioner Knowledge for Racial Justice in Higher Education, Tia Brown McNair, Estela Mara Bensimon, and Lindsey Malcolm-Piqueux (2020) offer guidance to campuses wanting to do more than just talk. They argue, for example, that campuses need a shared understanding of equity and diversity. College mission statements are a start, but their lofty words define aspirations, not a path. Big words will never amount to more than talk unless a campus can figure out how to live into those big ideas. It is one thing to pepper conversation with words like ‘diversity,’ ‘equity,’ and ‘inclusion.’ It’s another thing altogether to develop a shared campus-wide understanding of these ideas and how they need to be practiced in the day-to-day life of the campus. If institutional change requires shared understanding, then I argue that college campuses need collective metacognitive moments.

Metacognition urges us to establish goals and continually check in on our progress toward them. Taking a metacognitive approach to institutional change will require that campuses articulate their equity goals with a shared understanding of the underlying terms, map a plan to work toward those aspirations, monitor their progress, and make adjustments when appropriate.

  • What are the shared goals around equity? What might it mean to live into these goals in concrete terms?
  • Are these goals widely shared? If not, why not?
  • How can members of the campus community contribute and see themselves in their contribution?

Taking a metacognitive approach can also help locate the “pain points.”

  • Is the lack of progress owing to a lack of shared understanding, a lack of planning, or well-intentioned individuals working at cross-purposes?
  • What can be done to get efforts back on track?

As with any process, metacognitive check-ins around what’s working and what’s not working can point to areas for improvement. Metacognition, therefore, can keep a college campus heading down the equity path.

Progress requires being aware of barriers and working to remove them

Being concrete about the move from talking the equity talk to walking the equity walk requires removing barriers to progress. According to McNair, Bensimon, and Malcolm-Piqueux, barriers include individuals claiming not to see race or substituting class issues for race. Taking a metacognitive approach could encourage individuals to get curious about why they claim not to see race or feel more comfortable talking about economic issues. Why might someone be reluctant to consider the extent of their white privilege? Why might a campus be reluctant to acknowledge the reality of institutional racism and its implications?

Taking a metacognitive approach to such questions can honor the fact that talking about inequity can be awkward and uncomfortable. Yet, metacognition also encourages us to ask whether things are working and whether we might need to make adjustments. Walking the equity walk requires asking how white privilege and institutional racism might be inadvertently influencing campus policies and the delivery of instruction. Taking a metacognitive approach encourages campuses to look for ways to make adjustments. Awareness and adjustments are precisely what is needed in the move from equity talk to the equity walk.

By way of illustration, McNair, Bensimon, and Malcolm-Piqueux call on campuses to stop employing euphemisms, such as ‘underrepresented minorities.’ In their view, campus administration, individual departments, and instructors should disaggregate data instead. The thought is that equity issues can be addressed only if they are named. If, for example, the graduation rate of African-American males is lower than that of other groups, then walking the equity walk requires understanding why and looking for ways to help. If first-generation students are stopping out after their second semester (or their fourth), then campuses that are aware of this reality are positioned to make the necessary adjustments.

Administrators should look at institution-wide patterns to see if institutional protocols are impediments to student success. Individual departments should review student progress across programs and within particular courses to see how they might better support student learning. And individual instructors should take a careful look at when, where, and how students struggle with particular assignments, skills, and content. It may turn out that all students are equally successful across all areas. It might also be the case that patterns emerge which indicate that some groups of students could use more support in certain identifiable areas.

A metacognitive approach to institutional change requires that universities, academic departments, and individual instructors articulate their equity goals, track progress, and make adjustments where appropriate. Disaggregating data at all levels (institution-wide, by department, individual courses) can uncover inequities. Identifying those obstacles can be a step towards making the necessary adjustments. This can, in turn, help campuses walk the equity walk.

Improve with metacognition

Taking a metacognitive approach to process improvements encourages individuals (and institutions) to get curious about what works and where adjustments need to be made. It encourages them to continuously assess and use that assessment to make additional adjustments along the way. Colleges and universities have a long way to go if they are to address the realities of systemic inequities. But learning to walk the equity walk is a process. If we know anything about metacognition, we know that it provides us with the resources to offer process improvements. So, I argue, metacognition is essential to learning to move beyond equity talk and actually walking the equity walk.