Metacognition and Mindset for Growth and Success: Part 2 – Documenting Self-Assessment and Mindset as Connected

by Steven Fleisher, California State University
Michael Roberts, DePauw University
Michelle Mason, University of Wyoming
Lauren Scharff, U. S. Air Force Academy
Ed Nuhfer, Guest Editor, California State University (retired)

Measuring self-assessment and categorizing mindset preference both rely on self-reported metacognitive responses, which produce noisy data. Interpreting noisy data poses difficulties and generates peer-reviewed papers with conflicting results. Some published peer-reviewed works question the legitimacy and value of self-assessment and mindset.

Yeager and Dweck (2020) communicate frustration when other scholars deprecate mindset and claim that the mindset under which students pursue their education makes no difference. Indeed, that seems similar to arguing that enjoyment of education and students’ attitudes toward it make no difference in the quality of their education.

We empathize with that frustration. We recall our own when we saw, class after class, that our students were not “unskilled and unaware of it” and reported those observations while a dominant consensus that “students can’t self-assess” proliferated. The fallout from our advocacy in our workplaces (mentioned in Part 2 of the entries on privilege) came with opinions that, since “the empiricists have spoken,” there was no reason for us to study self-assessment further. Nevertheless, we found good reason to do so. Some of our findings might serve as an analogy for demonstrating the value of mindsets despite the criticisms being leveled against them.

How self-assessment research became a study of mindset

In the summer of 2019, the guest editor and the first author of this entry taught two summer workshops on metacognition and learning at CSU Channel Islands to nearly 60 Bridge students about to begin their college experience. We employed a knowledge survey for the weeklong program, and the students also took the paired-measures Science Literacy Concept Inventory (SLCI). Students had the option of furnishing an email address if they wanted a feedback letter. About 20% declined feedback, and their mean score was 14 points lower (significant at the 99.9% confidence level) than those who requested feedback.

In revisiting our national database, we found that every campus revealed a similar significant split in performance. It mattered not whether the institution was open admissions or highly selective; the mean score of the majority who requested feedback (about 75%) was always significantly higher than that of those who declined feedback. We wondered whether the responses served as an unconventional diagnosis of Dweck’s mindset preference.

Conventional mindset diagnosis employs a battery of agree-disagree queries to determine mindset inclination. Co-author Michael Roberts suggested we add a few mindset items to the SLCI, and Steven Fleisher selected three items from Dweck’s survey battery. After a few hundred student participants revealed only a marginal relationship between mindset diagnosed by these items and SLCI scores, Steve increased the number of items to five.

Who operates in fixed, and who operates in growth mindsets?

The personal act of choosing to receive or avoid feedback on a concept inventory offers a delineator for classifying mindset preference that differs from the usual method of doing so through a survey of agree-disagree queries. We compare here the mindset preferences of 1734 undergraduates from ten institutions using (a) feedback choice and (b) the five agree-disagree mindset survey items that are now part of Version 7.1a of the SLCI. That version has been in use for about two years.

We start by comparing the two groups’ demonstrable competence measured by the SLCI. Both methods of sorting participants into fixed or growth mindset preferences confirmed a highly significant (99.9% confidence) greater cognitive competence in the growth mindset disposition (Figure 1A). As shown in the Figure, feedback choice created two groups of fixed and growth mindsets whose mean SLCI competency scores differed by 12 percentage points (ppts). In contrast, the agree-disagree survey items defined the two groups’ means as separated by only 4 ppts. However, the two methods split the student populace differently, with the feedback choice determining that about 20% of the students operated in the fixed mindset. In contrast, the agree-disagree items approach determined that nearly 50% were operating in that mindset.

We next compare the mean self-assessment accuracy of the two mindsets. In a graph, it is easy to compare groups’ mean self-assessment skills by comparing the scatter shown by one standard deviation (1 sigma) above and below each group’s mean (Figure 1B). The scatter of group members’ overestimates and underestimates of their actual scores reveals a group’s developed capacity for self-assessment accuracy. Groups of novices show a larger scatter in their miscalibrations than do groups with better self-assessment skills (see Figure 3 of the resource at this link).

Graphs showing how fixed and growth mindsets relate to SLCI scores, differing based on how mindset is categorized.

Figure 1. A. Comparisons of competence (SLCI scores) of 1734 undergraduates between growth mindset participants (color-coded blue) and fixed mindset participants (color-coded red) as deduced by two methods: (left) agree-disagree survey items and (right) acceptance of or opting out of receiving feedback. B displays the spreads of one standard deviation (1 sigma) in self-assessment miscalibration for the growth (blue) and fixed mindset (red) groups as deduced by the two methods. The thin black line marks a perfect self-assessment rating of 0, above which lie overconfident estimates and below which lie underconfident estimates. The smaller the standard deviation revealed by the height of the rectangles in 1B, the better the group’s ability to self-assess accurately. The differences shown in A (4 and 12 ppts) and B (2.3 and 3.5 ppts) are differences between means.
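For readers who want the 1-sigma comparison in Figure 1B made concrete, here is a minimal sketch. It assumes a one-row-per-participant table with hypothetical column names (slci_score and self_assessed in percent, mindset as a fixed/growth label); it is an illustration, not the authors' analysis code.

```python
# Minimal sketch (hypothetical file and column names, not the authors' code):
# compute each participant's self-assessment miscalibration, then summarize it
# per mindset group as a mean and a 1-sigma spread.
import pandas as pd

df = pd.read_csv("paired_measures.csv")                        # hypothetical file
df["miscalibration"] = df["self_assessed"] - df["slci_score"]  # positive = overconfident

for label, group in df.groupby("mindset"):                     # e.g., "fixed" vs. "growth"
    print(f"{label}: mean = {group['miscalibration'].mean():+.1f} ppts, "
          f"1 sigma = {group['miscalibration'].std():.1f} ppts")
```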

On average, students classified as operating in a growth mindset have better-calibrated self-assessment skills (less spread of over- and underconfidence) than those operating in a fixed mindset by either classification method (Figure 1B). However, the difference between fixed and growth was greater and more statistically significant when mindset was classified by feedback choice (99% confidence) rather than by the agree-disagree questions (95% confidence).

Overall, Figure 1 supports Dweck and others advocating for the value of a growth mindset as an asset to learning. We urge contextual awareness by referring readers to Figure 1 of Part 1 of this two-part thematic blog on self-assessment and mindset. We have demonstrated that choosing to receive or decline feedback is a powerful indicator of cognitive competence and at least a moderate indicator of metacognitive self-assessment skills. Still, classifying people into mindset categories by feedback choice addresses only one of the four tendencies of mindset shown in that Figure. Nevertheless, employing a more focused delineator of mindset preference (e.g., choice of feedback) may help to resolve the contradictory findings reported between mindset type and learning achievement.

At this point, we have developed the connections between self-assessment, mindset, and feedback we believe are most valuable to the readers of the IwM blog. Going deeper is primarily of value to those researching mindset. For them, we include an online link to an Appendix to this Part 2 after the References, and the guest editor offers access to SLCI Version 7.1a to researchers who would like to use it in parallel with their investigations.

Takeaways and future direction

Studies of self-assessment and mindset inform one another. Discovering one’s mindset and gaining self-assessment accuracy require knowing self, and knowing self requires metacognitive reflection. Content learning provides the opportunity to develop understanding of self through practicing self-assessment accuracy and acquiring the feeling of knowing while struggling to master the content. Learning content without using it to know self squanders immense opportunities.

The authors of this entry have nearly completed a separate stand-alone article for a follow-up in IwM that focuses on using metacognitive reflection by instructors and students to develop a growth mindset.

References

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Random House.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487

Yeager, D. S., & Dweck, C. S. (2020). What can be learned from growth mindset controversies? American Psychologist, 75(9), 1269–1284. https://doi.org/10.1037/amp0000794


Metacognition and Mindset for Growth and Success: APPENDIX to Part 2 – Documenting Self-Assessment and Mindset as Connected

by Ed Nuhfer, Guest Editor, California State University (retired)
Steven Fleisher, California State University
Michael Roberts, DePauw University
Michelle Mason, University of Wyoming
Lauren Scharff, U. S. Air Force Academy

This Appendix stresses numeracy and employs a dataset of 1734 participants from ten institutions to produce measures of cognitive competence, self-assessed competence, self-assessment accuracy, and mindset categorization. The database is sufficient to address essential issues introduced in our blogs.

Finding replicable relationships in noisy data requires working with groups drawn from a database collected with instruments proven to produce high-reliability measures (see Figure 10 at this link). If we assemble groups, say, groups of 50 as shown in Figure 1B, we can attenuate the random noise in individuals’ responses (Fig. 1A) and produce a clearer picture of the signal hidden within the noise (Fig. 1B).

graphs showing postdicted self-assessment and SLCI a) individual data and b) group data

Figure 1. Raw data person-by-person on over 9800 participants (Fig. 1A) show a highly significant correlation between measures of actual competence from SLCI scores and postdicted self-assessed competence ratings. Aggregating the data into over 180 groups of 50 (Fig. 1B) reduces random noise and clarifies the relationship.
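As an illustration of that aggregation step, the following minimal sketch assumes a one-row-per-participant table with hypothetical file and column names; it sorts by class rank (the sort key described later for Figure 1B), cuts participants into consecutive groups of 50, and compares the person-level correlation with the correlation of group means.

```python
# Sketch of noise attenuation by grouping (hypothetical file and column names):
# sort participants, cut them into consecutive groups of 50, and correlate the
# group means instead of the individual paired measures.
import pandas as pd

df = pd.read_csv("slci_paired_measures.csv")               # hypothetical file
df = df.sort_values("class_rank").reset_index(drop=True)   # sort key assumed from the Appendix text

df["group"] = df.index // 50                               # consecutive groups of 50
group_means = df.groupby("group")[["slci_score", "self_assessed"]].mean()

print("person-by-person r:", round(df["slci_score"].corr(df["self_assessed"]), 2))
print("groups-of-50 r:    ", round(group_means["slci_score"].corr(group_means["self_assessed"]), 2))
```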

Random noise is not simply an inconvenience. In certain types of graphics, random noise generates patterns that do not intuitively appear random. Researchers easily interpret these noise patterns as products of a signal of human behavior. The “Dunning-Kruger effect” appears to be built on many researchers doing exactly that for over twenty years.

Preventing the confusion of noise with signal requires knowing what randomness looks like. Researchers can achieve this by ensuring that the surveys and test instruments used in any behavioral science study have high reliability and then constructing a simulated dataset by completing these instruments with random-number responses. The simulated population should equal that of the participants in the research study, and the simulated data should be graphed with the same graphics in which researchers intend to present the participants’ data in a publication.
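Here is a sketch of such a simulation, under stated assumptions (a 25-item inventory scored in percent, a 0–100 self-assessment rating, and a population matched to the real study); the random data would then be graphed and grouped exactly as the real data are.

```python
# Sketch: build a same-sized dataset of purely random "responses" and run it
# through the identical grouping pipeline used for the real data. Any apparent
# pattern that survives is an artifact of the graphic, not a behavioral signal.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)
n = 9800                                            # match the real study's population size

sim = pd.DataFrame({
    "slci_score":    rng.integers(0, 26, n) * 4,    # random score on a 25-item inventory, in percent
    "self_assessed": rng.integers(0, 101, n),       # random 0-100 self-assessment rating
})

# Process the random data exactly as the real data: sort, group in 50s, take means.
sim = sim.sort_values("slci_score").reset_index(drop=True)
sim["group"] = sim.index // 50
print(sim.groupby("group")[["slci_score", "self_assessed"]].mean().corr())
```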

The 1734 participants addressed in Parts 1 and 2 of this blog’s theme pair on mindset are part of the larger dataset represented in Figure 1. The number is smaller than 9800 because we only recently added mindset questions. 

The blog entry containing the link to this Appendix showed the two methods of classifying mindset to be consistent in designating a growth mindset as associated with higher scores on cognitive measures and more accurate self-assessments. However, this finding does not directly test how the two classification methods relate to one another. The fact, noted in the blog, that the two methods classified people differently gave reason to anticipate that they might not prove to be directly statistically related.

We need to employ groups to attenuate noise, and ideally, we want large groups with good prospects of a spread of values. We first picked the groups associated with furnishing information about privilege (Table 1) because these are groups large enough to attenuate random noise. Further, the groups displayed highly significant statistical spreads when we looked at self-assessed and demonstrable competence within these categories. Note well: we are not trying to study privilege aspects here. Our objective, for now, is to understand the relationship between mindset defined by agree-disagree items and mindset defined by requests for feedback.

We have aggregated our data in Table 1 from four parameters to yield eight groups of paired measures and are ready to test for relationships. Because we already know the relationship between self-assessed competence and demonstrated competence, we can verify whether our existing dataset of 1734 participants, presented as eight paired-measures groups, is sufficient to recover that known relationship. Looking at self-assessment thus serves as a calibration that helps answer, “How good is our dataset likely to be for distinguishing the unknown relationships we seek about mindset?”


Table 1. Mindset and self-assessment indicators by large groups. The table reveals each group’s mindset composition derived from both survey items and feedback and the populace size of each group.

Figure 2 shows that our dataset in Table 1 proved adequate in capturing the known significant relationship between self-assessed competence and demonstrated competence (Fig. 2A). The fit-line slope and intercept in Figure 2A reproduce the relationship established from much larger amounts of data (Fig. 1 B). However, the dataset did not confirm a significant relationship between the results generated by the two methods of categorizing people into mindsets (Fig. 2B).

In Figure 2B, there is little spread. The plotted points cluster tightly, and the correlation is close to significant. Nevertheless, the points cluster so tightly that we are apprehensive that the linear relationship would replicate in a future study of a different populace. Because we chose categories with a large populace and large spreads, more data entered into these categories probably would not change the relationships in Figure 2A or 2B. More data might bump the correlation in Figure 2B into significance, but this could be more a consequence of the spread of the categories chosen for Table 1 than a product of a tight direct relationship between the two methods employed to categorize mindset. We can resolve this by doing something analogous to producing the graph in Figure 1B above.


Figure 2. Relationships between self-assessed competence and demonstrated competence (A) and growth mindset diagnosed by survey items and requests for feedback (B). The data graphed is from Table 1.

We next place the same participants from Table 1 into different groups and thereby remove the spread advantages conferred by the groups in Table 1. We randomize the participants to get a good mix of the populace from the ten schools, sort the randomized data by class rank to be consistent with the process used to produce Figure 1B and aggregate them into groups of 100 (Table 2).


Table 2. 1700 students are randomized into groups of 100, and the means are shown for four categories for each group.
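A sketch of the regrouping behind Table 2 follows, again with hypothetical file and column names; the two mindset classifications are assumed to be coded 0/1 so that group means give the fraction classified as growth by each method.

```python
# Sketch (hypothetical columns): shuffle participants, sort by class rank, cut
# into groups of 100, and take group means of the four measures compared in
# Figures 2 and 3.
import pandas as pd

df = pd.read_csv("mindset_paired_measures.csv")    # hypothetical file
df = df.sample(frac=1, random_state=1)             # randomize participant order
df = df.sort_values("class_rank").reset_index(drop=True)

df["group"] = df.index // 100                      # groups of 100
cols = ["slci_score", "self_assessed", "growth_by_survey", "growth_by_feedback"]
print(df.groupby("group")[cols].mean())            # one row of group means per group, as in Table 2
```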

The results employing different participant groupings appear in Figure 3. Figure 3A confirms that the different groupings in Table 2 attenuate the spread introduced by the groups in Table 1.


Figure 3. The data graphed is from Table 2. Relationships between self-assessed competence and demonstrated competence appear in (A). In (B), mindset classified by agree-disagree survey items plotted against mindset classified by requesting or opting out of feedback fails to replicate the pattern shown in Figure 2B.

In Figure 3A, the matched pairs of self-assessed competence and demonstrable competence continue to reproduce a consistent line fit that, despite a diminished correlation, still attains significance, as in Figures 1B and 2A.

In contrast, the ability to show replication between the two methods of categorizing mindsets has completely broken down. Figure 3B shows a very different relationship from that displayed in Figure 2B. The direct relationship between the two methods of categorizing mindset proves not to be replicable across different groupings.

For readers who may wish to try different groupings, we have provided the raw dataset used for this Appendix; it can be downloaded from https://profcamp.tripod.com/iwmmindsetblogdata.xls.

Takeaways

The two methods of categorizing mindset, in general, designate a growth mindset as associated with higher scores on tests of cognitive competence and, to a lesser extent, better self-assessment accuracy. However, the two methods do not show a direct relationship with each other. This indicates that the two address different dimensions of the multidimensional character of “mindsets.”


Metacognition and Mindset for Growth and Success: Part 1 – Understanding the Metacognitive Connections between Self-Assessment and Mindset

by Steven Fleisher, California State University
Michael Roberts, DePauw University
Michelle Mason, University of Wyoming
Lauren Scharff, U. S. Air Force Academy
Ed Nuhfer, Guest Editor, California State University (retired)

When I first entered graduate school, I was flourishing. I was a flower in full bloom. My roots were strong with confidence, the supportive light from my advisor gave me motivation, and my funding situation made me finally understand the meaning of “make it rain.” But somewhere along the way, my advisor’s support became only criticism; where there was once warmth, there was now a chill, and the only light I received came from bolts of vindictive denigration. I felt myself slowly beginning to wilt. So, finally, when he told me I did not have what it takes to thrive in academia, that I wasn’t cut out for graduate school, I believed him… and I withered away.                                                                              (actual co-author experience)

schematic of person with band aid and flowers growing who is facing other people
Image by Moondance from Pixabay

After reading the entirety of this two-part blog entry, return and read the shared experience above once more. You should find that you have an increased ability to see the connections there between seven elements: (1) affect, (2) cognitive development, (3) metacognition, (4) self-assessment, (5) feedback, (6) privilege, and (7) mindset. 

The study of self-assessment as a valid component of learning, educating, and understanding opens up fascinating areas of scholarship for new exploration. This entry draws on the same paired-measures research described in the previous blog entries of this series. Here we explain how measuring self-assessment informs understanding of mindset and feedback. Few studies connect self-assessment with mindset, and almost none rest on a sizeable validated data set. 

Mindset, self-assessment, and privilege

Mindset theory proposes that individuals lean toward one of two mindsets (Dweck, 2006) that differ based on internalized beliefs about intelligence, learning, and academics. According to Dweck and others, people fall along a continuum. At one end lies a fixed mindset, defined by a core belief that one’s intelligence and thinking abilities are fixed and that effort cannot change them. At the other end lies a growth mindset, the belief that, through effort, people can expand and improve their abilities to think and perform (Figure 1).

Indeed, a growth mindset has support in the stages of intellectual, ethical, and affective development discovered by Bloom & Krathwohl and William Perry, mentioned earlier in this series. However, mindset theory has evolved into making broader claims, advocating that being in a state of growth mindset also enhances performance in high-stakes functions such as leadership, teaching, and athletics.

diagram showing the opposite nature of fixed and growth mindset with respect to how people view effort, challenge, failure and feedback. From https://trainugly.com/portfolio/growth-mindset/

Figure 1. Fixed – growth mindset tendencies. (From https://trainugly.com/portfolio/growth-mindset/)

Do people choose their mindset, or do their experiences place them in their positions on the mindset continuum? Our Introduction to this series disclosed that people’s experiences from degrees of privilege influence their positioning along the self-assessment accuracy continuum, and self-assessment has some commonalities with mindset. However, a focused, evidence-based study of privilege’s role in determining mindset inclination seems lacking.

Our Introduction to this series indicated that people do not choose their positions along the self-assessment continuum. People’s cumulative experiences place them there. Their positions result from their individual developmental histories, where degrees of privilege influence the placement through how many experiences an individual has that are relevant and helpful to building self-assessment accuracy. The same seems likely for determining positions along the mindset continuum.

Acting to improve equity in educational success

Because development during the pre-college years occurs primarily by chance rather than by design, people are rarely conscious of how everyday experiences form their dispositions. College students are unlikely even to know their positions on either continuum unless they receive a diagnostic measure of their self-assessment accuracy or their tendency toward a growth or a fixed mindset. Few get either diagnosis anywhere during their education.

Adopting a more robust growth mindset and acquiring better self-assessment accuracy first require recognizing that these dispositions exist. After that, devoting systematic effort to consciously enlisting metacognition while learning disciplinary content seems essential. Changing the dispositions takes longer than just learning some factual content. However, the time required to see measurable progress can be significantly reduced by a mentor/coach who directs metacognitive reflection and provides feedback.

Teaching self-assessment to lower-division undergraduates by providing numerous relevant experiences and prompt feedback is a way to alleviate some of the inequity produced by differential privilege in pre-college years. The reason to do this early is to allow students time in upper-level courses to ultimately achieve healthy self-efficacy and graduate with the capacity for lifelong learning. A similar reason exists for teaching students the value of affect and growth mindset by providing awareness, coaching, and feedback. Dweck describes how achieving a growth mindset can mitigate the adverse effects of inequity in privilege.

Recognizing good feedback

Dweck places high value on feedback for achieving a growth mindset. Figure 1 in our guest series’ Introduction also emphasizes the importance of feedback in developing self-assessment accuracy and self-efficacy during college.

Depending on their beliefs about their skill to address a particular challenge, people respond in predictable ways when a skill requires effort, when it seems challenging, when effort affects performance, and when feedback informs performance. Those with a fixed mindset expect that feedback will indicate imperfections, which they take as indicative of their fixed ability rather than as applicable to growing their ability. To them, feedback shames them for their imperfections, and it hurts. They see learning environments as places where stressful competitions occur between their own and others’ fixed abilities. Affirmations of success rest in grades rather than in growing intellectual ability.

Those with a growth mindset value feedback as illuminating the opportunities for advancing quickly in mastery during learning. Sharing feedback with peers in their learning community is a way to gain pleasurable support from a network that encourages additional effort. There is little doubt which mindset promotes the most enjoyment, happiness, and lasting friendships and generates the least stress during the extended learning process of higher education.

Dweck further stresses the importance of distinguishing feedback that is helpful from feedback that is damaging. Our lead paragraph above revealed a devastating experience that would influence any person to fear feedback and seek to avoid it. A formative influence that disposes us to accept or reject feedback likely lies in the nature of feedback that we received in the past. A tour through traits of Dweck’s mindsets suggests many areas where self-perceptions can form through just a single meaningful feedback event. 

Australia’s John Hattie has devoted his career to improving education, and feedback is his specialty area. Hattie concluded that feedback is “…the most powerful single moderator that enhances achievement” and noted in this University of Auckland newsletter that it is “…arguably the most critical and powerful aspect of teaching and learning.”

Hattie and Timperley (2007) synthesized many years of studies to determine what constitutes feedback helpful to achievement. In summary, valuable feedback focuses on the work process, but feedback that is not useful focuses on the student as a person or their abilities and communicates evaluative statements about the learner rather than the work. Hattie and Dweck independently arrived at the same surprising conclusion: even praise directed at the person, rather than focusing on the effort and process that led to the specific performance, reinforces a fixed mindset and is detrimental to achievement.

Professors seldom receive mentoring on how to provide feedback that would promote growth mindsets. Likewise, few students receive mentoring on how to use peer feedback in constructive ways to enhance one another’s learning. 

Takeaways

Scholars visualize both mindset and self-assessment as linear continua, each with a disposition at either end: growth and fixed mindsets, and perfectly accurate and wildly inaccurate self-assessments. In this Part 1, we suggest that self-assessment and mindset have surprisingly close connections that scholars have scarcely explored.

Increasing metacognitive awareness seems key to tapping the benefits of skillful self-assessment, mindset, and feedback and allowing effective use of the opportunities they offer. Feedback seems critical in developing self-assessment accuracy and learning through the benefits of a growth mindset. We further suggest that gaining benefit from feedback is a learnable skill that can influence the success of individuals and communities. (See Using Metacognition to Scaffold the Development of a Growth Mindset, Nov 2022.)

In Part 2, we share findings from our paired measures data that partially explain the inconsistent results that researchers have obtained between mindset and learning achievement. Our work supports the validity of mindset and its relationship to cognitive competence. It allows us to make recommendations for faculty and students to apply this understanding to their advantage.

 

References

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Random House.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487

Heft, I., & Scharff, L. (2017). Aligning best practices to develop targeted critical thinking skills and habits. Journal of the Scholarship of Teaching and Learning, 17(3), 48–67. http://josotl.indiana.edu/article/view/22600

Isaacson, R. M., & Fujita, F. (2006). Metacognitive knowledge monitoring and self-regulated learning: Academic success and reflections on learning. Journal of the Scholarship of Teaching and Learning, 6(1), 39–55. Retrieved from https://eric.ed.gov/?id=EJ854910

Yeager, D. S., & Dweck, C. S. (2020). What can be learned from growth mindset controversies? American Psychologist, 75(9), 1269–1284. https://doi.org/10.1037/amp0000794

 


Metacognitive Self-assessment in Privilege and Equity – Part 2: Majority Privilege in Scientific Thinking

Ed Nuhfer, California State University (Retired)
Rachel Watson, University of Wyoming
Cinzia Cervato, Iowa State University
Ami Wangeline, Laramie County Community College

Being in the majority carries the privilege of empowerment to set the norms for acceptable beliefs. Minority status for any group invites marginalization by the majority simply because the group appears different from the familiar majority. Here, we explore why this survival mechanism (bias) also operates when a majority perceives an idea as different and potentially threatening established norms.

Young adult learners achieve comfort in ways of thinking about and explaining the world from the experiences they acquire during acculturation. Our Introduction stressed how these experiences differ between majority and minority cultures and produce measurable effects. Education disrupts established states of comfort by introducing ideas that contradict earlier beliefs and force their reexamination.

Even the kind of college training that promotes only growing cognitive expertise is disruptive; more critically, research verifies that the disruptions are felt. While discovering the stages of intellectual development, William Perry Jr. found that, for some learners, the feelings experienced during transitions toward certain higher stages of thinking were so discomforting that the students ceased trying to learn and withdrew. Currently, about a third of first-year college students drop out before their sophomore year.

Educating for self-assessment accuracy to gain control over bias

We believe that the same survival mechanisms that promote prejudice and suppress empathizing with and understanding of different demographic groups also cripple understanding in encounters with unfamiliar or contrarian ideas. In moments that introduce ideas disruptive to beliefs or norms, unfamiliar ideas become analogous to unfamiliar groups—easily marginalized and thoughtlessly devalued in snap judgments. Practice in doing self-assessment when new learning surprises us should be valuable for gaining control over the mechanism that triggers our own polarizing bias.

Image of a maze on a black background with each branch of the maze showing different words such as "response, meaning, bias, memory." Credit: Image by John Hain from Pixabay

Earlier (Part 2 entry on bias), we recommended teaching students to frequently self-assess, “What am I feeling that I want to be true, and why do I have that feeling?” That assignment ensures that students encounter disruptive surprises mindfully by becoming aware of affective feelings involved in triggering their bias. Awareness gives the greater control over self needed to prevent being captured by a reflex to reject unfamiliar ideas out of hand or to marginalize those who are different.

Routinely employing self-assessment in teaching provides the prolonged, relevant practice with feedback required for understanding self. Educating for self-assessment accuracy constitutes a change from training students to “know stuff” to educating students to know how they can think to understand both “stuff” and self.

When the first encounter with something or someone produces apprehension, those who have gained a capacity for self-assessment accuracy from practice can exercise more control over their learning by recognizing the feeling that accompanies incipient activation of bias in reaction to discomfort. Such self-awareness allows a pause for reflecting on whether enlisting this vestigial survival mechanism serves understanding, and it can prevent bias from terminating our learning and inducing us to speak or act in ways that do not serve understanding.

Affect, metacognition, and self-assessment: minority views of contrarian scholars

We address three areas of scholarship relevant to this guest-edited series to show how brain survival mechanisms act to marginalize ideas that contradict an established majority consensus.

Our first example area involves the marginalization of the importance of affect by the majority of behavioral scientists. Antonio Damasio (1999, p. 39) briefly described this collective marginalization:

There would have been good reason to expect that, as the new century started, the expanding brain sciences would make emotion part of their agenda…. But that…never came to pass. …Twentieth Century science…moved emotion back into the brain, but relegated it to the lower neural strata associated with ancestors whom no one worshipped. In the end, not only was emotion not rational, even studying it was probably not rational.

A past entry in Improve with Metacognition (IwM) also noted the chilling prejudice against valuing affect during the 20th Century. Benjamin Bloom’s Taxonomy of the Affective Domain (Krathwohl et al. 1964) received an underwhelming reception from educators who had given unprecedented accolades to the team’s earlier volume on Taxonomy of the Cognitive Domain (Bloom, 1956). Also noted in that entry was William G. Perry’s purposeful avoidance of referring to affect in his landmark book on intellectual and ethical development (Perry, 1999). The Taxonomy of the Affective Domain also describes a developmental model that maps onto the Perry model of development much better than Bloom’s Taxonomy of the Cognitive Domain.

Our second example involves resistance against valuing metacognition. Dunlosky and Metcalfe (2009) traced this resistance to the French philosopher Auguste Comte (1798-1857), who held that an observer trying to observe self was engaged in an impossible task, like an eye trying to see itself by looking inwardly. In the 20th Century, the behaviorist school of psychology gave new life to Comte’s views by professing that individuals’ ability to do metacognition, if such an ability existed, held little value. According to Dunlosky and Metcalfe (2009, p. 20), the behaviorists held “…a stranglehold on psychology for nearly 40 years….” until the mid-1970s, when the work of John Flavell (see Flavell, 1979) made the term and concept of metacognition acceptable in academic circles.

Our third example area involves people’s ability to self-assess. “The Dunning-Kruger effect” holds that most people habitually overestimate their competence, with those least competent holding the most overly inflated views of their abilities and those with real expertise revealing more humility by consistently underestimating their abilities by modest amounts. Belief in “the effect” permeated many disciplines and became popular among the general public. As of this writing, a Google search brought up 1.5 million hits for the “Dunning Kruger effect.” It still constitutes the majority view of American behavioral scientists about human self-assessment, even after recent work revealed that the original mathematical arguments for “the effect” were untenable. 

Living a scholars’ minority experience

Considering prejudice against people and bias against new ideas as manifestations of a common, innate survival mechanism obviates fragmentation of these into separate problems addressed through unrelated educational approaches. Perceiving that all biases are related makes evident that the tendency to marginalize a new idea will certainly marginalize the proponents of an idea.

Seeing all bias as related through a common mechanism supports using metacognition, particularly self-assessment, for gaining personal awareness and control over the thoughts and feelings produced as the survival mechanism starts to trigger them. Thus, every learning experience that provides discomfort, in every subject, offers an opportunity for self-assessment practice to gain conscious control over the instinct to react with bias.

Some of the current blog series authors experienced firsthand the need for higher education professionals to acquire such control. When publishing early primary research in the early 1990s, we were naively unaware of the majority consensus, had not yet considered bias as a survival reaction, and had not anticipated marginalization. Suggesting frequent self-assessments as worthwhile teaching practices in the peer-reviewed literature brought reactions that jolted us from complacency into a new awareness.

Scholars around the nation, several of them other authors of this blog series, read the guest editor’s early work, introduced self-assessment in classes, and launched self-assessment research of their own. Soon after, many of us discovered that disparagements followed for doing so at the departmental, college, and university levels, and even at professional meetings. Some disparagements led to damaged careers and work environments.

The bias imparted by marginalization led to our doubting ourselves. Our feelings for a time were like those of the non-binary gender group presented in Figure 1 of the previous Part 1 on privilege: we “knew our stuff,” but our feelings of competence in our knowledge lagged. Thanks to the feedback from the journal peer reviewers of Numeracy, we now live with less doubt in ourselves. Those of us who weathered the storm emerged with greater empathy for minority status and minority feelings and a greater valuing of self-assessment.

Self-assessment, a type of metacognition employing affect, seems to be in a paradigm change that recapitulates the history of affect and metacognition. Our Numeracy articles have achieved over 10,000 downloads, and psychologists in Europe, Asia, and Australia now openly question “the effect” (Magnus and Peresetsky, 2021; Kramer et al., 2022; Hofer et al., 2022; Gignac, 2022) in psychology journals. The Office of Science and Society at McGill University in Canada reached out to the lay public (Jarry, 2020) to warn how new findings require reevaluating “the effect.” We recently discovered that paired measures could even unearth unanticipated stress indicators among students (view the section from time 21:38 to 24:58) during the turbulent times of COVID and civil disruption.

Takeaways

Accepting teaching self-assessment as good practice for educating, and self-assessment measures as valid assessments, opens avenues for research that are indeed rational to study. Once one perceives bias as having a common source, developing self-assessment accuracy seems a way to gain control over the personal bias that triggers hostility against people and ideas that are not threatening, just different.

“Accept the person you are speaking with as someone who has done amazing things” is an outstanding practice stressed at the University of Wyoming’s LAMP program. Consciously setting one’s cognition and affect to that practice erases all opportunities for marking anyone or their ideas for inferiority.

References

Bloom, B.S. (Ed.). (1956). Taxonomy of educational objectives, handbook 1: Cognitive domain. New York, NY: Longman.

Damasio, A. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: a new area of cognitive-developmental inquiry. American Psychologist 34, 906-911.

Gignac, G. E. (2022). The association between objective and subjective financial literacy: Failure to observe the Dunning-Kruger effect. Personality and Individual Differences, 184, 111224. https://doi.org/10.1016/j.paid.2021.111224

Hofer, G., Mraulak, V., Grinschgl, S., & Neubauer, A.C. (2022). Less-Intelligent and Unaware? Accuracy and Dunning–Kruger Effects for Self-Estimates of Different Aspects of Intelligence. Journal of Intelligence, 10(1). https://doi.org/10.3390/jintelligence10010010

Kramer, R. S. S., Gous, G., Mireku, M. O., & Ward, R. (2022). Metacognition during unfamiliar face matching. British Journal of Psychology, 00, 1– 22. https://doi.org/10.1111/bjop.12553

Krathwohl, D.R., Bloom, B.S. and Masia, B.B. (1964) Taxonomy of Educational Objectives: The Affective Domain. New York: McKay.

Magnus, J. R., & Peresetsky, A. (2021). A statistical explanation of the Dunning-Kruger effect. Tinbergen Institute Discussion Paper 2021-092/III. http://dx.doi.org/10.2139/ssrn.3951845

Nicholas-Moon, K. (2018). Examining science literacy levels and self-assessment ability of University of Wyoming students in surveyed science courses using the Science Literacy Concept Inventory with expanded inclusive demographics. Master’s thesis, University of Wyoming.

Perry, W. G. Jr. (1999). Forms of Ethical and Intellectual Development in the College Years. San Francisco, CA: Jossey-Bass (a reprint of the original 1968 work with minor updating).

Tarricone, P. (2011). The Taxonomy of Metacognition (1st ed.). Psychology Press. 288p. https://doi.org/10.4324/9780203830529


Metacognitive Self-assessment in Privilege and Equity – Part 1 Conceptualizing Privilege and its Consequences

by Rachel Watson, University of Wyoming
Ed Nuhfer, California State University (Retired)
Cinzia Cervato, Iowa State University
Ami Wangeline, Laramie County Community College

Demographics of metacognition and privilege

The Introduction to this series asserted that lives of privilege in the K-12 years confer relevant experiences advantageous for acquiring the competence required for lifelong learning and entry into professions that require college degrees. Healthy self-efficacy is necessary to succeed in college. Such self-efficacy comes only after acquiring self-assessment accuracy through practice in using relevant experiences to attune the feelings of competence with demonstrable competence. We concur with Tarricone (2011) in her recognition of affect as an essential component of the self-assessment (or awareness) component of metacognition: the “‘feeling of knowing’ that accompanies problem-solving, the ability to distinguish ideas about which we are confident….”

A surprising finding from our paired measures is how closely the mean self-assessments of performance of groups of people track with their actual mean performances. According to the prevailing consensus of psychologists, mean self-assessments of knowledge are supposed to confirm that people, on average, overestimate their demonstrable knowledge. According to a few educators, self-reported knowledge is supposed to be just random noise with no meaningful relationship to demonstrable knowledge. Data published in 2016 and 2017 in Numeracy from two reliable, well-aligned instruments revealed that such is not the case. Our reports in Numeracy shared earlier on this blog (see Figures 2 and 3 at this link) confirm that people, on average, self-assess reasonably well. 

In 2019, by employing the paired measures, we found that particular groups of people varied measurably in average competence, and their average self-assessed competence closely tracked their demonstrable competence. In brief, different demographic groups, on average, not only performed differently but also felt differently about their performance, and their feelings were accurate.

Conceptualizing privilege and its consequences

Multiple systems (structural, attitudinal, institutional, economic, racial, cultural, etc.) produce privilege, and all individuals and groups experience privilege and disadvantage in some aspects of their lives. We visualize each system as a hierarchical continuum, along which at one end lie those systematically marginalized/minoritized, and those afforded the most advantages lie at the other. Because people live and work within multiple systems, each person likely operates at different positions along different continuums.

Those favored by privilege are often unaware of their part in maintaining a hierarchy that exerts its toll on those of lesser privilege. As part of our studies of the effects of differing statuses of privilege, we discovered that instruments that pair measures of cognitive competence with self-assessments of that competence offer richer assessments than competency scores alone. They also inform us about how students feel and how accurately they self-assess their competence. Students’ histories of privilege seem to influence how effectively they can initially do the kinds of metacognition conducive to furthering intellectual development when they enter college.

Sometimes a group’s hierarchy results from a lopsided division into some criterion-based majority/minority split. There, advantages, benefits, status, and even acceptance, deference, and respect often become inequitably and systematically conferred by identity on the majority group but not on the underrepresented minority groups. 

Being a minority can invite being marked as “inferior,” with an unwarranted majority negative bias toward the minority, presuming the latter have inferior cognitive competence and even lower capacity for feeling than the majority. Perpetual exposure to such bias can influence the minority group to doubt themselves and unjustifiably underestimate their competence and capacity to perform. By employing paired measures, Wirth et al. (2021, p. 152 Figs.6.7 & 6.8) found recently that undergraduate women, who are the less represented binary gender in science, consistently underestimated their actual abilities relative to men (the majority) in science literacy.

We found that in the majority ethnic group (white Caucasians), both binary genders, on average, significantly outperformed their counterparts in the minority group (all other self-identified ethnicities combined) in both the competence scores of science literacy and the mean self-assessed competency ratings (Figure 1). 


Figure 1. Graph of gender performance on measures of self-assessed competence ratings and demonstrated competence scores across ethnic majority/minority categories. This graph represents ten years of data collection of paired measures, but we only recently began to collect non-binary gender data within the last year, so this group is sparsely represented. Horizontal colored lines coded to the colored circles’ legend mark the positions of the means of scores and ratings in percent at the 95% confidence level. 

Notably, in Figure 1, the non-binary gender groups, majority or minority, were the strongest academic group of the three gender categories based on SLCI scores. Still, relative to their performance, the non-binary groups felt that they performed less well than they actually did.  

On a different SLCI dataset with a survey item on sexual preference rather than gender, researcher Kali Nicholas Moon (2018) found the same degree of diminished self-assessed competence relative to demonstrated competence for the small LGBT group (see Fig. 7 p. 24 of this link). Simply being a minority may predispose a group to doubt their competence, even if they “know their stuff” better than most.

The mean differences in performance shown in Figure 1 are immense. For perspective, pre-post measures across a GE college course or two in science rarely produce mean differences of more than a couple of percentage points on the SLCI. In both majority and minority groups, females, on average, underestimated their performance, whereas males overestimated theirs.

If a group receives constant messages that their thinking may be inferior, it is hardly surprising that they internalize feelings of inferiority that are damaging. Our evidence above from several groups verifies such a tendency. We showed that lower feelings of competence parallel significant deficit performance on a test of understanding science, an area relevant to achieving intellectual growth and meeting academic aspirations. Whether this signifies a general tendency of underconfidence in minority groups for meeting their aspirations in other areas remains undetermined.

Perpetuating privilege in higher education

Academe nurtures many hierarchies. Across institutions, “Best Colleges” rating lists perpetuate a myth that institutions that make the list are, in all ways and for all students, “better than” those not on the list. Some state institutions actively promote a “flagship reputation,” implying that the state’s other schools are “inferior.” Being in a community of peers that reinforces such hierarchical valuing confers damaging messaging of inferiority on those attending the “inferior” institutions, much as an ethnic majority casts negative messages at the minority.

Within institutions, different disciplines are valued differently, and people experience differential privileges across the departments and programs that educate for those disciplines. The consequences of stress, alienation, and physical endangerment here are minor compared to those experienced by socially marginalized/minoritized groups. Nevertheless, advocating for any change in an established hierarchy in any community is perceived as disruptive by some and can bring consequences of diminished privilege. National communities of academic research often prove no exception.

Takeaways

Hierarchies usually define privilege, and the majority group often supports hierarchies detrimental to the well-being of minority groups. Although test scores are the prevalent measures of learning mastery, paired measures of cognitive competence and self-assessed competence provide additional information about students’ affective feelings about content mastery and their developing capacity for accurate self-assessment. This information helps reveal the inequity across groups and monitors how well students can employ the higher education environment for advancing their understanding of specialty content and understanding of self. Paired measures confirm that groups of varied privilege fare differently in employing that environment for meeting their aspirations.


Understanding Bias in the Disciplines: Part 2 – the Physical and Quantitative Sciences 

by Ed Nuhfer, California State University (Retired)
Eric Gaze, Bowdoin College
Paul Walter, St Edwards University
Simone Mcknight (Simone Erchov), Global Systems Technology

In Part 1, we summarized psychologists’ current understanding of bias. In Part 2, we connect conceptual reasoning and metacognition and show how bias challenges clear reasoning even in “objective” fields like science and math.

Science as conceptual

College catalogs’ explanations of general education (GE) requirements almost universally indicate that the desired learning outcome of the required introductory science course is to produce a conceptual understanding of the nature of science and how it operates. Focusing only on learning disciplinary content in GE courses squeezes out stakeholders’ awareness that a unifying outcome even exists. 

Wherever a GE metadisciplinary requirement (for example, science) specifies a choice of a course from among the metadiscipline’s different content disciplines (for example, biology, chemistry, physics, geology), each course must communicate an understanding of the way of knowing established in the metadiscipline. That outcome is what the various content disciplines share in common. A student can then understand how different courses emphasizing different content can effectively teach the same GE outcome.

The guest editor led a team of ten investigators from four institutions and separate science disciplines (biology, chemistry, environmental science, geology, geography, and physics). Their original proposal was to investigate ways to improve the learning in the GE science courses. While articulating what they held in common as professing the metadiscipline of “science,” the investigators soon recognized that the GE courses they took as students had focused on disciplinary content but scarcely used that content to develop an understanding of science as a way of knowing. After confronting the issue of teaching with such a unifying emphasis, they later turned to the problem of assessing success in producing this different kind of understanding.

Upon discovering no suitable off-the-shelf assessment instrument to meet this need, they constructed the Science Literacy Concept Inventory (SLCI). This instrument later made possible this guest-edited series and the confirmation of knowledge surveys as valid assessments of student learning.

Concept inventories test understanding of the concepts that form the supporting framework for larger overarching blocks of knowledge or thematic ways of thinking or doing. The SLCI tests nine concepts specific to science and three more related to the practice of science and to connecting science’s way of knowing with contributions from other requisite GE metadisciplines.

Self-assessment’s essential role in becoming educated

Self-assessment is partly cognitive (the knowledge one has) and partly affective (what one feels about the sufficiency of that knowledge to address a present challenge). Self-assessment accuracy confirms how well a person can align both when confronting a challenge.

Developing good self-assessment accuracy begins with an awareness that having a deeper understanding starts to feel different from merely having surface knowledge needed to pass a multiple-choice test. The ability to accurately feel when deep learning has occurred reveals to the individual when sufficient preparation for a challenge has, in fact, been achieved. We can increase learners’ capacity for metacognition by requiring frequent self-assessments that give them the practice needed to develop self-assessment accuracy. No place needs teaching such metacognition more than the introductory GE courses.

Regarding our example of science, the 25 items on the SLCI that test understanding of the twelve concepts derive from actual cases and events in science. Their connection to bias lies in learning that when things go wrong in doing or learning science, some concept is unconsciously being ignored or violated. Violations are often traceable to bias that hijacked the ability to use available evidence.

We often say: “Metacognition is thinking about thinking.” When encountering science, we seek to teach students to “think about” (1) “What am I feeling that I want to be true and why do I have that feeling?” and (2) “When I encounter a scientific topic in popular media, can I articulate what concept of science’s way of knowing was involved in creating the knowledge addressed in the article?”

Examples of bias in physical science

“Misconceptions research” constitutes a block of science education scholarship. Schools do not teach the misconceptions. Instead, people develop preferred explanations for the physical world from conversations that mostly occur in pre-college years. One such explanation addresses why summers are warm and winters are cold. The explanation that Earth is closer to the sun in summer is common and acquired by hearing it as a child. The explanation is affectively comfortable because it is easy, with the ease coming from repeatedly using the neural network that contains the explanation to explain the seasonal temperatures we experience. We eventually come to believe that it is true. However, it is not true. It is a misconception.

When a misconception becomes ingrained in our brain neurology over many years of repeated use, we cannot easily break our habit of invoking the neural network that holds the misconception until we can bypass it by constructing a new network that holds the correct explanation. Still, the latter will not yield a network that is more comfortable to invoke until usage sufficiently ingrains it. Our bias tendency is to invoke the most ingrained explanation because doing so is easy.

Even when individuals learn better, they often revert to invoking the older, ingrained misconception. After physicists developed the Force Concept Inventory (FCI) to assess students’ understanding of conceptual relationships about force and motion, they discovered that GE physics courses only temporarily dislodged students’ misconceptions. Many students soon reverted to invoking their previous misconceptions. The same investigators revolutionized physics education by confirming that active learning instruction better promoted overcoming misconceptions than did traditional lecturing.

The pedagogy that succeeds seemingly activates a more extensive neural network (through interactive discussion, individual and team work on problem challenges, writing, visualizing through drawing, etc.) than the network activated when the misconception was initially installed through a brief encounter.

Biases reinforced by wanting to believe that something is true or untrue are especially difficult to dislodge. An example of the power of bias coupled with emotional attachment comes from geoscience.

Nearly all school children in America today are familiar with the plate tectonics model, moving continents, and ephemeral ocean basins. Yet, few realize that the central ideas of plate tectonics once were scorned as “Germanic pseudoscience” in the United States. That happened because a few prominent American geoscientists so wanted to believe their established explanations were true that their affect hijacked these experts’ ability to perceive the evidence. These geoscientists also exercised enough influence in the U. S. to keep plate tectonics out of American introductory level textbooks. American universities introduced plate tectonics in introductory GE courses only years later than did European universities.

Example of bias in quantitative reasoning

People usually cite mathematics as the most dispassionate discipline and the one least likely to be corrupted by bias. However, researchers Dan Kahan and colleagues demonstrated that bias also disrupts people’s ability to use quantitative data and think clearly.

Researchers asked participants to resolve whether a skin cream effectively treated a skin rash. Participants received data for subjects who did or did not use the skin cream. Among users, the rash got better in 223 cases and got worse in 75 cases. Of subjects who did not use the skin cream, the rash got better in 107 cases and worse in 21 cases.

Participants then used the data to select from two choices: (A) People who used the cream were more likely to get better or (B) People who used the cream were more likely to get worse. More than half of the participants (59%) selected the answer not supported by the data. This query was primarily a numeracy test of deducing meaning from numbers.
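For readers who want to check the arithmetic, the minimal sketch below (an illustration added here, not Kahan's own materials) compares the rates hidden inside those raw counts.

```python
# A minimal sketch of the arithmetic behind the skin-cream query. The raw
# counts favor "used the cream and got better," but the rates point the
# other way, which is what trips up many participants.

better_with_cream, worse_with_cream = 223, 75
better_without, worse_without = 107, 21

rate_better_with = better_with_cream / (better_with_cream + worse_with_cream)
rate_better_without = better_without / (better_without + worse_without)

print(f"Improved with the cream:    {rate_better_with:.1%}")    # about 74.8%
print(f"Improved without the cream: {rate_better_without:.1%}")  # about 83.6%

# A larger share improved WITHOUT the cream, so the data support the
# counterintuitive choice: cream users were relatively more likely to get worse.
```

The large count in the “used the cream and got better” cell baits the intuitive answer; only a comparison of the rates shows that the data favor choice (B).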

Then, using the same numbers, the researchers added affective bait. They replaced the skin cream query with a query about the effects of gun control on crime in two cities. One city allowed concealed gun carry, and another banned concealed gun carry. Participants had to decide whether the data showed that concealed carry bans increased or decreased crime.

Self-identified conservative Republicans and liberal Democrats responded with a desire to believe acquired from their party affiliations. Their results were even more erroneous than those of the skin cream participants. Republicans greatly overestimated increased crime from gun bans, but no more than Democrats overestimated decreased crime from gun bans (Figure 1). When operating from “my-side” bias planted by either party, citizens significantly lost their ability to think critically and use numerical evidence. This was true whether the self-identified partisans had low or high numeracy skills.

[Figure 1 graph comparing responses from participants with low and high numeracy skills. Those with high numeracy always showed better accuracy (smaller variance around the mean). When the topic was non-partisan, the means for both groups were roughly the same and showed little bias in the direction of error. When the topic was partisan, those with lower numeracy showed strong bias and those with higher numeracy showed some bias.]

Figure 1. Effect of bias on interpreting simple quantitative information (from Kahan et al. 2013, Fig. 8). Numerical data needed to answer whether a cream effectively treated a rash triggered low bias responses. When researchers employed the same data to determine whether gun control effectively changed crime, polarizing emotions triggered by partisanship significantly subverted the use of evidence toward what one wanted to believe.

Takeaway

Decisions and conclusions that appear based on solely objective data rarely are. Increasing metacognitive capacity produces awareness of the prevalence of bias.


Understanding Bias in the Disciplines: Part 1 – The Behavioral Sciences

by Simone Mcknight (Simone Erchov), Global Systems Technology
Ed Nuhfer, California State University (Retired)
Eric Gaze, Bowdoin College
Paul Walter, St. Edward’s University

Bias as conceptual

Bias arises from human brain mechanisms that process information in ways that make decision-making quicker and more efficient at the cognitive/neural level. Bias is an innate human survival mechanism, and we all employ it.

Bias is a widely known and commonly understood psychological construct. The common understanding of bias is “an inclination or predisposition for or against something.” People recognize bias by its outcome—the preference to accept specific explanations or attributions as true.

In everyday conversation, discussions about bias concern the preferences and notions people hold on various topics. For example, people know that biases may influence the development of prejudice (e.g., ageism, sexism, racism, tribalism, nationalism) or of political or religious beliefs.

the words "Bias in the Behavioral Sciences" on a yellow backgroundA deeper look reveals that some of these preferences are unconscious. Nevertheless, they derive from a related process called cognitive bias, a propensity to use preferential reasoning to assess objective data in a biased way. This entry introduces the concept of bias, provides an example from the behavioral sciences, and explains why metacognition can be a valuable tool to counteract bias. In Part 2, which follows this entry, we provide further examples from hard science, field science, and mathematics.

Where bias comes from

Biases develop from the mechanisms by which the human brain processes information as efficiently as possible. These unconscious and automatic mechanisms make decision-making more efficient at the cognitive/neural level. Most mechanisms that help the human brain make fast decisions are credited to adaptive survival. Like other survival mechanisms, bias loses value and can be a detriment in a modern civilized world where threats to our survival are infrequent challenges. Cognitive biases are subconscious errors in thinking that lead to misinterpreting future information from the environment. These errors, in turn, impact the rationality and accuracy of decisions and judgments.

When we frame unconscious bias within the context of cognitive bias and survival, it is easier to understand how all of us have inclinations to employ bias and why any discipline that humans manage is subject to bias. Knowing this makes it easier to account for the frequent biases affecting the understanding and interpreting of diverse kinds of data.

People easily believe that bias only exists in “subjective” disciplines or contexts where opinions and beliefs seem to guide decisions and behavior. However, bias manifests in how humans process information at the cognitive level. Although it is easier to understand bias as a subjective tendency, the typical way we process information means that bias can pervade all of our cognition.

Intuitively, disciplines relying on tangible evidence, logical arguments, and natural laws of the physical universe would seem factually based and less influenced by feelings and opinion. After all, “objective disciplines” do not predicate their findings on beliefs about what “should be.” Instead, they measure tangible entities and gather data. However, even in the “hard science” disciplines, the development of a research question, the data collected, and the interpretations of data are vulnerable to bias. Tangible entities such as matter and energy are subject to biases as simple as differences in perception of the measured readings on the same instrument. In the behavioral sciences, where investigative findings are not constrained by natural law, bias can be even harder to detect. Thus, all scientists carry bias into their practice of science, and students carry bias into their learning of it.

Metacognition can help counter our tendencies toward bias because it involves bringing relevant information about a process (e.g., conducting research, learning, or teaching) into awareness and then using that awareness to guide subsequent behaviors.

Consequences of bias

Bias impacts individual understanding of the world, the self, and how the self navigates the world – our schemas. These perceptions may impact elements of identity or characterological elements that influence the likelihood of behaving in one way versus another.

Bias should be assumed to be a potentially influential factor in any human endeavor. Sometimes a bias toward an explanation develops after hearing it in childhood and then invoking it for years. Even after seeing the evidence against that bias, our initial explanations are difficult to replace with ones better supported by evidence because we remain anchored to that initial knowledge. Adding a personal emotional attachment to an erroneous explanation makes replacing it even more difficult. Scientists can have emotional attachments to particular explanations of phenomena, especially their own explanations. Then, it becomes easy to selectively block out or undervalue evidence that modifies or contradicts the favored explanation (also known as confirmation bias).

Self-assessment, an example of long-standing bias in behavioral science

As noted in the introduction, this blog series focuses on our team’s work related to self-assessment. Our findings countered results from scores of researchers who replicated and verified the testing done in a seminal paper by Kruger and Dunning (1999). Their research asserted that most people were overconfident about their abilities, and the least competent people had the most overly optimistic perceptions of their competence. Researchers later named the phenomenon the “Dunning-Kruger effect,” and the public frequently deployed “the effect” as a label to disparage targeted groups as incompetent. “The effect” held attraction because it seemed logical that people who lacked competence also lacked the skills needed to recognize their deficits. Quite simply, people wanted to believe it, and replication created a consensus with high confidence in concluding that people, in general, cannot accurately self-assess.

While a few researchers did warn about likely weaknesses in the seminal paper, most behavioral scientists selectively ignored the warnings and repeatedly employed the original methodology. This trend of replication continued in peer-reviewed behavioral science publications through at least 2021.

Fortunately, the robust information storage and retrieval system that characterizes the metadiscipline of science (a characteristic distinguishing science from technology as ways of knowing) makes it possible to challenge a bias established in one discipline by researchers from another. Through publications and open-access databases, the arguments that challenge an established bias then become available. In this case, the validity of “the effect” resided mainly in mathematical arguments and not, as presumed, in arguments that reside solely within the expertise of behavioral scientists.

No mathematics journal had ever hosted an examination of the numeracy of the arguments that established and perpetuated belief in “the effect.” However, mathematics journals offered the benefit of reviewers who specialized in quantitative reasoning and were not emotionally attached to any consensus established in behavioral science journals. These reviewers agreed that the long-standing arguments supporting the Dunning-Kruger effect were mathematically flawed.

In 2016 and 2017, Numeracy published two articles from our group that detailed the mathematical arguments that established the Dunning-Kruger effect conclusions and why these arguments are untenable. When examined by methods the mathematics reviewers verified as valid, our data indicated that people were generally good at self-assessing their competence and confirmed that there were no marked tendencies toward overconfidence. Experts and novices proved as likely to underestimate their abilities as to overestimate them. Further, the percentage of those who egregiously overestimated their abilities was small, in the range of about 5% to 6% of participants. However, our findings confirmed a vital conclusion of Kruger and Dunning (1999): experts self-assess better than novices (variance decreases as expertise increases), and self-assessment accuracy is attainable through training and practice.
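The sketch below is an illustration added for this discussion, not the analyses published in Numeracy; it assumes only that a test score and a self-assessment are independent, noisy readings of the same underlying ability. Sorting such noisy scores into quartiles reproduces the familiar “unskilled and unaware” pattern even though, by construction, no one in the simulated data is overconfident.

```python
# A minimal simulation sketch (an illustrative assumption, not the published
# Numeracy analyses): each simulated person has one true ability; the test
# score and the self-assessment are independent noisy readings of it, with
# NO built-in overconfidence.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ability = rng.uniform(30, 90, n)                                 # true competence, 0-100 scale
score = np.clip(ability + rng.normal(0, 12, n), 0, 100)          # demonstrated competence
self_assessed = np.clip(ability + rng.normal(0, 12, n), 0, 100)  # self-assessed competence

quartile = np.digitize(score, np.quantile(score, [0.25, 0.5, 0.75]))
for q in range(4):
    members = quartile == q
    print(f"Score quartile {q + 1}: mean score {score[members].mean():5.1f}, "
          f"mean self-assessment {self_assessed[members].mean():5.1f}")

# The bottom quartile appears to "overestimate" and the top quartile to
# "underestimate": a regression-to-the-mean artifact of sorting noisy scores
# into quartiles, not evidence of human overconfidence.
```

Distinguishing such artifacts from genuine self-assessment behavior is exactly the kind of quantitative question the mathematics reviewers were positioned to judge.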

By 2021, the information released in Numeracy began to penetrate the behavioral science journals. This blog series, our earlier posts on this site, and archived presentations to various audiences (e.g., the National Numeracy Network, the Geological Society of America) further broadened awareness of our findings.

Interim takeaways

Humans construct their learning from mentally processing life experiences. During such processing, we simultaneously construct some misconceptions and biases. The habit of drawing on a misconception or bias to explain phenomena ingrains it and makes it difficult to replace with correct reasoning. Affective attachments to any bias make overcoming the bias extremely challenging, even for the most accomplished scholars.

It is essential to realize that we can reduce bias by employing metacognition to recognize bias originating within us at the individual level and by considering bias that influences us but originates from or is encouraged by groups. In the case above, we were able to explain the bias within the behavioral science disciplines by showing how repeatedly mistaking mathematical artifacts for products of human behavior produced a consensus that held the understanding of self-assessment captive for over two decades.

Metacognitive self-assessment seems necessary for initially knowing self and later for recognizing one’s own personal biases. Self-assessment accuracy is valuable in using available evidence well and reducing the opportunity for bias to hijack our ability to reason. Developing better self-assessment accuracy appears to be a very worthy objective of becoming educated.


Introduction: Why self-assessment matters and how we determined its validity 

By Ed Nuhfer, Guest Editor, California State University (retired)

There are few exercises of thinking more metacognitive than self-assessment. For over twenty years, behavioral scientists accepted that the “Dunning-Kruger effect,” which portrays most people as “unskilled and unaware of it,” correctly described the general nature of human self-assessment. Supposedly, only people with significant expertise in a topic were capable of self-assessing accurately, while those with the least expertise held highly overinflated views of their abilities.

The authors of this guest series have engaged in a collaborative effort to understand self-assessment for over a decade. We documented how the “Dunning-Kruger effect,” from its start, rested on specious mathematical arguments. Unlike what the “effect” asserts, most people do not hold overly inflated views of their competence, regardless of their level of expertise. We summarized some of our peer-reviewed work in earlier articles in “Improve with Metacognition” (IwM). These are discoverable by using “Dunning-Kruger effect” in IwM’s search window.

Confirming that people, in general, are capable of self-assessing their competence affirms the validity of self-assessment measures. The measures inform efforts in guiding students to improve their self-assessment accuracy. 

This introduction presents commonalities that unify the series’ entries to follow. In the entries, we hotlink the references available as open-source within the blogs’ text and place all other references cited at the end. 

Why self-assessment matters

After an educator becomes aware of metacognition’s importance, teaching practice should evolve beyond finding the best pedagogical techniques for teaching content and assessing student learning. The “place beyond” focuses on teaching the student how to develop a personal association with content as a basis for understanding self and exercising higher-order thinking. Capturing, in a written teaching/learning philosophy, the changes involved in developing content expertise together with the self expedites understanding how to achieve both. Self-assessment could be the most valuable of all the varieties of metacognition that we employ to deepen our understanding.

Visualization is conducive to connecting essential themes in this series of blogs that stress becoming better educated through self-assessment. Figure 1 depicts the role and value of self-assessment from birth at the top of the figure to becoming a competent, autonomous lifelong learner by graduation from college at the bottom.

[Figure 1 diagram illustrating components that come together to promote lifelong learning: choices and effort through experiences; self-assessment; self-assessment accuracy; self-efficacy; self-regulation.]

Figure 1. Relationship of self-assessment to developing self-regulation in learning. 

Let us walk through this figure, beginning with early life Stage #1 at the top. This stage occurs throughout the K-12 years, when our home, local communities, and schools provide the opportunities for choices and efforts that lead to experiences that prepare us to learn. In studies of Stage 1, John A. Ross made the vital distinction between self-assessment (estimating immediate competence to meet a challenge) and self-efficacy (perceiving one’s personal capacity to acquire competence through future learning). Developing healthy self-efficacy requires considerable practice in self-assessment to develop consistent self-assessment accuracy.

Stage 1 is a time that confers much inequity of privilege. Growing up in a home with a college-educated parent, attending schools that support rich opportunities taught in one’s native language, and living in a community of peers from homes of the well-educated provide choices, opportunities, and experiences relevant to preparing for higher education. Over 17 or 18 years, these relevant self-assessments sum to significant advantages for those living in privilege when they enter college. 

However, these early-stage self-assessments occur by chance. The one-directional black arrows through Stage 2 communicate that nearly all the self-assessments occur without any intentional feedback from a mentor to deliberately improve self-assessment accuracy. Sadly, this state of non-feedback continues for nearly all students experiencing college-level learning too. Thus, higher education largely fails to mitigate the inequities rooted in whether one was raised in a privileged environment.

The red two-directional arrows at Stage 3 begin what the guest editor and authors of this series advocate as a very different kind of educating from that commonly practiced in American institutions of education. We believe education could and should provide self-assessments by design, hundreds in each course, all followed by prompt feedback, to utilize the disciplinary content for intentionally improving self-assessment accuracy. Prompt feedback begins to allow the internal calibration needed for improving self-assessment accuracy (Stage #4).

One reason to deliberately incorporate self-assessment practice and feedback is to educate for social justice. Our work indicates that strengthening students’ self-assessment accuracy can enable the healthy self-efficacy needed to succeed in the kinds of thinking and professions that require a college education, and can thus make up for the years of accumulated relevant self-assessments missing from the backgrounds of the less privileged.

By encouraging attention to self-assessment accuracy, we seek to develop students’ felt awareness of surface learning changing toward the higher competence characterized by deep understanding (Stage #5). Awareness of how attaining the competence of deep understanding feels enables better judgment of when one has adequately prepared for a test or has produced an assignment of high quality that is ready for submission.

People attain Stage #6, self-regulation, when they understand how they learn, can articulate it, and can begin to coach others on how to learn through effort, use of available resources, and accurate self-assessment. At that stage, a person has not only developed the capacity for lifelong learning but has also developed the capacity to spread good habits of mind by mentoring others. Thus, the arrows on each side of Figure 1 lead back to the top and signify both the reflection needed to realize how one’s privileges were relevant to one’s learning success and the cycling of that awareness to a younger generation in home, school, and community.

A critical point to recognize is that programs that do not develop students’ self-assessment accuracy are less likely to produce graduates with healthy self-efficacy or the capacity for lifelong learning than programs that do. We should not just be training people to grow in content skills and expertise but also educating them to grow in knowing themselves. The authors of this series have engaged for years in designing and doing such educating.

The common basis of investigations

The aspirations expressed above have a basis in hard data from assessing the science literacy of over 30,000 students and “paired measures” on about 9,000 students with peer-reviewed validated instruments. These paired measures allowed us to compare self-assessed competence ratings on a task and actual performance measures of competence on that same task. 

Knowledge surveys serve as the primary tool through which we can give “…self-assessments by design, hundreds in each course, all followed by prompt feedback.” Well-designed knowledge surveys develop each concept with detailed challenges that align well with the assessment of actual mastery of the concept. Ratings (measures of self-assessed competence) expressed on knowledge surveys and scores (measures of demonstrated competence) expressed on tests and assignments are both scaled from 0 to 100 percentage points and are directly comparable.

When the difference between the paired measures is zero, there is zero error in self-assessment. When the difference (self-assessed minus demonstrated) is a positive number, the participant tends toward overconfidence. When the difference is negative, the participant has a tendency toward under-confidence.
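As a concrete illustration, the sketch below uses hypothetical numbers (not data from our studies) to show how the signed difference between the two measures classifies a participant’s tendency.

```python
# A minimal sketch of the paired-measures comparison: both measures are on a
# 0-100 percentage scale, so their signed difference is the self-assessment error.

def self_assessment_error(knowledge_survey_rating: float, test_score: float) -> float:
    """Positive = tendency toward overconfidence; negative = under-confidence;
    zero = accurate self-assessment."""
    return knowledge_survey_rating - test_score

# Hypothetical (self-assessed rating, demonstrated score) pairs for illustration.
examples = [(78, 78), (85, 70), (60, 74)]
for rating, score in examples:
    err = self_assessment_error(rating, score)
    label = "accurate" if err == 0 else ("overconfident" if err > 0 else "under-confident")
    print(f"rating {rating}, score {score}: error {err:+d} ({label})")
```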

In our studies that established the validity of self-assessment, the demonstrated competence data in our paired measures came mainly from the validated instrument, the Science Literacy Concept Inventory or “SLCI.” Our self-assessed competence data come from knowledge surveys and global single queries tightly aligned with the SLCI. Our team members incorporate self-created knowledge surveys of course content into their higher education courses. Knowledge surveys have proven to be powerful research tools and classroom tools for developing self-assessment accuracy.

Summary overview of this blog series

IwM is one of the few places where the connection between bias and metacognition has been directly addressed (e.g., see a fine entry by Dana Melone). The initial two entries of this series will address metacognitive self-assessment’s relation to the concept of bias.

Later contributions to this series consider privilege and understanding the roles of affect, self-assessment, and metacognition when educating to mitigate the disadvantages of lesser privilege. Other entries will explore the connection between self-assessment, participant use of feedback, mindset, and metacognition’s role in supporting the development of a growth mindset. Near the end of this series, we will address knowledge surveys, the instruments that incorporate the disciplinary content of any college course to improve learning and develop self-assessment accuracy. 

We will conclude with a final wrap-up entry of this series to aid readers’ awareness that what students should “think about” when they “think about thinking” ought to provide a map for reaching a deeper understanding of what it means to become educated and to acquire the capacity for lifelong learning.


Metacognitive Discourse—Final Course Presentations that Foster Campus Conversations about Learning

by Gina Burkart, Ed.D., Learning Specialist, Clarke University

Prior to the pandemic and now since returning to campus, there has been a shift in students’ use of group study and their ability to learn and work in groups. When I began my position as Learning Specialist 10 years ago, it was not uncommon to find 30 students at group study sessions at 9 pm. Now, one group study session remains, and 2-3 students might attend (unless the sessions are team sessions required by athletic coaches). Colleagues have also shared in conversations that they find it problematic that students avoid interacting with one another in the classroom and are not able to work and learn in physical groups. Further, in my learning resource center year-end reports, data have shown a steady decline in group study attendance and a steady increase in students relying on support from me, the Learning Specialist. They want to work one-on-one with adults. In conversations and in online discussion blogs, students have identified a lack of inter- and intrapersonal communication skills as affecting their ability to work with their peers. In simple terms—overuse of electronic communication before and during the pandemic has left them unable to communicate and interact with their classmates. This is problematic for a variety of reasons. In terms of learning, pedagogy is clear—learning is social (Bandura, 1977).

An Assignment to Reinforce Social Learning and Metacognition

In response, this semester, to reinforce social learning and metacognition, I changed the final assessment for the College Study Strategy course to a final presentation that embedded metacognition and social discourse. The College Study Strategy course is metacognitive in nature in that it begins by having students reflect on their prior learning experiences, assess themselves and their skills, and set goals for the semester. It is a 1-credit course open to any student below 90 credits and can be retaken. However, in the second semester, it is almost entirely filled with students placed on academic probation or warning who are required to take the course. The curriculum includes theorists such as Marzano (2001), Bandura (1994), Duckworth (2013), Dweck (2014), and Covey (2004) and requires students to begin applying new motivation, emotional intelligence, learning, reading, time management, study, note-taking, and test-taking strategies to their courses. In the past, students created a portfolio that demonstrated the use of their new strategies and presented their growth to me in a midterm and final conference. This year, I wanted them to share their new growth with more than me—I wanted them to share their growth with the entire community.

The more outward-facing final project would still be metacognitive in nature—requiring students to reflect on past learning, show how they adjusted their learning and applied new methods and strategies, share in conversation how they made those adjustments, and finally explain how they will continue applying strategies and continuing their growth with the new knowledge. Again, it would require students to share with more than me. They would need to envision a larger audience and its needs (the entire campus community: administrators, students, athletic coaches, staff, professors, recruits) and create a presentation that could be adjusted to the audience. They would practice inter- and intrapersonal skills as they adjusted their presentation over the course of two hours while they remained at a station in the library, prepared to share their presentation as members of the campus community approached. This also allowed the campus community to benefit from the students’ new knowledge and growth over the semester. And, being on a small scale, it re-introduced students to the art of in-person, face-to-face conversation and the value of seeking information from each other. This is something that has been eroding due to heavy use of electronic communication and the isolated learning that occurred during the pandemic.

Students were introduced to this assignment in week one of the semester. They were told that in week 6 they would choose any topic from the course curriculum that they felt they needed to focus on more intently based on their semester goals. Once they chose the topic they would focus on (e.g., motivation, reading, procrastination, time management, studying, growth mindset), they would research a different article each week related to that topic (weeks 6-12) and apply the new critical reading strategy taught in class to create journal entries that would be used to prepare content for the final presentation. In week 14 or 15, they would present their topic to the campus community in the library at a table (poster-session style) during a two-hour block of their choosing. The presentation needed to include some type of visual, and the content needed to include all of the following metacognitive information about the topic:

  • past struggles
  • reasons for choosing the topic
  • strategies learned in class
  • information learned in their research
  • recommendations for other students struggling
  • strategies for continued growth

Positive Impact and Take-Aways

While students were nervous and hesitant before the presentations, during and after them they admitted to having fun sharing their growth and learning. Staff, faculty, and students were appreciative of the presentations and made a point of attending. Some future students and recruits even attended as they were touring. Not surprisingly, most students chose to present about motivation, time management, and procrastination. A few students chose to present about growth mindset, Bloom’s Taxonomy as a study strategy, and reading. A surprising take-away was that, in the metacognitive process of the presentation, many students connected improved reading strategies to increased motivation and reduced procrastination.

While observing the presentations, I found it encouraging to see students learn to adapt their presentations as people approached. Since they were stationed at a table for two hours, they needed to present the material many times to different types of audiences—and they had to field questions. As they presented and re-presented, they learned how to interact and present differently based on the needs of the audience. This adaptation required the use of metacognition and rhetorical analysis. It also built inter- and intrapersonal communication skills. The assignment also came at a good time in the semester, as students were authentically seeking many of the strategies and skills to prepare for finals, conclude the semester, and look forward to the next one. Many of the presenters had friends, team members, coaches, and faculty come to hear their presentations (as I had advertised the presentations to the campus in advance). In conclusion, metacognitive presentations that engage the entire campus community in discourse about learning may be a helpful step toward rebuilding learning communities post-pandemic. Next semester, I will continue this assignment. Additionally, I am working on embedding group reading labs into targeted courses to improve learning and motivation and reduce procrastination in the classroom.


Using metacognition to move from talking the equity talk, to walking the equity walk

Conversations around equity, diversity, and inclusion are gaining traction on college campuses in the United States. In many cases, these conversations are overdue, so a willingness to even have the talk represents progress. But how can campuses move from talking the equity talk to walking the equity walk? How can the buzz be transformed into a breakthrough? This post argues that taking a metacognitive approach is essential to taking steps in more equitable directions.

Becoming more equitable is a process. As with any process, metacognition encourages us to consider what’s working, what’s not, and how we might make adjustments to improve how we are living that process. If college campuses genuinely want to travel down more equitable roads, then they need to articulate their equity goals, map their route, and remove obstacles preventing them from reaching that destination. And if along the way, campuses find that their plans aren’t working, then metacognition can point the way towards a course correction.

A guide and the need for collective metacognition


In From Equity Talk to Equity Walk: Expanding Practitioner Knowledge for Racial Justice in Higher Education, Tia Brown McNair, Estela Mara Bensimon, and Lindsey Malcolm-Piqueux (2020) offer guidance to campuses wanting to do more than just talk. They argue, for example, that campuses need a shared understanding of equity and diversity. College mission statements are a start, but their lofty words define aspirations, not a path. Big words will never amount to more than talk unless a campus can figure out how to live into those big ideas. For example, it is one thing to pepper conversation with words like ‘diversity,’ ‘equity,’ and ‘inclusion.’ It’s another thing altogether to develop a shared campus-wide understanding of these ideas and how they need to be practiced in the day-to-day life of the campus. If institutional change requires shared understanding, then I argue that college campuses need collective metacognitive moments.

Metacognition urges us to establish goals and continually check-in on our progress towards them. Taking a metacognitive approach to institutional change will require that campuses articulate their equity goals with shared understanding of the underlying terms, map a plan to work towards those aspirations, monitor their progress, and make adjustments when appropriate.

  • What are the shared goals around equity? What might it mean to live into these goals in concrete terms?
  • Are these goals widely shared? If not, why not?
  • How can members of the campus community contribute and see themselves in their contribution?

Taking a metacognitive approach can also help locate the “pain points.”

  • Is the lack of progress owing to a lack of shared understanding, a lack of planning, or well-intentioned individuals working at cross-purposes?
  • What can be done to get efforts back on track?

As with any process, metacognitive check-ins around what’s working and what’s not working can point to areas for improvement. Metacognition, therefore, can keep a college campus heading down the equity path.

Progress requires being aware of barriers and working to remove them

Being concrete about the move from equity talk to the equity walk requires removing barriers to progress. According to McNair, Bensimon, and Malcolm-Piqueux, barriers include individuals claiming not to see race or substituting class issues for race. Taking a metacognitive approach could encourage individuals to get curious about why they claim not to see race or feel more comfortable talking about economic issues. Why might someone be reluctant to consider the extent of their white privilege? Why might a campus be reluctant to acknowledge the reality of institutional racism and its implications?

Taking a metacognitive approach to such questions can honor the fact that talking about inequity can be awkward and uncomfortable. Yet, metacognition also encourages us to ask whether things are working and whether we might need to make adjustments. Walking the equity walk requires asking how white privilege and institutional racism might be inadvertently influencing campus policies and the delivery of instruction. Taking a metacognitive approach encourages campuses to look for ways to make adjustments. Awareness and adjustments are precisely what is needed in the move from equity talk to the equity walk.

By way of illustration, McNair, Bensimon, and Malcolm-Piqueux call on campuses to stop employing euphemisms, such as ‘underrepresented minorities.’ In their view, campus administration, individual departments, and instructors should disaggregate data instead. The thought is that equity issues can be addressed only if they are named. If, for example, the graduation rate of African-American males is lower than that of other groups, then walking the equity walk requires understanding why and looking for ways to help. If first-generation students are stopping out after their second semester (or their fourth), then campuses that are aware of this reality are positioned to make the necessary adjustments.

Administrators should look at institution-wide patterns to see if institutional protocols are impediments to student success. Individual departments should review student progress across programs and within particular courses to see how they might better support student learning. And individual instructors should take a careful look at when, where, and how students struggle with particular assignments, skills, and content. It may turn out that all students are equally successful across all areas. It might also be the case that patterns emerge which indicate that some groups of students could use more support in certain identifiable areas.

A metacognitive approach to institutional change requires that universities, academic departments, and individual instructors articulate their equity goals, track progress, and make adjustments where appropriate. Disaggregating data at all levels (institution-wide, by department, individual courses) can uncover inequities. Identifying those obstacles can be a step towards making the necessary adjustments. This can, in turn, help campuses walk the equity walk.

Improve with metacognition

Taking a metacognitive approach to process improvements encourages individuals (and institutions) to get curious about what works and where adjustments need to be made. It encourages them to continuously assess and use that assessment to make additional adjustments along the way. Colleges and universities have a long way to go if they are to address the realities of systemic inequities. But learning to walk the equity walk is a process. If we know anything about metacognition, we know that it provides us with the resources to offer process improvements. So, I argue, metacognition is essential to learning to move beyond equity talk and actually walking the equity walk.

The College Transition: Making Time Tangible

by Mary L. Hebert, PhD; Director, Regional Center for Learning Disabilities; Fairleigh Dickinson University

As students prepare for the college transition, it behooves them to reflect on the differences between high school and college. Important considerations for reflection include questions about the differences in the pace and volume of work and in the degree of independence required for that work. Students with learning differences are statistically at greater risk of adjustment challenges and, consequently, of not completing college. The more metacognitively they enter their new academic environment, the more likely they are to be prepared and to build upon their self-efficacy and self-advocacy. Using metacognition as a tool to pause, reflect, and pivot accordingly has the potential to optimize the capacity to adapt and adjust to the context of one’s learning environment.

[Figure: a human brain with a 5-step cycle overlaid: Plan; Apply strategies and monitor; Reflect and adjust if needed; Assess the task; Evaluate strengths and weaknesses.]

Making Time Tangible

Executive function issues can have a significant impact on college students, and many factors contribute to this. For students who have a learning disability, high co-morbidity rates are noted in the literature (Mohammadi et al., 2019). Executive function skill sets are among the most critical for managing the rigor and independence of the adult learning experience. Students learning in an adult context are often adjusting to a living and learning environment on a college campus for the first time. Common symptoms of executive function challenges include a distorted sense of time, procrastination, difficulty engaging and disengaging in tasks, and difficulty with cognitive shifts in task management. The more tangible and observable time can be made, the greater the likelihood of manipulating it and managing it advantageously toward the achievement of one’s immediate, short-term, and longer-term goals.

It takes a synthesis of academic, social, and emotional skill sets operating collaboratively to navigate a time of transition. In work with new students, it is prudent to encourage and sharpen metacognitive reflection on recognizing time as something tangible and malleable that is now the student’s to manage within their new adult learning environment. Enriched self-awareness of one’s executive function challenges, as well as strengths, has the potential to support enriched self-competence. Both are cornerstones for success.

Reflect and plan: tackle time management, don’t let it tackle you!

One of the metacognitive tasks that a supportive adult can encourage when a student prepares for the college transition is creating a weekly schedule with the student’s courses listed on it. Likely, the student will observe that there is far more white space than ‘ink on the page,’ or black space. I tell the student that I am far less concerned about the ink on the page. Why, they ask? Because the ink on the page very nicely identifies where they have to be, for what, and with whom. I ask students what they notice about their schedule in comparison to their high school schedule, which was often structured from 7:00 am until 3:00 pm, or even later, given extracurricular commitments and homework. Next, I ask students to identify and list not only academic commitments but also study time, wellness and hygiene tasks (eating, sleeping, doctor’s appointments, exercise), social time, and other responsibilities, and I suggest plotting how many hours these will take during the 24-hour day.
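A simple tally, sketched below with hypothetical hours, makes the point of the exercise concrete: once the committed ‘ink’ is summed against the 168-hour week, the white space the student must manage on their own becomes visible.

```python
# A minimal sketch (hypothetical hours) of the scheduling exercise described
# above: summing committed hours against the 168-hour week makes the remaining
# "white space" tangible.

weekly_commitments = {            # hypothetical first-year schedule
    "classes": 15,
    "study (2 hrs per class hour)": 30,
    "sleep (8 hrs per night)": 56,
    "meals and hygiene": 14,
    "exercise and wellness": 5,
    "social time": 10,
}

committed = sum(weekly_commitments.values())
white_space = 168 - committed

for task, hours in weekly_commitments.items():
    print(f"{task:30s} {hours:3d} hrs")
print(f"{'total committed':30s} {committed:3d} hrs")
print(f"{'remaining white space':30s} {white_space:3d} hrs")  # 38 hrs left for the student to plan
```

Any real schedule will differ; the point is that the arithmetic, not the adult, shows the student how much time is theirs to plan.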


It becomes evident during this task that college success is highly dependent on the use of the white space. Academic coaching has become a popular and sought-after experience. In fact, embracing a coaching experience correlates with higher GPA, retention, and success for students (Capstick et al., 2019). While academic coaching has the potential to offset executive function challenges and is excellent to have available, ultimately the goal is internalization of the metacognitive skills that support more independent and effective executive function. Consequently, the coaching model should focus on internalization as the goal.

Executive function skills are essential to sustain motivation and support perseverance in academics, particularly for students with a learning difference. If executive function skills are challenged and the student does not possess adequate focus, stamina, and organization, academic performance can suffer. This increases the risk of poor grades and low self-efficacy and can compromise the completion of academic tasks. Metacognition facilitates success by promoting self-awareness of one’s executive skill profile of strengths and challenges, and then by using that awareness to promote self-monitoring and checking in on one’s task management.

Making time tangible is a powerful strategy for managing executive function symptoms. Metacognitive reflection on the college schedule is a powerful tool to support college students, who are now in the driver’s seat of managing time rather than passengers while others manage it for them. The internalization of this skill will be essential to the successful navigation of the ‘white space.’

This added layer of independence and competence will lead to a position of empowerment in the transition to college and be a skill set necessary for career readiness.

References

Capstick, M. K., Harrell-Williams, L. M., Cockrum, C. D., et al. (2019). Exploring the effectiveness of academic coaching for academically at-risk college students. Innovative Higher Education, 44, 219–231. https://doi.org/10.1007/s10755-019-9459-1

Mohammadi, M.-R., Zarafshan, H., Khaleghi, A., et al. (2021). Prevalence of ADHD and its comorbidities in a population-based sample. Journal of Attention Disorders, 25(8), 1058–1067. https://doi.org/10.1177/1087054719886372


The Deliberate Educator and Metacognition: Is there a fit?

by Dr. Kim A. Hosler, Director of Instructional Design, United States Air Force Academy

What struck me…

A few days ago, a colleague and I were talking about what it means to be a deliberate educator. As I was thinking about what that meant, it struck me that to be a deliberate and purposeful educator, one must also be metacognitive about what they are doing and why. Can we say being a deliberate educator is also being a metacognitive educator? This notion gave me pause.

[Flow chart diagram listing three elements of the metacognitive instructor: reflective, deliberate, self-regulates.]

At times we may have a tendency to teach the way we were taught, or in a way that feels right to us. It is possible that an approach that is comfortable for us could lead to effective instruction, but shouldn’t a deliberate educator’s approach to teaching be questioned and explored? Deliberate instructors take time to choose materials, plan course content and learning activities, and respond thoughtfully to learners, all with intentionality.

Teaching deliberately means that as instructors we are thoughtful, purposeful, and studied about what we do in our classes. It means we put a sustained effort into improving our performance and enriching the learning experiences of our students. According to the McRel Organization (2017), “Being intentional means that teachers know and understand why they are doing what they are doing in the classroom to coach their students to deeper understanding and knowledge.”

Trede and McEwen (2016) talked about a pedagogy of deliberateness, stating that “beyond praxis, the pedagogy of deliberateness is also about knowing when to and when not to act and to challenge existing ways of doing, saying, relating and knowing” (p. 22). They further explained that a “deliberate professional has to be a thinker and a doer, where the thinking informs the doing and the doing informs the thinking. In that sense, the doing is as much a source for learning as the knowing and thinking” (p. 7). This claim speaks directly to critical elements of metacognition, such as awareness, reflection, cognitive monitoring, and improvement. Metacognition is generally summarized as control of one’s cognitive skills, which involves planning, monitoring, and evaluating and then modifying one’s approach as needed to ensure student learning.

Where does metacognition fit in? Answer: Everywhere.

Being intentional and purposeful about my course design and teaching presents only part of the picture. Without thoughtful reflection, are we truly being deliberate and metacognitive? Schaffer (2019) reminded us that a metacognitive instructor “asks why they are proceeding in a particular manner” and then uses that reflective awareness to guide final decisions and actions. This supports the notion that being deliberate necessitates asking reflective, self-regulating questions regarding what we are being deliberate about. Specific questions might include:

  • Have I thought through the purpose of the learning activity(ies) I have students completing?
  • Can I explain the why of this activity to them?
  • Have I taken time to reflect on and note what went well with the learning activity and what I could have done better?
  • Have I considered why I am giving students a quiz over the material rather than a short essay? What are the consequences if I don’t give them a quiz or essay?
  • In my XYZ lesson, did I relate that content to previous lessons clearly?
  • What points of confusion did I observe during class? Why do I think some learners became confused?
  • What did I do to make this lesson engaging and interesting? Was it effective?

While I am deliberate and purposeful in my teaching and course design, I find I skimp on the reflection part and avoid asking myself the hard questions. Why, I wonder, am I not taking time to reflect? Do I think I intuitively “get it” and that “it” is correct or the best way? Do I think that being a deliberate educator is enough (no reflection necessary)? Additionally, when I more closely consider what metacognition means, I realize I am missing the self-regulation component, the intentional changes I may need to make after the lesson or course. Reflecting and noting my observations and ideas coupled with deliberate action to improve (self-regulating) will result in my becoming a more effective metacognitive instructor.

Meaningful reflection involves the conscious consideration of one’s beliefs and actions for the purposes of learning and improvement. To reflect, I need to slow down, tolerate the messiness and ambiguity reflection may bring, along with feelings of discomfort, vulnerability, and defensiveness. Without reflection, how do I know what to improve and what needs to be changed to better support student learning?


To help me get started, Porter (2017) offered the following suggestions about reflecting:

  • Identify important questions and a self-reflection process that works for you. Is that talking to others or writing in a journal?
  • Set aside time to reflect and stick to it. If you avoid that time, ask yourself why
  • Be still with your thoughts
  • Consider multiple perspectives
  • Start small: set aside 10-minute blocks of time to reflect, especially after an event or class while ideas and observations are fresh

A deliberate educator considers teaching a purposeful act that can benefit from reflection, analysis, an intentional approach, and action. When we are deliberate in our teaching, we know where we are going, how to get there, and the why behind what we are doing. This deliberate process involves taking time for reflection: reflection in planning, in asking the hard questions, and in monitoring our instructional practice. Monitoring our instructional practice and making the changes realized through reflection moves one from being a deliberate instructor to becoming a metacognitive instructor. Thus, being a deliberate educator is part of being a metacognitive instructor; however, as Scharff (2015) noted, metacognitive instructors also need to make intentional changes based on their reflections and situational awareness.

Please excuse me now, as I want to reflect on what I’ve just written and perhaps make intentional changes.

References

McRel Organization (2017). Intentional teaching inspires intentional learning. Retrieved from https://www.mcrel.org/intentional-teaching-inspires-intentional-learning.

Porter, J. (2017). Why you should make time for self-reflection (even if you hate doing it). Harvard Business Review. Retrieved from https://hbr.org/2017/03/why-you-should-make-time-for-self-reflection-even-if-you-hate-doing-it

Scharff, L. (2015). What Do We Mean by “Metacognitive Instruction”? Retrieved from https://www.improvewithmetacognition.com/what-do-we-mean-by-metacognitive-instruction/

Schaffer, A. (2019). Metacognitive instruction: Suggestions for faculty. Improve with Metacognition. Retrieved from https://www.improvewithmetacognition.com/metacognitive-instruction-suggestions/

Trede, F., & McEwen, C. (2016). Educating the deliberate professional: Preparing for future practice (Vol. 17). Springer. https://doi.org/10.1007/978-3-319-32958-1.


U.S. Army Cadets and Faculty Reflecting on a Metacognitive Assignment from a General Education Writing Class

by Brody Becker, Jack Curry, Charlie Gorman, Caleb Norris, J. Michael Rifenburg, and Erik Siegele

We offer an assignment from a general education writing class that invites students to hone their metacognitive knowledge by, oddly enough, writing about writing. Before we turn to this assignment, we need to detail who we are. We are a six-person author team. Five of us are first-year U.S. Army cadets. All five plan to commission into the U.S. Army following graduation. One of us is a civilian, tenured professor in the English Department.

During the Fall 2021 semester, we met in English 1102, a general education writing class offered at the University of North Georgia (UNG). Our university is a federally designated senior military college, like Texas A&M and The Citadel, tasked with educating future U.S. Army officers. Civilians also attend UNG. At our school, roughly 700 cadets learn alongside roughly 20,000 civilian undergraduate students. These details are important to what we want to describe in this post: not only a metacognitive writing assignment for this specific class but also the perspective of cadets who completed this assignment and the value of such metacognitive work for cadets. We write as a six-person team and offer collective ideas (as we do in this paragraph). However, we also value individual perspective. Author order is alphabetical and does not signal one writer contributing more than another writer.

An Overview of this Metacognitive Assignment

I (Michael) regularly teach this general education writing class. One writing assignment opened with the following prompt: “For this second paper, I invite you to reflect on a previous paper you wrote during your college or high school career. Through detailing when and where you wrote the paper, the processes you undertook to write the paper, and the feedback or grade you received on this paper, you will make a broader argument about the importance of reflecting back on writing and lessons one learns from undertaking such reflection.” This assignment is a modified version of a similar writing assignment in Wardle and Downs’s (2014) popular textbook Writing About Writing.

To prepare to write this paper, we read the “Framework for Success in Postsecondary Writing,” a national consensus document outlining, as the title suggests, a framework for students to succeed at college level writing. This document offers eight habits of mind essential for student-writers to hone: one of these habits of mind is metacognition. We also read through portions of Tanner’s (2017) “Promoting Student Metacognition.” Tanner provided a table of metacognitive questions instructors can ask students before, during, and after the course.

Students then wrote a 1,500 word essay in response to this assignment. All student co-authors for this blog post enrolled in this specific class and completed this assignment. I now turn to my co-authors, cadet Charlie Gorman and cadet Brody Becker, to hear their perspectives on this assignment.

Cadets’ Reflections

Charlie’s reflection on this metacognitive assignment

I found that this assignment was beneficial for growing as a writer. Reflecting back on activities or assignments is a great way to improve in any aspect of life. I would never have thought about writing a paper about a paper until I was given the opportunity to write this assignment. As a future leader in the military, my writing will consist of educational materials, reports, and special directions. Completing this assignment has set me up and taught me how to use past failures and successes to improve a future performance.


Brody’s reflection on this metacognitive assignment

This paper on metacognition was difficult for me because I had never done anything along these lines in a writing aspect previously. However, I soon found it to be helpful because of all the things I could learn from and look for in future writing. I had never thought about how looking back at previous writing could be helpful to me, so I always disregarded any past assignments and never thought about them again. This was a teaching moment for me, and I always take chances to learn new things. This assignment was one of the more beneficial things that I have done that I will continue to use for future assignments and will carry over to other things in life.

Why such an assignment is particularly helpful for cadets

In this section, Cadet Jack Curry considers why such a metacognitive writing assignment is particularly helpful for cadets who, after graduation, will commission as officers in the U.S. Army.

As a cadet, I see metacognition as an important step for our future progress. Being able to review and learn from our mistakes and our successes helps us become better leaders. After any exercise or training, we conduct After Action Reviews (AARs) to find out how we can improve upon or continue our training. As future officers, our job is to continue improving the skills we will use to lead future soldiers. The U.S. Army's publication Training Circular 25-20: A Leader's Guide to After Action Reviews (1993) states, "the reason we conduct AARs are in order to find candid insights into specific soldier, leader, and unit strengths and weaknesses from various perspectives, and to find feedback and insight critical to battle-focused training."

Concluding words of hope for more faculty-student partnerships

Our partnership started as a teacher-student one. Michael designed writing assignments and led classroom activities, and Charlie, Caleb, Jack, Brody, and Erik completed those assignments and activities. Near the end of the semester, our partnership shifted into one of co-authors: we wrote this blog post together over Google Docs, bounced ideas back and forth in person after class, and coordinated further over email. We use the noun partnership intentionally to signal our commitment to pedagogical partnerships, an international and interdisciplinary movement to re-see the student-faculty relationship as one in which both serve as active agents in curriculum design, implementation, and assessment (e.g., Cook-Sather et al., 2019). As readers of and contributors to Improve with Metacognition continue to explore the benefits of structured metacognitive tasks throughout higher education, we hope that undergraduate students are at the forefront of this exploration. Partnerships between faculty and students are one productive step toward ensuring that our classroom practices and processes best serve all our students.

References

Cook-Sather, A., Bahti, M., & Ntem, A. (2019). Pedagogical partnerships: A how-to guide for faculty, students, and academic developers in higher education. Elon University’s Center for Engaged Learning Open Access Book Series. Retrieved from https://www.centerforengagedlearning.org/books/pedagogical-partnerships/

Council of Writing Program Administrators et al. (2011). “Framework for Success in Postsecondary Writing.” Retrieved from http://wpacouncil.org/files/framework-for-success-postsecondary-writing.pdf.

U.S. Department of the Army. (1993). Training Circular 25-20: A Leader’s Guide to After Action Reviews. Army Publishing Directorate. Retrieved from https://armypubs.army.mil/productmaps/pubform/details.aspx?pub_id=71643

Tanner, K. D. (2012). Promoting student metacognition. CBE-Life Sciences Education, 11(2). Retrieved from https://www.lifescied.org/doi/full/10.1187/cbe.12-03-0033

Wardle, E., & Downs, D. (2014). Writing about writing: A college reader (2nd ed.). Bedford/St. Martin's.


Writing metacognitive learning objectives for metacognitive training that supports student learning

by Patrick Cunningham, Ph.D., Rose-Hulman Institute of Technology

Teaching through the COVID-19 pandemic has highlighted disparities in how students approach their learning. Some have continued to excel with hybrid and online instruction, while others, more than usual, have struggled. Compounding these struggles, these students also find themselves behind, or with notable gaps in their prerequisite knowledge, in subsequent courses. A significant component of these struggles may stem from not having developed independence as learners. Engaging students in explicit metacognitive activities directly addresses this disparity and improves their ability to overcome these struggles. Given the present challenges of living through COVID-19, this is more important now than ever. However, creating activities with a metacognitive focus is likely unfamiliar to many instructors, and there are few resources to guide their development. Here I seek to demonstrate an accessible approach, an entry point grounded in metacognition, for supporting students' growth as more skillful and independent learners.

Cognitive Learning Objectives are Just the Start

Creating explicit learning objectives is one means by which educators commonly try to support students' independence in learning. Typically, learning objectives focus on the cognitive domain, often based on Bloom's Taxonomy. The cognitive domain refers to how we think about or process information. Bloom's taxonomy for the cognitive domain comprises Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating (Krathwohl, 2002). Each of these gives an indication of how a student is expected to engage with or use the material we are teaching. For constructing learning objectives, there are lists of action verbs associated with each Bloom category.

Consider this cognitive learning objective for a computer programming course.

Students will be able to create and implement functions with inputs and an output in C++ programs to accomplish a specified task on an Arduino board with a prewired circuit.

This learning objective is specific to a lesson and targets the Apply level of Bloom's taxonomy. (The approach I am presenting could equally apply to broader course-level learning objectives, but I think the specificity here makes the example more tangible.) This objective uses good action verbs (create, implement) and has a prescribed scope and context. But is it adequate for guiding students' learning if they are struggling with it?
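
As a concrete reference point before taking up that question, here is a minimal sketch of the kind of function this objective has in mind. It is only an illustration under assumed conditions: the pin assignments, the sensor-to-interval mapping, and the name blinkInterval are my own choices, not the course's actual task.

  // Illustrative Arduino C++ sketch; the pins and the mapping are assumptions, not course materials.
  const int SENSOR_PIN = A0;   // assumed analog sensor on the prewired circuit
  const int LED_PIN = 13;      // assumed LED output

  // A function with inputs (a raw reading and a scale factor) and an output
  // (a blink interval in milliseconds): the shape of function the objective targets.
  unsigned long blinkInterval(int rawReading, float scale) {
    // Map the 0-1023 analog range onto 100-1000 ms, then apply the scale factor.
    unsigned long base = map(rawReading, 0, 1023, 100, 1000);
    return (unsigned long)(base * scale);
  }

  void setup() {
    pinMode(LED_PIN, OUTPUT);
  }

  void loop() {
    // Call the function and use its output to drive the LED on the prewired circuit.
    unsigned long interval = blinkInterval(analogRead(SENSOR_PIN), 1.5);
    digitalWrite(LED_PIN, HIGH);
    delay(interval);
    digitalWrite(LED_PIN, LOW);
    delay(interval);
  }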

Metacognitive Learning Objectives can Direct Learning Activities

silhouette shape of brain with the words "metacognitive learning objectives" inside the shape

Cognitive learning objectives point students to what they should be able to do with the information but do not usually provide guidance for how they should go about developing their ability to do so. Metacognition illuminates the path to developing our cognitive abilities. As a result, metacognitive training can support students’ attainment of cognitive learning objectives. Such training requires metacognitive learning objectives.

Metacognitive learning objectives focus on our awareness of the different ways we process information and on how we regulate and refine that processing. Metacognitive knowledge includes knowledge of how people (and we as individuals) process information, strategies for processing information and monitoring our thinking, and knowledge of the cognitive demands of specific tasks (Cunningham et al., 2017). As we engage in learning, we draw on this knowledge and regulate our thinking processes by planning our engagement, monitoring our progress and processes, adjusting or controlling our approaches, and evaluating the learning experience (Cunningham et al., 2017). Metacognitive monitoring and evaluation feed back into our metacognitive knowledge, reinforcing, revising, or adding to it.

Example Implementation of Metacognitive Learning Objectives

Considering our example cognitive learning objective, how could we focus metacognitive training to support student attainment of it? Two possibilities include 1) focusing on improving students’ metacognitive knowledge of strategies to practice and build proficiency with writing functions or 2) supporting students’ accurate self-assessment of their ability to demonstrate this skill. Instructors can use their knowledge of their students’ current strategies to decide which approach (or both) to take. For example, if it appears that most students are employing limited learning strategies, such as memorizing examples by reviewing notes and homework, I might focus on teaching students about a wider range of effective learning strategies. The associated metacognitive learning objective could be:

Students will select and implement at least two different elaborative learning strategies and provide a rationale for how they support greater fluency with functions.

The instructional module could differentiate categories of learning strategies (e.g., memorization, elaboration, and organization), demonstrate a few examples, and provide a more complete list of elaborative learning strategies (Seli & Dembo, 2019). Then students could pick one strategy to try in class and one to do as homework. If, on the other hand, it appears that most students are struggling to self-assess their level of understanding, I might focus on teaching students how to better monitor their learning. The associated metacognitive learning objective could be:

Students will compare their function written for a specific application, and completed without supports, to a model solution, using this as evidence to defend and calibrate their learning self-assessment.

Here the instructional module could be a prompt for students to create and implement a function from scratch, without using notes or previously written code. After completing their solutions, students would be given access to model solutions. In comparing their solution to the model, they could note similarities, differences, and errors. Then students could explain their self-assessment of their level of understanding to a neighbor, or in a short paragraph, using the specific comparisons as evidence. These examples are metacognitive because they require students to think intentionally about their learning, make choices about it, and articulate their rationale and their assessment of the impact on their learning. I believe it is important to be explicit with students about the metacognitive aim: to help them become more skillful learners. This promotes transfer to other learning activities within the class and to their learning in other classes.
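
To make the comparison step more concrete, a model-solution handout might mark the points students are asked to check against their own code. The sketch below is only illustrative: the debounced-button task, the pin numbers, and the name debouncedRead are my assumptions, not the module's actual materials.

  // Hypothetical model solution with the comparison points marked as comments.
  const int BUTTON_PIN = 2;   // assumed button input on the prewired circuit
  const int BUZZER_PIN = 8;   // assumed buzzer output

  // Check 1: does your function take its inputs as parameters and return a value,
  // rather than reading globals or printing the result?
  int debouncedRead(int pin, unsigned long settleMs) {
    int first = digitalRead(pin);
    delay(settleMs);             // Check 2: how does your version handle timing?
    int second = digitalRead(pin);
    // Check 3: what does your version do when the two reads disagree?
    return (first == second) ? first : LOW;
  }

  void setup() {
    pinMode(BUTTON_PIN, INPUT);
    pinMode(BUZZER_PIN, OUTPUT);
  }

  void loop() {
    // Check 4: does the caller actually use the returned value?
    digitalWrite(BUZZER_PIN, debouncedRead(BUTTON_PIN, 20));
  }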

Implementing and Supporting Your Metacognitive Outcomes

In summary, to create actionable metacognitive learning objectives, I recommend:

  • clarifying the cognitive learning objective(s) you aim to support
  • investigating and collecting evidence for what aspect(s) of learning students are struggling with
  • connecting the struggle(s) to elements of metacognition
  • drafting a metacognitive learning objective(s) that address the struggle(s)

Armed with your metacognitive learning objectives, you can then craft metacognitive training to implement and assess them. Share them with a colleague or someone from your institution's teaching and learning center to refine them further. You may also want to explore further resources on metacognition and learning, such as Nilson's (2013) Creating Self-Regulated Learners, Seli and Dembo's (2019) Motivation and Learning Strategies for College Success, and Svinicki's (2004) GAMES© survey. Or you could watch my Skillful Learning YouTube video, What is Metacognition and Why Should I Care?

If metacognition is less familiar to you, avoid feeling overwhelmed by choosing one element of metacognition at a time. For example, beyond the examples above, you could focus on metacognitive planning to support students in navigating an open-ended project. Or you could help students better articulate what it means to learn something, or experience the myth of multitasking (we are really task switchers), both of which pertain to metacognitive knowledge of how people process information. Learn about that element of metacognition, develop a metacognitive learning objective for it, create the training materials, and implement them with your students. You will be supporting your students' development as learners generally, while also promoting deeper learning of your cognitive course learning objectives. Over time, you will have developed a library of metacognitive learning objectives and training, which students could explore and self-select from based on their needs.

Acknowledgements

This blog post is based upon metacognition research supported by the National Science Foundation under Grant Nos. 1932969, 1932958, and 1932947. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

References

Cunningham, P. J., Matusovich, H. M., Hunter, D. A., Williams, S. A., & Bhaduri, S. (2017). Beginning to Understand Student Indicators of Metacognition. In the proceedings of the American Society for Engineering Education (ASEE) Annual Conference & Exposition, Columbus, OH.

Krathwohl, D. R. (2002). A revision of Bloom's taxonomy: An overview. Theory Into Practice, 41(4), 212-218.

Nilson, L. (2013). Creating self-regulated learners: Strategies to strengthen students' self-awareness and learning skills. Stylus Publishing, LLC.

Seli, H., & Dembo, M. H. (2019). Motivation and learning strategies for college success: A focus on self-regulated learning. Routledge.

Svinicki, M. D. (2004). Learning and motivation in the postsecondary classroom. Anker Publishing Company.


Using Learning Portfolios to Support Metacognition

Dr. Sarah Benes, Associate Clinical Professor, Department of Nutrition and Public Health, Merrimack College

Over the past four years, I have been exploring the concept of metacognition. In many ways, I think metacognition has been a large part of how I work as a practitioner both in my personal practice of reflection and in how I practice the art of teaching. However, it wasn’t until I switched faculty positions that I really started to dive into intentional research and practice around metacognition.

line drawing of a satchel, pen and paper inside a circle

As noted in the "Finding Your People" blog post, this was largely because I had difficulty adjusting to new students at a new school. The challenges that arose prompted me to find ways to meet the needs of my new students in order to support their growth as learners and as people. One strategy that quickly emerged as potentially helpful was metacognition.

I am the kind of teacher who likes to try things. I have done a number of different activities (both research-based and more "practice-based") over the past four years and have learned much from all of them. However, one practice in particular stands out to me as having had a significant impact on student learning and on the overall experience of the course: the use of learning portfolios. I have used similar strategies previously in both graduate and undergraduate courses, but never with an intentional focus on metacognition. The books Using Reflection and Metacognition to Improve Student Learning: Across the Disciplines, Across the Academy (New Pedagogies and Practices for Teaching in Higher Education) by Kaplan et al. (2013) and Creating Self-Regulated Learners by Nilson (2013) were resources I used (along with other research) to put the pieces together to design and develop the learning portfolio.

I primarily teach two courses: Introduction to Public Health (mostly first-year students) and Health Behavior and Promotion (mostly sophomores and juniors). Both courses serve students in the School of Health Science. I first integrated the learning portfolio into my Health Behavior and Promotion course with great success. I plan to create a learning portfolio for my Introduction to Public Health course this fall and am excited to see how it works!

Overview of the Learning Portfolio

The learning portfolio was a "deliverable" that students worked on for the whole semester. It was connected to a course "e-book" in which I introduced weekly topics and objectives, outlined the class preparation, and included prompts for the learning portfolio (more on the "e-book" below). Students kept notes, reflections, and responses to other assignments in their portfolios. To support student success, students submitted the portfolios four times over the semester (about every three weeks). Each time students submitted the portfolio, they received a grade based mainly on completeness, which I defined as the extent to which they addressed all prompts.

I should note here that not all of their reflections were necessarily connected to metacognition. However, in most sets of prompts, the majority related to metacognition. Students were asked to reflect specifically on their experiences in the course, how those experiences were impacting their learning, connections they were making to the content, their perceptions of the usefulness and applicability of the content in their lives, their use (or lack) of metacognitive and self-regulation strategies, and so on.

E-Book

One component of the learning portfolio involved responding to prompts in the “e-book”. The “e-book” included the following three “components”: 1) an introduction to the content for each week (and how it connects to previous learning), 2) guidance on what to focus on in the class preparation, and 3) metacognitive reflective questions.

The introduction to the content included connections to the learning objectives (which were also presented in the syllabus) and described why students were learning the material and how it connected to previous learning. I hoped that the introduction would help them monitor and evaluate their understanding of the course content week to week and within the broader context of the whole course.

With the class preparation guidance, I was hoping to help students develop task-oriented skills. I have often found it a challenge to get students to complete class preparation, and students have been honest in sharing that my concerns about incomplete class preparation were not unfounded. I thought that providing some guidance on what to focus on and look for might increase the number of students completing the class prep and also increase students' ability to retain the information and be ready to use the content in class. I hoped the guidance might also help them with task-oriented and evaluative skills.

While I don't have specific data about the impacts, I definitely noticed a positive difference in student participation during this semester compared to others. Students also seemed to have a stronger grasp of the content. Of course, there are many factors to which I could attribute these improvements, but my teaching itself didn't change that much, and the one variable that was definitely different was the "e-book" and learning portfolio.

The final component of the "e-book" was the set of reflective questions. Questions varied week to week. Sample questions:

  • How does what you read and watched for today connect to your prior knowledge and learning? How does it connect to the reading from Monday?
  • Review the syllabus and the assignments posted in the Assignments folder. Which assignments do you feel align with your strengths as a student? Which might be more challenging? Why? What strategies could you use to help you be successful?
  • What are 3 key points from these readings and the video that you think are important for college students to know?

Each class prep assignment had these kinds of reflective questions for students to activate and connect to prior learning, to monitor and evaluate their learning, and to help them identify their strengths and areas for improvement.

Lessons Learned

Using a learning portfolio in my course taught me many things:

  • I have learned that students communicate their thoughts, reflections, and experiences in many different ways. Some responses are brief and concise, some are more "stream of consciousness," and some are extremely thoughtful, thorough, and polished. I learned to focus more on the purpose of the activity (to think about themselves and their learning) rather than on the "quality" of their reflections. I felt that my bias about what a quality reflection "looks like" might otherwise impact students' learning and growth.
  • I experienced the value of being able to have a "dialogue" with students through the portfolio via my feedback. Sometimes the feedback was a question, my perspective, a connection to course content, etc. I saw the learning portfolio as a dialogue between me and the students more than as a gradable assignment (though assigning points helps with motivation and completion). Students' responses helped me connect with them more deeply, provide feedback to support their learning, and add perspectives we may not have been able to cover in class. I feel that I got to know students a lot better through this model and that I was able to engage differently with each student (which I don't always get to do in a course).
  • The learning portfolio was also a place where students recorded responses to in-class discussion prompts. Sometimes I would have students respond to discussion prompts before the discussion to allow them to gather their thoughts, and sometimes afterward to allow for processing time. I learned that this was a great way to receive responses from all students. I often can't get around to hearing from every student during an in-class discussion, and students don't always feel comfortable speaking up, even though they often have valuable contributions. The learning portfolio structure allowed me to "hear from" each student.
  • I learned that it takes a little work to get "buy-in" from students, which is why I spend about two weeks at the start of the semester talking about learning and metacognition. That way, students have a foundation for understanding the "why" behind the learning portfolio (and other aspects of the course). I believe the time is well spent and that the learning skills students gain from the portfolio are as important as the course content itself (for some students, maybe even more important).

Conclusion

Adding the learning portfolio to my class has been one of the more impactful strategies I have tried. It is a lot of upfront work and a decent amount of work during the semester if I respond to all students, but I saw a significant improvement in student engagement and student learning. I also felt that I connected more with students and got to know them better. I am looking forward to trying this approach with my first-year students this fall (perhaps another blog post will be in order to share how it goes)!


Meta What? Scaffolding learning for the still developing prefrontal cortex

by Kristy Forrest, M.Ed., Academic Advisor and Success Coach,
Office of Academic Support & Advising, Merrimack College

(Post #5 Integrating Metacognition into Practice Across Campus, Guest Editor Series Edited by Dr. Sarah Benes)

Metacognition is the awareness and understanding of thought itself. In practice, it involves students planning, monitoring, evaluating, and regulating their thoughts in relation to learning and problem-solving. More broadly, metacognition in college refers to higher-order thinking, "thinking about thinking," and it impacts student reflection and educational motivation.

drawing of human head silhouette with a light bulb lighting inside to represent thinking     

My team and I, representing the Office of Academic Support & Advising at Merrimack College, serve traditional college-aged students at a private, four-year Catholic institution. As advisors and success coaches, part of our role is to provide developmental skill-building workshops, programming, and courses. Together, we strive to support all aspects of student development, including metacognitive growth. We teach our students the concept of metacognition, how to apply it, and its value in achieving academic success.

Metacognition in Teaching

Our approach is interdisciplinary. Combining concepts from advising theory, pedagogy, and developmental and learning psychology, we designed a one-credit academic development course for our students on academic probation. The idea is for students to learn new academic skills that transfer to all of their coursework. As instructors, we help students understand metacognition as an intervention that triggers deeper understanding and comprehension and, most importantly, how applying metacognitive practices can help close prior gaps in their learning.

Students learn that applying metacognitive study practices develops deeper curiosity as learners and that the cycle of previewing, attending, and reviewing increases the quality of their work. Additionally, they see that reflective thinking, self-regulation, and self-discipline result in higher-quality academic performance. Once they begin to achieve success, they are more empowered and motivated to engage in work that is difficult. Moving beyond study skills to enhanced scholarly work has become our hook; we gain student buy-in much faster than when we focused only on study skills.

Nuts & Bolts of Our Course


Over 90% of the students enrolled in our one-credit course increase their GPA by an average of 1 to 1.7 points and ultimately get, and stay, off academic probation. As mentioned above, we do teach the basic mechanics, habits, and skills needed to be an effective college student, but it can be tough to get college students to see value in foundational study skills. Although we know how critical these skills are, students say that workshops on these topics often make them feel belittled. Until struggling students experience the benefits of these practices, there is quite a bit of resistance. They are, however, intrigued when we use terminology like "applying metacognitive practices for academic success and development of higher-order thinking" and "increased competencies."

In our one-credit course, my team and I combine philosophical and practical theoretical concepts such as Chickering's Seven Vectors of Student Identity; Bandura's self-efficacy; Rotter's locus of control; Dweck's growth mindset; Duckworth's grit model; and, of course, Flavell's and McGuire's models of metacognition. Students are required to engage with this scholarship and reflect on how each concept applies to them.

We begin with the concept of student identity and ask students what it means to them. We have learned that many of our students do not identify as scholarly, so their self-concept needs reframing. This is where we initiate reflective thinking. When we ask students to take the time to define and explore what it means to be a student, and to compare that with newly learned metacognitive strategies, we begin to see transformation in their approach to their learning. With greater awareness of how knowledge is acquired, of the expectation in college to move beyond memorization to analyzing and evaluating, and of how to think about thinking, our students better understand where the goal post is. They come to value becoming self-disciplined, self-regulated learners.

With this messaging, we also help them connect how their thoughts and emotions impact their behaviors and see that they are in control of their academic consequences. After establishing this new insight, we discuss locus of control, growth mindset, grit, and metacognitive study practices. Through these frameworks we also work to dispel their impostor syndrome, and slowly we see them disarm.

Metacognition in Advising & Coaching

Beyond our course, through general advising and coaching, we typically find that academic struggles are not a reflection of student capacity but rather a problem of habits and skills, or a lack of metacognition in students' practice. In one-to-one coaching meetings we provide individualized attention, using metacognitive strategies to support our students in connecting the dots in their coursework. Ideally, professors are doing this already. However, there are times when we see assignments go unquestioned by students who are simply trying to check the "done" box without understanding the why and how of the work. Basically, students sometimes fail to integrate incremental assignments with the larger concepts.

We work to develop academic skills including metacognitive strategies, so that students better comprehend their material and build competencies in their discipline. As college educators serving adolescents, we need to consider that developmentally, without a fully developed prefrontal cortex, adolescents may not have the full capacity or neuro-connectivity to put these pieces together on their own. Metacognition supports the development of this exact connectivity. 

The Missing Meta Link in Student Learning

Students frequently report that, after learning about metacognition, they now have an explanation for why they were struggling. They share that they never knew how impactful the little things are, like organizing, planning, scheduling, previewing, attending, and reviewing. Once they recognize that they previously did not exercise metacognitive skills and begin using them, students are able to see the enhancement in their academic performance. A bonus is when their defenses come down and they open up.

Metacognition is one practice that allows students to connect their choices and actions to their academic results. Referring back to developmental psychology, connecting thoughts, feelings, and behavior is still really hard for a person with a still-developing prefrontal cortex. As advisors, coaches, and instructors, shedding light on this for students is where we can make a difference. Additionally, we plant seeds for further integrated learning as our adolescents develop into emerging adults.

Big Returns of Metacognition

The returns of learning this concept are bountiful. My favorite thing about teaching students the concept of metacognition is how it can open the floodgates to their development, because it applies to every area of learning across academic programs, disciplines, professions, and careers. Knowing about metacognition is not just a college tool. It's a life tool.


Wisdom Gained from a Tree Assignment

by Dr. Anne Gatling, Associate Professor, Chair Education Department, Merrimack College

(Post #4 Integrating Metacognition into Practice Across Campus, Guest Editor Series Edited by Dr. Sarah Benes)

On the first day of class, I greet my new students with "get to know you" games before walking them through the outline of the semester. I am a science educator, and my students are juniors or graduate students in early childhood and elementary education who are preparing to teach.

The last assignment I share with my students is a tree study. Out of all of my assignments, the tree study assignment captures their attention in very different ways. Students often say: “Observe a ‘what’, for the whole semester?” They ponder this for a while. I reply, “Yes, observe a tree, any tree, at least once a month for the whole semester.”

You may be wondering what this assignment has to do with metacognition. I view the tree study as a "stepping stone" toward building metacognitive skills. Students develop self-awareness and mindfulness, both of which can contribute to metacognition. It can be helpful to give students multiple "entry points" when it comes to developing metacognition and metacognitive skills. While this may be a more "indirect" path, it can be beneficial to address self-awareness and mindfulness in their own right while recognizing their potential benefits for metacognition as well.

Tree Study Overview

Each month, all students need to do for this assignment is make a prediction about their tree and complete an additional new task along the way, such as sketching the tree, observing little signs of critters, and/or working to identify the species. Little did I know that this assignment would become much more than a simple observation. Yes, the students became more aware of their surroundings through observing their trees and more in tune with how things change over time, but more importantly, I saw my students becoming more and more aware of themselves and their environment.

Here is an example of one student's tree sketch.

a student's sketch of a large tree along with a note regarding the beauty of the day (May 1) when it was sketched.

This assignment is much different from my other assignments in that I don't require much more than taking a picture of their adopted tree once a month and making a few general observations and predictions. I try to meet the students where they are. Some dive in, and some just skip around with minimal observations. That is okay. There are far too many things that are high stakes; I just let this one be. I have honestly come to a point where I don't even want to give this assignment a grade.

What have I learned?

However, I didn't always have this perspective on the assignment. Initially, the assignment was meant to help students experience a long-term biology observation: closely investigating changes in a tree, identification, tree rubbings, height, and so on. But over the years I have come to discover that this assignment means so much more to the students, especially now with quarantines and the like.

While I initially didn’t think of this assignment in this way, I have come to realize that these students were also building an awareness of how much of their lives aren’t in the moment and are just beginning to build skills to find their place in the world. This has the potential to help them with their emotional regulation and mindfulness.

While I enjoy seeing their tree pictures, sketches, and observations throughout the semester, I have come to love their final reflections. Students each find their own way with the assignment, learning patience while waiting for a new bud or reaching out to touch a tree for the first time. Many students mention becoming more aware of, and appreciating, nature and their surroundings and noticing small changes. As I consider metacognition and its role in this assignment, I see it as a type of proto-metacognition activity.

Student Outcomes

This process of long-term observation teaches many students the importance of patience: either their tree sprouted much later than others or their predictions missed the mark. Many students also become more aware of, and gain an appreciation for, subtle changes. As one student put it, "I would never have paid any attention to the trees or thought about doing this if it were not for this assignment. I was able to observe how quickly the tree changes and how crazy it is how the trees just do that on their own."

One student named her tree, and a few students even got their friends involved in making observations. Some were able to spy critters they never knew visited their trees, via tracks and even direct observation. Many students mention looking forward to continuing to observe their tree to see how it grows and changes, and they think of a variety of ways to bring a similar type of study to their future students.

In the beginning, I set more expectations, and not every student saw such value in the assignment. Yet over time I have learned where to give and where to let go, and students seem more ready to see where this experience takes them. The final tree study reflection gives students an opportunity to consider how the tree study impacted them and their learning.

Some students have even found a deeper connection to this assignment. One student, a graduate student placed in a challenging classroom, said, “You go about your day-to-day life and never notice the intricate details that nature undergoes during the springtime. Overall, I think that this assignment forced me to take a second and look at the things that surround me every day. I had never really noticed the tree across the street. . . I like that I got to look closer at the things around me and just take a second. I love trees when I am hiking and sometimes feel like I can only get it then, but this assignment showed me that it is right out my front door always.”

Students, especially since COVID, seem to be changing how they look, slowing down in their process of observation. Maybe, by developing more self-awareness and a deeper awareness of their surroundings, this assignment can contribute to metacognition in a more indirect way, offering my students different entry points to the field.

I just assigned the fall tree study this week. I will check in each week, and yesterday I took the students to visit the school garden. There I welcomed them to taste some of its bounty and relax on the peaceful lawn under the trees. Just take time.

In closing, I feel one undergraduate truly embraced this experience in her final project. She placed this poem just above her final tree illustration slide.

Here I sit beneath a tree,
Heartbeat strong,
My soul hums free.
Angie Weiland Crosby

A special thank you to Marcia Edson and Jeff Mehigan for their design of the initial tree study.


Building Emotional Regulation and Metacognition through Academic Entrepreneurship

by Traci McCubbin, M.A., Director of the Promise Program, Merrimack College

(Post #3 Integrating Metacognition into Practice Across Campus, Guest Editor Series Edited by Dr. Sarah Benes)

I teach a required academic study skills course for undergraduate students who have been placed on academic probation. Students share a variety of reasons that have led to their academic predicament, including but not limited to underdeveloped academic and/or study skills, social and emotional difficulties, time management struggles, and economic challenges.

After digging a bit deeper with students, I found a common trend in addition to the reasons they shared: they lacked positive coping strategies for regulating their emotions. These emotions could be related to difficulties experienced both inside and outside of the classroom. For example, I had students report that they had not been able to cope with the crushing emotions of a close friendship ending. They had either stopped attending class or could not focus in class for weeks.

cartoon of guy sitting in chair and overwhelmed by negative thoughts

As you may guess, their poor academic performance was hindering their academic confidence, and their mindset was more fixed than growth-oriented. This blog post shares how I created self-regulation and metacognition development activities that parallel the steps professionals might take when creating a business plan; hence the course title, Academic Entrepreneurship.

Motivating Question: How could I even begin to teach academic strategies or have students reflect on their metacognition, if I couldn’t address their emotional state?

Drawing on Literature and Personal Experience

To begin to answer this question, I turned to the research and published work of Mary Helen Immordino-Yang (Emotions, Learning, and the Brain) and Carol Dweck (Mindset: The New Psychology of Success). Immordino-Yang's (2016) research reveals that emotions must be present for learning to occur and that strong social emotions, both positive and negative, have the power to motivate our decisions and actions, including educational decisions and actions (Immordino-Yang, 2016, pp. 107, 171). Dweck's (2006) studies consistently show the positive power of a growth mindset and the disruptive power of a fixed mindset. A growth mindset is the idea that intelligence and abilities can be developed over time with hard work and persistence, while a fixed mindset is the belief that intelligence is predetermined or set (Dweck, 2006).

Through my own reflection on my academic journey, I began to understand how my emotions both positively and negatively impacted my learning. During my middle school days, I struggled with math. My mindset was fixed, and I believed that I was not capable of being successful in this subject area. It was as if every time a new concept was taught, I could feel a metal fortress of walls close around my brain to prevent any helpful information from getting through. Despite this struggle, I did finally master fractions and some introductory algebra concepts.

As one might expect of a student with a fixed mindset, my frustrations with math and my feelings of defeat followed me from middle school to high school. My high school math teacher started our class off with a review of fractions; immediately, I felt my heart race, my palms get sweaty, and the metal walls begin to close. It was in this moment of panic that I decided to take a few deep breaths, which allowed me to gain clarity. I reminded myself that I already knew how to handle fractions and that I was capable of learning. That moment was life changing: I had adopted a growth mindset. I began to apply this strategy to my other fixed-mindset areas, including but not limited to running, science, and drumming. Over time, I began to take more advanced math courses, and my overall high school GPA began to climb. I have demonstrated both a growth and a fixed mindset in different areas of my academic, professional, and personal life. I believe the same is true for most people, as well as for my students.

My personal experiences, combined with the literature, led me to incorporate key components into my study skills course: emotional regulation practices, regular activities to incorporate mindfulness and mindset, and an overarching course theme of entrepreneurship.

Academic Entrepreneurship Class Context

I decided to provide my students with opportunities to practice coping skills for regulating their emotions, better understand their mindset, and explore the power of a growth mindset. Throughout the semester, we opened each class with a five-minute-or-less mindfulness meditation or a meditative activity such as mindfulness coloring or progressive relaxation. Students were then given time to reflect on the activity and share how they could apply the strategy in their personal lives and/or in the classroom when they felt overwhelmed or highly energized. Mindset was introduced through a series of video clips and case studies. Students were given multiple opportunities throughout the semester to reflect on their mindset and identify opportunities to challenge it.

Concurrent with the self-regulation activities, students were asked to view their academic approach through the lens of an entrepreneur to enhance their metacognitive perspective. The idea is that by building their personal academic business plan, students are empowered to take ownership of their academic experience through a series of metacognitive reflections, exploration of new study skill strategies, and opportunities to practice new and strengthen pre-existing academic skillsets. Students were asked to focus on four areas of a business plan:

  • Company Descriptions: Students create their description by engaging in activities and reflections designed to help them identify their interests, personal values, previous academic experiences, activities that bring them joy, and areas of struggle.
  • Projections: Instead of setting financial projections, students are introduced to SMART Goals and set 4-5 goals with benchmarks for tracking their progress. Students are encouraged to set 2 goals related to their academic progress, one for health and wellness, and one for professional discovery.
  • SWOT Analysis: Students work through motivational interviewing to help each other identify their strengths and successes, areas of weakness, opportunities, and threats. They are also challenged to address their weaknesses and threats by applying their strengths and resources.
  • Marketing Plan: Through a series of activities and reflections, students create a plan to sell their Academic Success Business by identifying skills that they strengthen over the semester, resources they accessed, strategies they incorporated, and how these steps translate to leadership.

Schematic with three components: 1) Fixed mindset; emotional dysregulation, 2) Practicing emotional regulation skills; identifying mindset; working toward a growth mindset, 3) Positive student development outcomes

Figure 1. Academic Entrepreneurship Course Process

Concluding Question: Was I able to help my students practice and implement coping skills for managing their emotions, take ownership of their academic experience, develop a growth mindset, and think critically about their own thinking and learning?

Yes, somewhat, and no . . . the answer is a bit more complicated and depends on the student.

Students did proactively engage in the mindfulness meditations and activities of their own accord; they always had the option to remain respectfully quiet and not participate. When prompted by an anonymous in-class poll about their recent meditative experiences, the majority of students requested that we allow for longer practices and activities. They also proactively engaged in dialogues about how they could use these techniques during study breaks, during stressful parts of a test, or when dealing with their roommates.

Students landed in very different places when it came to taking ownership of their academic experience, development of a growth mindset, and metacognitive thinking. By the end of the semester a few students had fully taken ownership of their academic experience, were thinking critically and questioning their learning approach and actions, were working towards developing a growth mindset, and could identify when a fixed mindset was starting to develop.

The majority of the students made progress in one area and less progress (or none) in the others. A few did not make progress beyond practicing their emotional regulation activities.

Though results were mixed, I still believe it is important to teach emotional regulation techniques, provide space for practice, and give students the time to explore and understand their mindset and metacognitive perspective. If students are more aware of their emotional state and able to exercise regulation strategies, they will be better equipped for reflecting on their mindset and metacognitive perspective. This understanding will help them implement a potential shift in perspective and targeted strategies for success. Development takes time and cannot always occur in the framework of a semester. I believe the seeds have been planted and can be nurtured by the student when they are ready to tend to their garden.

References

Dweck, C. S. (2006). Mindset: The new psychology of success. Random House.

Immordino-Yang, M. H. (2016). Emotions, learning, and the brain: Exploring the educational implications of affective neuroscience. W. W. Norton & Company.

Resources

TEDx Manhattan Beach. (2011). Mary Helen Immordino-Yang – Embodied Brains, Social Minds. Retrieved from https://www.youtube.com/watch?v=RViuTHBIOq8

Trevor Ragan. (2016). Growth Mindset Introduction: What it is, How it Works, and Why it Matters. Retrieved from: https://www.youtube.com/watch?v=75GFzikmRY0

Trevor Ragan. (2014). Carol Dweck – A Study on Praise and Mindsets. Retrieved from: https://www.youtube.com/watch?v=NWv1VdDeoRY#action=share


Helping students become self-directed writers

Dr. Christina Hardway, Professor, Department of Psychology, Merrimack College

(Post #2: Integrating Metacognition into Practice Across Campus, Guest Editor Series Edited by Dr. Sarah Benes)

Helping students to become self-directed learners is, arguably, one of the most important outcomes of education. Self-directed learning is proposed as a circular (and iterative) process: it involves making a plan, monitoring one's progress, and then making changes or adapting as needed. These behaviors occur within the context of one's beliefs about learning and one's abilities to succeed (see figure, adapted from Ambrose et al., 2010).

schematic showing elements of self-directed learning as adapted from Ambrose et al 2010: Assessing the assignment, Evaluating personal resources, Planning accordingly, Applying plan and monitoring progress, Reflecting and adjusting if needed

Helping students to build better metacognitive skills during their regular coursework is important (see Education Endowment Foundation, 2020). This is, perhaps, because metacognitive knowledge (i.e., cognition about cognition) is a relatively abstract concept. Learning theorists like Jean Piaget suggest that learning concrete concepts occurs before learning abstract principles. For this reason, I believe it is important to provide students with explicit tasks embedded in their courses so that they can practice these skills and build a more abstract and flexible set of metacognitive competencies.

This blog post shares activities and suggestions to help students build more metacognitive skills and become better self-directed learners as they complete a challenging, semester-long writing assignment.      

Beliefs and Assumptions

I have taught a writing intensive research methodology course for many years, and the work in this course lends itself to an embedded approach to teaching metacognitive skills. It also presents an opportunity to help students examine their implicit attitudes toward learning and writing. Students come to the classroom with ideas about themselves as writers and may labor under notions like, “I am not a good writer” or “I have to wait until the last minute to start, because that is when I do my best work.” It is within this context that teaching students explicit and concrete ways to self-regulate their learning of the writing process is helpful. Providing activities throughout the semester helps students adjust these beliefs and build better writing practices, which can help them to not only convey their ideas, but also learn from that writing process.

Additionally, the kind of writing required in research courses is often novel for undergraduate students. Many students enrolled in the course are in their second or third semester of college and have never written a long research proposal. Their assumptions about how to approach this task are, therefore, not always aligned with the requirements. Many students also experience anxiety when faced with an assignment like writing an extensive research paper for the first time. As a result, the assignment of writing a long research proposal, as they are asked to do in this course, provides an opportunity to practice the emotional regulation skills required to successfully manage their intellectual endeavors.

Activities to guide the process of self-directed learning

For each phase of this self-directed learning cycle, I include prompts to guide students to explicitly consider their (often) implicit assumptions about the way they work. Each of these activities gives students the opportunity to reflect on their understanding of the writing process and build better metacognitive skills. Sometimes, these activities are presented in a free-writing exercise, and I commonly divide students into smaller groups to discuss their responses and then report back to the group. This sharing allows students to see that their peers often experience the same struggles during the writing process, and they can offer one another advice or support.

Assessing the assignment. With the permission of previous students, I provide examples of completed work to new students, together with my own annotations, highlighting places where and how requirements were met. This gives them a concrete understanding of what to accomplish. Additionally, I provide a detailed rubric that I review with students multiple times so they can continually compare their progress with the final expectations of the assignment.

Evaluating personal resources. I prompt students to evaluate their personal resources as writers early in the course. To accomplish this, I ask them to reflect on their approach to writing by responding to questions like: "Please tell me a bit about your writing process and a few ways you would like to improve as a writer" (adapted from Dunn, 2011). This reflection invites them to step back from the immediate tasks and see their work as connected to their development as scholars, writers, and learners.

Planning. To help students make appropriate plans for completing a long, multi-step assignment, I ask them to develop a concrete work plan and to discuss their plans with others. Two kinds of conversations can facilitate this process. One set of prompts gives students a chance to make specific plans to complete their work, including questions like "Identify times you can complete this work" and "How much work will you complete at each time?" The other set of prompts is designed to scaffold their intellectual development. Through small-group conversations, students describe their research ideas to other students, with instructions like this: "Please describe your research interest. This is an opportunity to discuss your research ideas with someone else. Talking through your ideas is a good way to not only receive feedback, but also, it gives you a sense about which things are clear to you and which concepts need more clarification."

Applying & Monitoring. I also ask students to write drafts of sections of this larger paper and to visit a writing fellow in our College Writing Center to discuss them. To help students monitor their progress, I have asked them to complete reflective activities after tutorial sessions, including questions like, “Please describe what you learned about the writing process in your meeting.” and “Please describe AT LEAST three specific revisions for your paper, based on your meeting with the Writing Fellow.”

Reflecting & Adjusting. Several reflective opportunities embedded in the course help students to adjust their approach to writing.

  1. Peer review reflections: At the more immediate level, I ask students to engage in an intensive peer-review process, whereby they read each other's papers to provide specific feedback. This process of helping others to improve their writing often provokes them to reflect more broadly on the writing process. I ask students to use the paper's grading rubric, as well as a series of questions that help them think about ways to evaluate whether the paper under review meets the criteria. For example, I ask them to notice if they need to re-read a passage to understand the author's point, as this might indicate revision is warranted. After peer review, students engage in conversations about what they have learned from the process, and I also ask them to identify at least three specific changes to their papers they should focus on next. By providing this feedback, students must step back and think about what makes writing successful, and our subsequent discussions facilitate the development of metacognitive knowledge.
  2. Personal growth reflections: A second set of reflective activities was suggested by our Writing Center and is designed to help students consider the broader ways in which they have changed as writers. These include questions like "Please consider the different phases of this assignment and discuss what you have learned about writing." and "What are the ways you have improved as a writer? What are some ways that you would like to improve in the future?" This combination of fine-grained, detail-oriented and bigger-picture questions is intended to help students develop fundamental metacognitive skills as well as a more nuanced understanding of metacognition for their identity as learners and writers.

The self-directed learning cycle is a circular process whereby students bring the skills they learn in one course to their next endeavors. Through this process of sharing and reflecting, they build their metacognitive skills and become more comfortable with their inchoate ideas and compositions. Hopefully, students are then able to transfer these skills into future courses and into their lives outside of academics as well.

References

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. Jossey-Bass.

Dunn, D. S. (2011). A short guide to writing about psychology (3rd ed.). Boston: Pearson Longman.

Education Endowment Foundation (2020). Metacognition and Self-Regulated Learning: Guidance Report. Retrieved on July 7, 2021 from https://educationendowmentfoundation.org.uk/public/files/Publications/Metacognition/EEF_Metacognition_and_self-regulated_learning.pdf


Finding Your People

by Dr. Leah Poloskey, Assistant Clinical Professor, Department of Exercise and Rehabilitation Science, Merrimack College, and
Dr. Sarah Benes, Associate Clinical Professor, Department of Nutrition and Public Health, Merrimack College

(Post #1: Integrating Metacognition into Practice Across Campus, Guest Editor Series Edited by Dr. Sarah Benes)

How it all began . . .

Reflecting on the journey that led to this Guest Editor spot, a mini-series of blog posts about metacognition written with our colleagues from across campus, was a great opportunity to reconnect with the power of community in this work. And it all began with a problem . . .

We had been discussing how challenging it was to engage students in our Health Science classes (Leah teaches in the Exercise and Rehabilitation Department and Sarah teaches in the Nutrition and Public Health Department). We decided to work together to investigate more deeply (rather than just dwelling on the challenge). We applied to host a Teaching Circle, which is an informal structure at Merrimack College that allows faculty and staff to come together around common interests. Teaching Circle facilitators are awarded small stipends for their time and effort in developing and running these opportunities. We believed that the Teaching Circle structure would provide a great opportunity for us to work within existing campus initiatives to enhance collaboration and engagement with faculty and staff across campus.

Our first Teaching Circle was about student engagement. We ended up exploring mindset and the ways that mindset can impact engagement. We conducted a research study in which we developed a tool that is essentially a measure of metacognitive states (Mandeville et al., 2018). With this tool we learned how to assess a student’s self-appraisal of their learning, which provides a great opportunity to review a student’s intellectual development, mindset, and metacognition. Now we had a way to assess these constructs, but what next?

We decided to apply for another Teaching Circle, this time with a focus specifically on Metacognition. Our idea was approved, and we were able to engage an even larger group of faculty, staff, and administrators, from academic support staff to the psychology and business departments and more! Everyone in the group was interested in learning more about ways to support metacognition in our students in our various spaces. And this was the beginning of this blog post series!

What We Learned

Every meeting brought together a different group of people, depending on schedules and availability. We had core folks who came each time and a variety of others who came when they were able. Thinking about it now, we remember every meeting being exciting, dynamic, and invigorating.

We didn’t have set agendas, and we didn’t require much reading or preparation (unless people asked for items to read). We really just came together to talk and share our successes and challenges related to supporting students in developing their metacognitive skills and to brainstorm ideas to try in our own spaces. This opportunity for informal community gathering and building was a needed breath of fresh air. We always left energized for the work ahead (and we think the other participants did too!).

In fact, as a result of the Metacognition Teaching Circle, we embarked on a whole new project in which we used the MINDS survey (Mandeville et al., 2018) at the beginning of the semester and then created “low touch” interventions to support metacognition and growth mindset, depending on how students scored on the scale. From this we learned that many students are not familiar with the concepts of metacognition and mindfulness, that many appreciated the tips and strategies we sent them (and some even used them!), and that students felt more learning on these topics would be beneficial.

This then led us to another study, this time examining faculty perceptions of metacognition. We were excited about this line of inquiry because our experience suggested that folks in certain settings or with certain backgrounds are likely to be more familiar with metacognition, and that some faculty may not have the understanding or skills to teach metacognition in their courses. For faculty, understanding metacognition is important because it enables students to become flexible, self-directed learners. Teaching and supporting metacognition in the classroom is impactful: it allows students to become aware of their own thinking and to become proficient in choosing appropriate thinking strategies for different learning tasks. Unfortunately, this work did not last long due to COVID-19, but we hope to pick it back up this year because we feel it is an important area that could benefit both faculty and students.

While the research ideas and changes to practice are exciting and were impactful benefits of our Teaching Circles, one of our biggest takeaways was the reminder of how important it is to find others who are also doing the work. Sometimes on our campus, and we suspect this is the case at other institutions as well, we get siloed, and often our meetings are with the same folks about the same topics. Being able to facilitate and participate in a cross-campus initiative about a passion topic was an amazing opportunity to meet new people, make new connections, gain different perspectives, and create new ideas and strategies to try. We found many people doing great work with students across so many different departments and schools on our campus, and most importantly, we found “our people” – people you can go to when you are stuck, people you can bounce ideas off of and collaborate with . . . we found our “metacognition people” (some of them at least).

While this was not a “new” or “cutting edge” idea, coming off a year in which we had been separated (in so many ways), we were reminded of the power of connection with others to maintain and sustain ourselves as academics and as humans. We wanted to share that in this guest series, not only by showcasing some of the work our colleagues are doing but also by reminding readers to try to find your people . . . whether they are on your campus or off, whether you meet in person or virtually – or only via Tweets on Twitter . . . find the people who can help you maintain, sustain, and grow your interest, skills, passion, and joy!

We hope you enjoy reading the work of our colleagues and that it helps you on your journey.

References

Mandeville, D., Perks, L., Benes, S., & Poloskey, L. (2018). The Mindset and Intellectual Development Scale (MINDS): Metacognitive Assessment for Undergraduate Students. International Journal of Teaching and Learning in Higher Education, 30(3).