Collateral Metacognitive Damage

Why Seeing Others as “The Little Engines that Could” beats Seeing Them as “The Little Engines Who Were Unskilled and Unaware of It”

by Ed Nuhfer, Ph.D., Professor of Geology, Director of Faculty Development, and Director of Educational Assessment, California State Universities (retired)

What is Self-Assessment?

At its root, self-assessment registers as an affective feeling of confidence in one’s ability to perform in the present. We can become consciously mindful of that feeling and begin to distinguish the feeling of being informed by expertise from the feeling of being uninformed. The feeling of ability to rise in the present to a challenge is generally captured by the phrase “I think I can….” Studies indicate that we can improve our metacognitive self-assessment skill with practice.

Quantifying Self-Assessment Skill

Measuring self-assessment accuracy lies in quantifying the difference between a felt competence to perform and a measure of the actual competence demonstrated. However, what at first glance appears to be simple subtraction has proven to be a nightmarish challenge for presenting data clearly and interpreting it accurately. I speak of this “nightmare” with personal familiarity. Some colleagues and I recently summarized different kinds of self-assessments, self-assessment’s relationship to self-efficacy, the importance of self-assessment to achievement, and the complexity of interpreting self-assessment measurements (Nuhfer and others, 2016; 2017).
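As a minimal sketch of that subtraction (in Python, with hypothetical sample numbers; the sign convention is this sketch’s choice, not a specification from our papers), the arithmetic itself really is simple, and the nightmare lies downstream in presenting and interpreting the resulting numbers:

    def self_assessment_error(self_assessed_pct: float, demonstrated_pct: float) -> float:
        """Signed self-assessment error in percentage points (ppts).
        Positive values indicate overestimation; negative, underestimation."""
        return self_assessed_pct - demonstrated_pct

    self_assessment_error(75.0, 68.0)  # -> 7.0, an overestimate of 7 ppts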

Can we or can’t we do it?

The children’s story The Little Engine that Could is a well-known celebration of the power of positive self-assessment. The throbbing “I think I can, I think I can…” and the success that follows offer an uplifting view of humanity’s ability to succeed. That view is close to the traits of the “Growth Mindset” of Stanford psychologist Carol Dweck (2016). It is certainly more uplifting than an alternative title, The Little Engine that Was Unskilled and Unaware of It, which predicts a disappointing ending to “I think I can, I think I can….” The dismal idea that our possible competence is capped by what nature conferred at birth is a close analog to the traits of Dweck’s “Fixed Mindset,” which her research revealed as toxic to intellectual development.

As writers of several Improve with Metacognition blog entries have noted, “Unskilled and Unaware of It” are key words from the title of a seminal research paper (Kruger & Dunning, 1999) that offered one of the earliest credible attempts to quantify the accuracy of metacognitive self-assessment. That paper noted that some people were extremely unskilled and unaware of it. Less than a decade later, psychologists were claiming: “People are typically overly optimistic when evaluating the quality of their performance on social and intellectual tasks” (Ehrlinger and others, 2008). Today, laypersons cite the “Dunning-Kruger Effect” and often use it to label any individual or group that they dislike as “unskilled and unaware of it.” We saw the label being applied wholesale in the 2016 presidential election, not just to the candidates but also to the candidates’ supporters.

Self-assessment and vulnerability

Because self-assessment is about taking stock of ourselves rather than judging others, using the Dunning-Kruger Effect to label others is already on shaky ground. But what are the odds that those we are tempted to label as “unskilled and unaware of it” actually merit the label? While the consensus in the literature of psychology seems to indicate that they do, our investigation of the numeracy underlying that consensus indicates otherwise (Nuhfer and others, 2017).

We think that nearly two decades of studies replicated the conclusion that people are “…typically overly optimistic…” because they all relied on variants of a unique graphic introduced in the seminal 1999 paper. These graphs generate, from both actual data and random numbers, artifact patterns of the kind expected from a Dunning-Kruger Effect, and the artifacts are easily mistaken for expressions of actual human self-assessment traits.
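A minimal simulation sketch (Python; the uniform random percentages and the quartile grouping are this sketch’s assumptions, not our papers’ exact procedure) shows how that graphical convention can manufacture the pattern from pure noise:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 1154  # sample size comparable to our database

    # Nonsense data: self-assessment and test performance are independent
    # uniform random percentages, so no self-assessment signal exists.
    self_assessed = rng.uniform(0, 100, n)
    actual = rng.uniform(0, 100, n)

    # The convention at issue: group people by performance quartile, then
    # compare each quartile's mean self-assessment with its mean score.
    order = np.argsort(actual)
    for q, idx in enumerate(np.array_split(order, 4), start=1):
        print(f"Quartile {q}: mean score {actual[idx].mean():5.1f}, "
              f"mean self-assessment {self_assessed[idx].mean():5.1f}")

Every quartile’s mean self-assessment hovers near 50 while the mean scores fan out from low to high, so the bottom quartile appears to overestimate and the top quartile to underestimate: the familiar pattern, generated from data containing no self-assessment signal at all.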

After we gained some understanding of the hazards presented by the devilish nature of self-assessment measures, our quantitative results showed that people, in general, have a surprisingly good awareness of their capabilities (Nuhfer and others, 2016, 2017). About half of our studied populace of over a thousand students and faculty accurately self-assessed their performance within ±10 percentage points (ppts), and about two-thirds proved accurate within ±15 ppts. About 25% might qualify as having inadequate self-assessment skills (errors greater than ±20 ppts), but only about 5% of our academic populace might merit the label “unskilled and unaware of it” (overestimated their abilities by 30 ppts or more). The odds seem high against a randomly selected person being seriously “unskilled and unaware of it” and are very high against this label being validly applicable to a group.

Others often rise to the expectations we have of them.

Consider the collective effects when people accept beliefs that they themselves or others are “unskilled and unaware of it.” This negative perspective can predispose an organization to accept, as a given, that people are less capable than they really are. Further, those of us with power, such as instructors over students or tenured peers over untenured instructors, should become aware of a term called “gaslighting.” In gaslighting, our negatively biased actions or comments can erode the self-confidence of others who accept us as credible, trustworthy, and important to their lives. This type of influence can lead to lower performance, thus seeming to substantiate the initial negative perspective. When gaslighting is deliberate, it constitutes a form of emotional abuse.

Aren’t you CURIOUS yet?

Wondering about your self-assessment skills and how they compare with those of novices and experts? Give yourself about 45 minutes and try the self-assessment instrument used in our research at <http://tinyurl.com/metacogselfassess>. You will receive a confidential report if you furnish your email address when you complete that self-assessment.

Several of us, including our blog founder Lauren Scharff, will be presenting the findings and implications of our recent numeracy studies in August, at the Annual Meeting of the American Psychological Association in Washington DC. We hope some of our fellow bloggers will be able to join us there.

References

Dweck, C. (2016). Mindset: The New Psychology of Success. New York: Ballantine.

Ehrlinger J., Johnson, K., Banner M., Dunning, D., and Kruger, J. (2008). Why the unskilled are unaware: Further explorations of absent self-insight among the incompetent. Organizational Behavior and Human Decision Processes 105: 98–121. http://dx.doi.org/10.1016/j.obhdp.2007.05.002.

Kruger, J. and Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology 77: 1121–1134. http://dx.doi.org/10.1037/0022-3514.77.6.1121

Nuhfer, E. B., Cogan, C., Fleisher, S., Gaze, E., and Wirth, K. (2016). Random number simulations reveal how random noise affects the measurements and graphical portrayals of self-assessed competency. Numeracy 9 (1): Article 4. http://dx.doi.org/10.5038/1936-4660.9.1.4

Nuhfer, E. B., Cogan, C., Fleisher, S., Wirth, K., and Gaze, E. (2017). How random noise and a graphical convention subverted behavioral scientists’ explanations of self-assessment data: Numeracy underlies better alternatives. Numeracy 10 (1): Article 4. http://dx.doi.org/10.5038/1936-4660.10.1.4


Developing Mindfulness as a Metacognitive Skill

by Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

A simple concept for enhancing learning is to engage more of the brain in more of the students. “Interactive engagement,” “collaborative/cooperative learning,” “problem-based learning,” and an entire series of active learning pedagogies use this concept to optimize learning. Research shows that active learning works. However, while frequently espoused as “student-centered learning,” these active learning terms often serve to promote particular kinds of pedagogy as “student-centered.”

However, active learning is neither the only way to enhance learning nor usually as student-centered as advocates claim. Whether the design comes from the course instructor or involves the more recent profession of “learning designers,” the emphasis remains on pedagogy and on student learning. As such, these approaches are more focused on student learning than were older traditional methods of content delivery, but the reach to proclaim most learning-centered pedagogies as student-centered leaves a gap. Metacognition is the missing factor needed to close that gap and make learning-centered practices truly student-centered.

While pedagogy focuses on teaching, mindfulness focuses on knowing one’s present state of engagement. Mindfulness is developed by the learner from within, which makes it different from learning developed through a process designed from without. Metacognition is very student-centered, and mindfulness could be the most student-centered metacognitive skill of all.

Because mindfulness involves being aware in the present moment, it can engage more of the brain needed for awareness by enlisting the parts of the brain otherwise distracted by our usual “default mode.” Operating in default mode includes imagining conversations, playing music inside one’s head, unproductive absorption in activities in which one is not presently engaged, or composing responses to a conversation while not attending fully to hearing it.

Mindfulness receives frequent mention as a method of stress management, particularly because it enlists the parts of the brain that would otherwise be engaged in worrying or in preparing an unneeded fight-or-flight reaction. Today’s college students seem to need stress management more than ever. However, mindfulness’s value to student success extends beyond managing stress: it enhances cognitive learning by improving concentration and the ability to focus, and it improves interpersonal communication by enhancing the ability to listen.

Mindfulness has its roots in Zen meditation, which laypersons easily perceive as something esoteric, mystical, or even bordering on religion. In reality, mindfulness is none of these. It is simply the beneficial outcome of practice to develop metacognitive skill. It is simple to learn, and measurable improvements can occur in as little as six weeks.

For blog readers, an opportunity to develop mindfulness is fast approaching on September 19, 2016, when Australia’s Monash University again offers its free massive open online course (MOOC) in mindfulness. Rather than gurus dressed in costumes, the instructors are psychology professors Drs. Craig Hassed and Richard Chambers, who occasionally appear in ties and sport coats. The course is immensely practical, and the two professors are also authors of a highly rated book, Mindful Learning, which is likely of interest to all members of this particular metacognitive blogosphere. Perhaps we’ll see each other online in Australia!

**This blog contribution is adapted from “Mindfulness as a Metacognitive Skill: Educating in Fractal Patterns XLVII” by the author, forthcoming in National Teaching and Learning Forum V25 N5.


When is Metacognitive Self-Assessment Skill “Good Enough”?

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029 (with Steve Fleisher, CSU Channel Islands; Christopher Cogan, Independent Consultant; Karl Wirth, Macalester College; and Eric Gaze, Bowdoin College)

In Nuhfer, Cogan, Fleisher, Gaze, and Wirth (2016), we noted the statement by Zell and Krizan (2014, p. 111) that “…it remains unclear whether people generally perceive their skills accurately or inaccurately.” In that paper, we showed why innumeracy is a major barrier to the understanding of metacognitive self-assessment.

Another barrier to progress exists because scholars who separately attempt quantitative measures of self-assessment have no common ground from which to communicate and compare results. This occurs because there is no consensus on what constitutes “good enough” versus “woefully inadequate” metacognitive self-assessment skill. Does overestimating one’s competence by 5% justify labeling a person “overconfident”? We do not believe so. We think that a reasonable range must be exceeded before such labels should be considered to apply.

The five of us are now working on a sequel to our above Numeracy paper. In the sequel, we interpret the data taken from 1154 paired measures from a behavioral science perspective. This extends our first paper’s description of the data through graphs and numerical analyses. Because we had a database of over a thousand participants, we decided to use it to propose the first classification scheme for metacognitive self-assessment. It defines categories based on the magnitudes of self-assessment inaccuracy (Figure 1).

Figure 1. Draft of a proposed classification scheme for metacognitive self-assessment based upon magnitudes of inaccuracy of self-assessed competence, determined as the difference in percentage points (ppts) between ratings of self-assessed competence and scores from tests of actual competence, both expressed as percentages.

If you wonder where the “good” definition in Figure 1 comes from, we disclosed it on page 19 of our Numeracy paper: “We designated self-assessment accuracies within ±10% of zero as good self-assessments. We derived this designation from 69 professors self-assessing their competence, and 74% of them achieving accuracy within ±10%.”

The other breaks that designate “Adequate,” “Marginal,” “Inadequate,” and “Egregious” admittedly derive from natural breaks in measures expressed as percentages. Distributing our 1154 participants through the above categories, we found that over two-thirds had adequate self-assessment skills, a bit over 21% exhibited inadequate skills, and the remainder lay within the category of “marginal.” Fewer than 3% qualified by our definition as “unskilled and unaware of it.”
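For concreteness, here is such a scheme expressed as a function (Python). Only the ±10 ppts bound for “Good” is quoted above; the remaining breaks below are illustrative placeholders consistent with the natural percentage breaks mentioned, not necessarily the exact bounds of Figure 1:

    def classify_self_assessment(self_assessed_pct: float, test_pct: float) -> str:
        """Bin one paired measure by the magnitude of its self-assessment
        inaccuracy in ppts; all bounds except "Good" are assumed here."""
        magnitude = abs(self_assessed_pct - test_pct)
        if magnitude <= 10:
            return "Good"
        if magnitude <= 15:
            return "Adequate"
        if magnitude <= 20:
            return "Marginal"
        if magnitude < 30:
            return "Inadequate"
        return "Egregious"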

These results indicate that the popular perspectives found in web searches, which portray people in general as having grossly overinflated views of their own competence, may be incomplete and perhaps even erroneous. Other researchers are now discovering that the correlations between paired measures of self-assessed competence and actual competence are positive and significant. However, establishing the relationship between self-assessed competency and actual competency appears to require more care in taking the paired measures than many of us researchers earlier suspected.

Do the categories as defined in Figure 1 appear reasonable to other bloggers, or do these conflict with your observations? For instance, where would you place the boundary between “Adequate” and “Inadequate” self-assessment? How would you quantitatively define a person who is “unskilled and unaware of it?” How much should a person overestimate/underestimate before receiving the label of “overconfident” or “underconfident?”

If you have measurements and data, please compare your results with ours before you answer. Data or not, be sure to become familiar with the mathematical artifacts summarized in our January Numeracy paper (linked above) that were mistakenly taken for self-assessment measures in earlier peer-reviewed self-assessment literature.

Our fellow bloggers constitute some of the nation’s foremost thinkers on metacognition, and we value their feedback on how Figure 1 accords with their experiences as we work toward finalizing our sequel paper.


Quantifying Metacognition — Some Numeracy behind Self-Assessment Measures

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

Early this year, Lauren Scharff directed us to what might be one of the most influential reports on quantification of metacognition: Kruger and Dunning’s 1999 “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.” In the 16 years that have since elapsed, a popular belief sprang from that paper that became known as the “Dunning-Kruger effect.” Wikipedia describes the effect as a cognitive bias in which relatively unskilled individuals suffer from illusory superiority, mistakenly assessing their ability to be much higher than it really is. Wikipedia thus describes a true metacognitive handicap: a lack of ability to self-assess. I consider Kruger and Dunning (1999) seminal because it represents what may be the first attempt to establish a way to quantify metacognitive self-assessment. Yet, as time passes, we always learn ways to improve on any good idea.

At first, quantifying the ability to self-assess seems simple. It appears that comparing a direct measure of confidence to perform taken through one instrument with a direct measure of demonstrated competence taken through another instrument should do the job nicely. For people skillful in self-assessment, the scores on both self-assessment and performance measures should be about equal. Seriously large differences can indicate underconfidence on one hand or “illusory superiority” on the other.

The Signal and the Noise

In practice, measuring self-assessment accuracy is not nearly so simple. The instruments of social science yield data consisting of the signal, which expresses the relationship between our actual competency and our self-assessed feelings of competency, plus significant noise generated by human error and inconsistency.

In analogy, consider the signal as your favorite music on a radio station, the measuring instrument as your radio receiver, the noise as the static that intrudes on your favorite music, and the data as the actual mix of noise and signal that you hear. The radio signal may truly exist, but unless we construct suitable instruments to detect it, we will not be able to generate convincing evidence that the signal even exists. Such failures can lead to the conclusion that metacognitive self-assessment is no better than random guessing.

Your personal metacognitive skill is analogous to an ability to tune to the clearest signal possible. In this case, you are “tuning in” to yourself—to your “internal radio station”—rather than tuning the instruments that measure this signal externally. In developing self-assessment skill, you are working to attune your personal feelings of competence to reflect the clearest and most accurate self-assessment of your actual competence. Feedback from the instruments has value because it helps us see how well we have achieved the ability to self-assess accurately.

Instruments and the Data They Yield

General, global questions such as: “How would you rate your ability in math?” “How well can you express your ideas in writing?” or “How well do you understand science?” may prove to be crude, blunt self-assessment instruments. Instead of single general questions, more granular instruments like knowledge surveys that elicit multiple measures of specific information seem needed.

Because the true signal is harder to detect than often supposed, researchers need a critical mass of data to confirm the signal. Pressures to publish in academia can cause researchers to rush results from small databases obtainable in a brief time rather than spending the time, sometimes years, needed to generate a database of sufficient size to provide reproducible results.

Understanding Graphical Depictions of Data

Some graphical conventions that have become almost standard in the self-assessment literature depict ordered patterns arising from random noise. These patterns invite researchers to interpret the order as produced by the self-assessment signal. Graphing nonsense data generated from random numbers in varied graphical formats can reveal what pure randomness looks like when depicted in any graphical convention. Knowing the patterns of randomness enables acquiring the numeracy needed to understand self-assessment measurements.

Some obvious questions I am anticipating follow: (1) How do I know if my instruments are capturing mainly noise or signal? (2) How can I tell when a database (either my own or one described in a peer-reviewed publication) is of sufficient size to be reproducible? (3) What are some alternatives to single global questions? (4) What kinds of graphs portray random noise as a legitimate self-assessment signal? (5) When I see a graph in a publication, how can I tell if it is mainly noise or mainly signal? (6) What kind of correlations are reasonable to expect between self-assessed competency and actual competency?

Are There Any Answers?

Getting some answers to these meaty questions requires more than a short blog post, but some help is just a click or two away. This blog directs readers to “Random Number Simulations Reveal How Random Noise Affects the Measurements and Graphical Portrayals of Self-Assessed Competency” (Numeracy, January 2016), with acknowledgments to my co-authors Christopher Cogan, Steven Fleisher, Eric Gaze, and Karl Wirth for their infinite patience with me on this project. Numeracy is an open-access journal, and you can download the paper for free. Readers will likely see the self-assessment literature in a different way after reading the article.


Metacognition in Psychomotor Development and Positive Error Cultures

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

All of us experience the “tip of the tongue” phenomenon. This state occurs when we truly do know something, such as the name of a person, but we cannot remember the person’s name at a given moment. The feeling that we do know is a form of metacognitive awareness that confirms the existence of a real neural network appropriate to the challenge. It is also an accurate knowing that carries confidence that we can indeed retrieve the name given the right memory trigger.

In “thinking about thinking,” some awareness of the connection between our psychomotor domain and our efforts to learn can be useful. The next time you encounter a tip-of-the-tongue moment, try clenching your left hand. Ruth Propper and colleagues found that left-hand clenching activates the right hemisphere of the brain and can enhance recall. When learning names, clenching the right hand activates the left hemisphere and can enhance encoding (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0062474). Not all connections between the psychomotor domain and intellectual development are this direct, but it is very useful to connect efforts to develop intellectually with established ways that promote psychomotor development.

Young people are active, so many things that excite them to initiate their learning have a heavy emphasis on psychomotor development. Examples are surfing, snowboarding, dance, tennis, martial arts, yoga, or a team sport. We can also include the hand-eye coordination and learning patterns involved in many addictive video games as heavy on kinesthetic learning, even though these do not offer health benefits of endurance, strength, flexibility, balance, etc. It is rare that anyone who commits to learning any of these fails to achieve measurably increased proficiency.

K-12 teacher Larry Ferlazzo uses the act of missing a wastebasket with a paper wad to help students understand how to value error and use it to inform strategies for intellectual development (http://larryferlazzo.edublogs.org/2011/10/31/an-effective-five-minute-lesson-on-metacognition). His students begin to recognize how the transfer of practices that they already accept as valid from their experiences may likely improve their mastery in less familiar challenges during intellectual development.

College teachers also know that the most powerful paths to high-level thinking engage the psychomotor domain. Visualization that involves explaining to oneself by diagram and developing images of the knowledge engages psychomotor skills. Likewise, writing engages the psychomotor domain in developing text, in tracking and explaining reasoning, and in revising the work (Nuhfer, 2009, 2010a, b).

Students already “get” that many trips down the ski trail are needed to master snowboarding; they may not “get” that writing many evaluative argument papers is necessary to master critical thinking. In the former, they learn from their most serious error and focus on correcting it first. They correctly surmise that the focused effort to correct one troublesome issue will be beneficial. In efforts to develop intellectually, students deprived of metacognitive training may not be able to recognize or prioritize their most serious errors. This state deprives them of awareness needed to do better on subsequent challenges.

It is important for educators to recognize how particular cultures engage with error. Author and psychologist Gerd Gigerenzer (2014), Director of the Max Planck Institute for Human Development and the Harding Center for Risk Literacy, contrasts positive and negative error cultures. A positive error culture promotes recognition and understanding of error. Its members discuss error openly, and sharing experienced error is valued as a way to learn. This culture nurtures a growth mindset in which participants speak metacognitively to themselves in terms of: “Not yet… change this… better next time.” Gigerenzer cites aviation as a positive error culture of learning that has managed to reduce plane crashes to one in ten million flights. Interestingly, the cultures of surfing, snowboarding, dance, tennis, martial arts, and yoga all promote development through positive error cultures. Positive error cultures make development through practice productive and emotionally safe.

Gigerenzer cites the American system of medical practice as one example of a negative error culture, wherein systems of reporting, discussing, and learning from serious errors are nearly nonexistent. Contrast aviation safety with the World Health Organization report that technologically advanced hospitals harm about 10% of their patients. James (2013) deduced that hospital error likely causes over 400,000 deaths annually (http://journals.lww.com/journalpatientsafety/Fulltext/2013/09000/A_New,_Evidence_based_Estimate_of_Patient_Harms.2.aspx). Negative error cultures make it unsafe to discuss or to admit error; therefore, they are ineffective learning organizations. In negative error cultures, error discovery results in punishment. Negative error cultures nurture fear and humiliation and thereby make learning unsafe. Error there delivers the metacognitive declaration, “I failed.”

We should consider in what ways our actions in higher education support positive or negative error cultures and what kinds of metacognitive conversations we nurture in participants (colleagues, students) of the culture. We can often improve intellectual development through understanding how positive error cultures promote psychomotor development.

 

References

Gigerenzer, G. (2014) Risk Savvy: How to Make Good Decisions. New York New York: Penguin.

Nuhfer, E.B. (2009). “A Fractal Thinker Designs Deep Learning Exercises: Learning through Languaging. Educating in Fractal Patterns XXVIII, Part 2.” The National Teaching & Learning Forum 19 (1): 8-11.

Nuhfer, E.B. (2010a). “A Fractal Thinker Designs Deep Learning Exercises: Acts of Writing as ‘Gully Washers.’ Educating in Fractal Patterns XXVIII, Part 3.” The National Teaching & Learning Forum 19 (3): 8-11.

Nuhfer, E.B. (2010b). “A Fractal Thinker Designs Deep Learning Exercises: Metacognitive Reflection with a Rubric Wrap-Up. Educating in Fractal Patterns XXVIII, Part 4.” The National Teaching & Learning Forum 19 (4): 8-11.


Developing Metacognitive Literacy through Role Play: Edward De Bono’s Six Thinking Hats

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

In recent posts on the “Improve with Metacognition” blog, we gained some rich contributions relevant to teaching metacognition across all disciplines. Scharff (2015) offered a worthy definition of metacognition as “the intentional and ongoing interaction between awareness and self-regulation.” Lauren Scharff’s definition references intentionality, which John Flavell, the founding architect of metacognitive theory, perceived as essential to doing metacognition. Actions that arise from intentional thinking are deliberate, informed, and goal-directed (see http://www.lifecircles-inc.com/Learningtheories/constructivism/flavell.html).

Dr. Edward de Bono created Six Thinking Hats as a training framework for thinking. De Bono’s hats assign six distinct modes of thinking. Each role is so simple and clear that the thinker can easily monitor whether she or he is engaged in the mode of thinking assigned by the role. Further, communicating that thinking through expressions and arguments to others familiar with the roles allows a listener to correctly assess the mode of thinking of the speaker or writer. Successful training eventually vests participants with the ability to shift comfortably among all six modes as a way to understand an open-ended problem from a well-rounded perspective before committing to a decision. Both training and application constitute role-play, in which each participant must, for a time, assume and adhere to the mode of thinking represented by each particular hat. During training, the participant experiences playing all six roles.

Six Thinking Hats: Summary of the Roles

The White Hat role offers the facts. It is neutral, objective and practical. It provides an inventory of the best information known without advocating for solutions or positions.

The Yellow Hat employs a sunny, positive affect to advocate for a particular position/action but always justifies the proposed action with supporting evidence. In short, this hat advocates for taking informed action.

The Black Hat employs a cautious and at times negative role in order to challenge proposed positions and actions, but this role also requires the challenging argument to be supported by evidence. This hat seeks to generate evidence-based explanations for why certain proposals may not work or may prove counter-productive.

The Red Hat’s role promotes expression of felt emotion—positive, negative, or apathetic—without any need to justify the expressed position with evidence. Red Hat thinking runs counter to the critical thinking promoted in higher education. However, refusing to allow voice to Red Hat thinking translates into losing awareness that such thinking exists, which may ultimately undermine an evidence-based decision. De Bono recognized a sobering reality: citizens often make choices and take actions based upon affective feelings rather than upon good use of evidence.

The Green Hat role is provocative in that it questions assumptions and strives to promote creative thinking that leads to unprecedented ideas or possibly redefines the challenge in a new way. Because participants recognize that each presenter is playing a role, the structure encourages creativity in the roles of all hats. It enables a presenter to stretch and offer an idea or perspective that he or she might feel too inhibited to offer if trepidation exists about being judged personally for doing so.

The Blue Hat is the control hat. It is reflective and introspective as it looks to ensure that the energy and contributions of all of the other hats are indeed enlisted in addressing a challenge. It synthesizes the awareness that grows from discussions and, in a group, is the hat that is charged with summarizing progress for other participants. When used by an individual working alone, assuming the role of the Blue Hat offers a check on whether the individual has actually employed the modes of all the hats in order to understand a challenge well.

Six Thinking Hats exercises are overtly metacognitive—intentional, deliberate, and goal-directed. One must remain focused on the role and on the objective of contributing to meeting the challenge by advocating from thinking in that role. Those who have experienced training know how difficult it can be to thoughtfully argue for a position that one dislikes, and how easily one can slip out of the assigned role into either one’s usual style of thinking and communicating or into advocating for one’s favored position.

Classroom use can take varied forms, and the most useful props are a one-page handout that concisely explains each role and six hats in the appropriate colors. Two of several formats that I have used follow.

(1) The class can observe a panel of six engage a particular challenge, with each member on the panel wearing the assigned hat role and contributing to discussing the challenge in that role. After the six participants have each contributed, they pass the hat one person clockwise and repeat the process until every person has assumed all six roles. Instructors, from time to time, pause the discussion and invite the observers to assume each of the roles in sequence.

(2) One can arrange the class into a large circle and toss a single hat into the center of the circle. Every person must mindfully assume the role of that hat and contribute. The instructor can serve the blue-hat role as a recorder at the whiteboard and keep a log of poignant contributions that emerge during the role-plays. The process continues until all in the class have experienced the six roles. In follow-up assignments, self-reflection exercises should require students to analyze a particular part of the assignment regarding the dominant kind of “colored hat” thinking in which they engaged.

I first learned about Six Thinking Hats from a geology professor at Colorado School of Mines, who had learned it from Ruth Streveler, CSM’s faculty developer. The professor used it to good advantage to address open-ended case studies, such as deciding whether to permit a mine in a pristine mountain area or to develop a needed landfill near Boulder, Colorado. Subsequently, I have used it to good advantage in many classes and faculty development workshops.

When one develops the ability to use all six hats well, one actually enters the higher stages of adult developmental thinking models. All involve obtaining relevant evidence, weighing contradictory evidence, addressing affective influences, developing empathy with others’ oppositional viewpoints, and understanding the influences of one’s own bias and feelings on a decision (Nuhfer and Pavelich, 2001, mapped Six Thinking Hats onto several developmental models: ModelsAdultThinkingmetacog).

We can become aware of metacognition by reading about it, but we only become literate about metacognition through experiences gained by consciously applying it. Draeger (2015) offered thoughts that expanded our thinking about Scharff’s definition by suggesting that advantages can come from embracing metacognition as a vague concept. The flexibility gained by practicing applications on diverse cases allows us to appreciate the plastic, complex nature of metacognition as we stretch to think well while engaging challenges.

Six Thinking Hats offers a flexible approach to engaging nearly any kind of open-ended, real-life challenge while mindfully developing metacognitive awareness and skill. After experiencing such an exercise, one can return to a definition of metacognition, like Lauren Scharff’s, and find deeper meanings within it that were likely not apparent on initial exposure.

Reference

Nuhfer, E. and Pavelich, M. (2001). Levels of thinking and educational outcomes. National Teaching and Learning Forum 11 (1): 9-11.


Self-assessment and the Affective Quality of Metacognition: Part 2 of 2

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

In Part 1, we noted that knowledge surveys ask individuals to rate their present ability to meet each of about one hundred to two hundred challenges forthcoming in a course. An example can reveal how the writing of knowledge survey items is similar to the authoring of assessable Student Learning Outcomes (SLOs). A knowledge survey item example is:

I can employ examples to illustrate key differences between the ways of knowing of science and of technology.

In contrast, SLOs are written to be preceded by the phrase “Students will be able to….” Further, knowledge survey items always solicit engaged responses that are observable. Well-written knowledge survey items exhibit two parts: one affective, the other cognitive. The cognitive portion communicates the nature of an observable challenge, and the affective component solicits expression of felt confidence in the claim “I can….” To be meaningful, items must make the nature of the challenge explicit to readers. Broad statements such as “I understand science” or “I can think logically” are not sufficiently explicit. Each response to a knowledge survey item offers a metacognitive self-assessment expressed as an affective feeling of self-assessed competency specific to the cognitive challenge delivered by the item.

Self-Assessed Competency and Direct Measures of Competency

Three competing hypotheses exist regarding the relationship of self-assessed competency to actual performance. One asserts that self-assessed competency is nothing more than random “noise” (https://www.koriosbook.com/read-file/using-student-learning-as-a-measure-of-quality-in-hcm-strategists-pdf-3082500/; http://stephenporter.org/surveys/Self%20reported%20learning%20gains%20ResHE%202013.pdf). The other two allow that self-assessment is measurable. Of these, one maintains that, when compared with actual performance, people typically overrate their abilities and generally are “unskilled and unaware of it” (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2702783/). The other, the “blind insight” hypothesis, indicates the opposite: a positive relationship exists between confidence and judgment accuracy (http://pss.sagepub.com/content/early/2014/11/11/0956797614553944).

Suitable resolution of the three requires data acquired from paired instruments of known reliability and validity. Both instruments must be highly aligned so that they collect data addressing the same learning construct. The Science Literacy Concept Inventory (SLCI), a 25-item test administered to over 17,000 participants, produces data on competency with a Cronbach’s alpha reliability of .84 and possesses content, construct, criterion, concurrent, and discriminant validity. Participants (N = 1154) who took the SLCI also took a knowledge survey (the KS-SLCI, with a Cronbach’s alpha reliability of .93) that produced a self-assessment measure based on the identical 25 SLCI items. The two instruments are reliable and tightly aligned.

If knowledge surveys register random noise, then data furnished by human subjects will differ little from data generated with random numbers. Figure 1 reveals that data simulated from the random numbers 0, 1, and 2 yield zero reliability, whereas real data consistently show reliability measures greater than R = .9. Whatever quality knowledge surveys register, it is not “random noise.” Each person’s self-assessment score is consistent and characteristic.


Figure 1. Split-halves reliabilities of 25-item KS-SLCI knowledge surveys produced by 1154 simulated respondents answering with random numbers (left) and by 1154 actual respondents (right).
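A sketch of this random-number test (Python): the even/odd item split and the Spearman-Brown correction below are common choices for split-halves reliability and are this sketch’s assumptions rather than a claim about the study’s exact procedure:

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_items = 1154, 25

    # Simulated "knowledge survey" responses on the 0/1/2 scale: pure noise.
    random_ks = rng.integers(0, 3, size=(n_people, n_items))

    def split_half_reliability(scores: np.ndarray) -> float:
        """Correlate each person's even-item total with the odd-item total,
        then apply the Spearman-Brown correction for full test length."""
        even = scores[:, ::2].sum(axis=1)
        odd = scores[:, 1::2].sum(axis=1)
        r = np.corrcoef(even, odd)[0, 1]
        return 2 * r / (1 + r)

    print(split_half_reliability(random_ks))  # hovers near 0 for noise

Real respondents’ data, by contrast, reportedly yield values above .9 on the same computation.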

The correlation between the 1154 actual performances on the SLCI and the self-assessed competencies registered by the KS-SLCI is a highly significant r = 0.62. Of the 1154 participants, 41.1% demonstrated good self-assessment by rating their abilities within ±10% of their actual performance, 25.1% proved to be under-estimators, and 33.8% were over-estimators.

Because each of the 25 SLCI items poses challenges of varying difficulty, we could also test whether participants’ self-assessments gleaned from the knowledge survey did or did not show a relationship to the actual difficulty of items as reflected by how well participants scored on each of them. The collective self-assessments of participants revealed an almost uncanny ability to reflect the actual performance of the group on most of the twenty-five items (Figure 2), thus supporting the “blind insight” hypothesis. Knowledge surveys appear to register meaningful metacognitive measures, and results from reliable, aligned instruments reveal that people do generally understand their degree of competency.


Figure 2. 1154 participants’ average scores on each of 25 SLCI items correspond well (r = 0.76) to their average scores predicted by knowledge survey self-assessments.
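A sketch of the item-level comparison behind Figure 2 (Python, with hypothetical score arrays; the variable names are illustrative): average the actual scores and the self-assessed ratings item by item across participants, then correlate the two 25-element vectors:

    import numpy as np

    def item_level_correspondence(slci: np.ndarray, ks: np.ndarray) -> float:
        """slci[i, j]: participant i's score on SLCI item j (0 or 1);
        ks[i, j]: that participant's self-assessed rating for item j,
        rescaled to the same 0-1 interval. Returns the per-item correlation."""
        actual_by_item = slci.mean(axis=0)    # mean actual score per item
        predicted_by_item = ks.mean(axis=0)   # mean self-assessment per item
        return float(np.corrcoef(actual_by_item, predicted_by_item)[0, 1])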

Advice in Using Knowledge Surveys to Develop Metacognition

  • In developing competency in metadisciplinary ways of knowing, furnish a bank of numerous explicit knowledge survey items that scaffold novices into considering the criteria that experts use to distinguish a specific way of knowing from other ways of thinking.
  • Keep students in constant contact with self-assessing by redirecting them repeatedly to specific blocks of knowledge survey items relevant to tests and other evaluations and engaging them in debriefings that compare their self-assessments with performance.
  • Assign students in pairs to do short class presentations that address specific knowledge-survey items while having the class members monitor their evolving feelings of confidence to address the items.
  • Use the final minutes of the class period to enlist students in teams in creating alternative knowledge survey items that address the content covered by the day’s lesson.
  • Teach students Bloom’s Taxonomy of the Cognitive Domain (http://orgs.bloomu.edu/tale/documents/Bloomswheelforactivestudentlearning.pdf) so that they can recognize both the level of challenge and the feelings associated with constructing and addressing different levels of challenge.

Conclusion: Why use knowledge surveys?

  • Their skillful use offers students many practices in metacognitive self-assessment over the entire course.
  • They organize our courses in a way that offers full transparent disclosure.
  • They convey our expectation standards to students before a course begins.
  • They serve as an interactive study guide.
  • They can help instructors enact instructional alignment.
  • They might be the most reliable assessment measure we have.

 

 


Self-assessment and the Affective Quality of Metacognition: Part 1 of 2

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

In The Feeling of What Happens: Body and Emotion in the Making of Consciousness (1999, New York: Harcourt), Antonio Damasio distinguished two manifestations of the affective domain: emotions (the outward expression of affect that others can observe) and feelings (the internal, private experience of one’s own affect). Enacting self-assessment constitutes an internal, private, and introspective metacognitive practice.

Benjamin Bloom recognized the importance of the affective domain’s involvement in successful cognitive learning, but for a time psychologists dismissed the influence of both affect and metacognition on learning (see Damasio, 1999; Dunlosky and Metcalfe, 2009, Metacognition, Los Angeles: Sage). To avoid repeating these mistakes, we should recognize that attempts to develop students’ metacognitive proficiency without recognizing metacognition’s affective qualities are likely to be minimally effective.

In academic self-assessment, an individual must look at a cognitive challenge and accurately decide her/his capability to meet that challenge with present knowledge and resources. Such decisions do not spring only from thinking cognitively about one’s own mental processes. Affirming that “I can” or “I cannot” meet “X” (the cognitive challenge) with current knowledge and resources draws from affective feelings contributed by conscious and unconscious awareness of what is likely to be an accurate decision.

“Blind insight” (http://pss.sagepub.com/content/early/2014/11/11/0956797614553944) is a new term in the literature of metacognition. It confirms an unconscious awareness that manifests as a feeling that supports sensing the correctness of a decision. “Blind insight” and “metacognitive self-assessment” seem to overlap with one another and with Damasio’s “feelings.”

Research in medical schools confirmed that students’ self-assessment skills remained consistent throughout medical education (http://files.eric.ed.gov/fulltext/ED410296.pdf). Two hypotheses compete to explain this finding. One is that self-assessment skills are established early in life and cannot be improved in college. The other is that self-assessment skill remains fixed in post-secondary education only because it is so rarely taught or developed. The first hypothesis seems contradicted by the evidence supporting brain plasticity, constructivist theories of learning and motivation, metacognition theory, and self-efficacy theory (http://files.eric.ed.gov/fulltext/EJ815370.pdf), and by experiments that confirm self-assessment as a learnable skill that improves with training (http://psych.colorado.edu/~vanboven/teaching/p7536_heurbias/p7536_readings/kruger_dunning.pdf).

Nursing is perhaps the discipline that has most recognized the value of developing intuitive feelings informed by knowledge and experience as part of educating for professional practice.

“At the expert level, the performer no longer relies on an analytical principle (rule, guideline, maxim) to connect her/his understanding of the situation to an appropriate action. The expert nurse, with her/his enormous background of experience, has an intuitive grasp of the situation and zeros in on the accurate region of the problem without wasteful consideration of a large range of unfruitful possible problem situations. It is very frustrating to try to capture verbal descriptions of expert performance because the expert operates from a deep understanding of the situation, much like the chess master who, when asked why he made a particularly masterful move, will just say, “Because it felt right. It looked good.” (Patricia Benner, 1982, “From novice to expert.” American Journal of Nursing, v82 n3 pp 402-407)

Teaching metacognitive self-assessment should aim to improve students’ ability to recognize clearly when the quality of “feels right” accurately signals that they can meet a challenge with present abilities and resources. Developing such capacity requires practice in committing errors and learning from them through metacognitive reflection. In such practice, the value of Knowledge Surveys (see http://profcamp.tripod.com/KS.pdf and http://profcamp.tripod.com/Knipp_Knowledge_Survey.pdf) becomes apparent.

Knowledge Surveys (access tutorials for constructing knowledge surveys and downloadable examples at http://elixr.merlot.org/assessment-evaluation/knowledge-surveys/knowledge-surveys2) consist of about one hundred to two hundred questions/items relevant to course learning objectives. These ask individuals to self-assess by rating their present ability to meet each challenge on a three-point multiple-choice scale:

A. I can fully address this item now for graded test purposes.
B. I have partial knowledge that permits me to address at least 50% of this item.
C. I am not yet able to address this item adequately for graded test purposes.

and thereafter to monitor their mastery as the course unfolds.
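A minimal sketch of how such responses can be converted to a percentage comparable with test scores (Python). The full/half/zero credit weighting below is inferred from the wording of choices A-C and is this sketch’s assumption, not necessarily the published scoring convention:

    # Assumed weights inferred from the wording of choices A-C.
    WEIGHTS = {"A": 1.0, "B": 0.5, "C": 0.0}

    def knowledge_survey_score(responses: list[str]) -> float:
        """Convert a list of A/B/C responses to a 0-100 self-assessed score."""
        return 100 * sum(WEIGHTS[r] for r in responses) / len(responses)

    knowledge_survey_score(["A", "B", "C", "A"])  # -> 62.5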

In Part 2, we will examine why knowledge surveys are such powerful instruments for supporting students’ learning and metacognitive development, describe ways to properly employ knowledge surveys to induce measurable gains, and provide some surprising results obtained from pairing knowledge surveys with a standardized assessment measure.


Metacognition for Guiding Students to Awareness of Higher-level Thinking (Part 2)

by Ed Nuhfer (Contact: enuhfer@earthlink.net; 208-241-5029)

Part 1 introduced the nature of adult intellectual development in terms of the stages ascended as one becomes educated. Each stage imparts new abilities that are valuable. This Part 2 reveals why awareness of these stages is important and offers metacognitive exercises through which students can begin to engage with what should be happening to them as they become better thinkers.

 

A disturbingly tiny contingent of professors in disciplines outside adult education has read the adult developmental research and recognized the importance of Perry’s discovery. Even fewer pass on this awareness directly to their students. Thus, the recognition that the main value of a university education lies not in acquired knowledge of facts and formulae but rather in acquiring higher-level thinking abilities remains off the radars of most students.

Given what we know from this research, a potential exists for American higher education’s evolving into a class-based higher educational system, with a few institutions for the privileged supporting curricula that emphasize developing the advanced thinking needed for management and leadership, and a larger group of institutions fielding curricula emphasizing only content and skills for producing graduates destined to be managed. Until students in general (and parents) recognize how these two educational models differ in what they offer in value and advantages for life, they will fail to demand to be taught higher-order thinking. Overcoming this particular kind of ignorance is a struggle that neither individual students nor a free nation can afford to lose.

 

Teaching Metacognition: Mentoring Students to Higher Levels of Thinking

One way to win this struggle is to bring explicit awareness of what constitutes becoming well educated directly to students, particularly those not enrolled in elite, selective schools. All students should know what is happening to them, and this requires understanding the stages of adult intellectual development. The sequence in which these stages occur (see Part 1, Table 1) offers the explicit framework needed to guide students in doing beneficial “thinking about thinking.” This research-based framework offers the foundation required for understanding the value of higher-level thinking. It offers a map for the journey on which one procures specific abilities by mastering successively higher stages of adult thinking. Through learning to use this framework metacognitively, individuals can start to discover their current stage of intellectual development and determine what they need for achieving the next higher stage.

I have included two exercises that show how the research informing what we should be “thinking about” can be converted into metacognitive components of lessons. These modules have been pilot tested on students in introductory general education and critical thinking courses.

The first, “Module 12 – Events a Learner Can Expect to Experience,” uses the research that defines the Perry stages (Table 1) as a basis for authoring an exercise that guides students through key points to “think about” as they start to reflect upon their own thinking. Instructors can employ the module as an assignment or an in-class exercise, and should modify it as desired. For many students, this will serve as their first exposure to metacognition. If this is the reader’s first introduction to adult intellectual development, work through this module, ideally with a colleague on a lunch break. Start to procure some of the key resources listed in the references for your personal library.

With the exception of Perry Stages 7, 8, and 9, Module 12 largely addresses the cognitive realm. However, when intellectual development occurs successfully, affective or emotional development occurs in parallel as one advances through higher cognitive stages (see Nuhfer, 2008). Metacognition or “thinking about thinking” should extend also to a reflective “thinking about feelings.” Since the 1990s, we have learned that our feelings about our learning, the affective component of thinking, influence how well we can learn. Further, our affective development or “emotional intelligence” determines how well we can work with others by connecting with them through their feelings, which is a huge determinant of success in work and life.

The second, “Module 4—Enlisting the Affective Domain,” helps students to recognize why the feelings and emotions that occur as one transitions into higher stages are important to consider and to understand. At the higher levels of development, one may even aspire to deeply understand another by acquiring the capacity for experiencing another’s feelings (Carper, 1978; Belenky and others, 1986).

Frequent inclusion of metacognitive components in our assignments is essential for providing students with the practice needed to achieve better thinking. Guiding students in what to “think about” can help them engage the challenges that arise at the finer scales of metadisciplines, disciplines, courses, and lessons. This requires us to go beyond articulating “What should students learn, and how can we assess this?” by extending our planning to specify “What is essential for students to think about, and how can we mentor them into such thinking?”

REFERENCES CITED (additional references are provided in the two exercises furnished)

Arum, R. and Roksa, J. (2011). Academically Adrift: Limited Learning on College Campuses. Chicago, IL: University of Chicago Press.

Belenky, M.F., B.M. Clinchy, N.R. Goldberger, and J.M. Tarule. (1986) Women’s Ways of Knowing: The Development of Self, Voice, and Mind, New York: Basic Books. (Reprinted in 1997).

Carper, B. A. (1978). Fundamental patterns of knowing in nursing. Advances in Nursing Science 1 (1): 13–24.

Flavell, J. H. (1976). Metacognitive aspects of problem solving. In L. B. Resnick (Ed.), The nature of intelligence (pp. 231–235). Hillsdale, NJ: Erlbaum.

Journal of Adult Development (2004). Special volume of nine papers on the Perry legacy of cognitive development. Journal of Adult Development 11 (2): 59–161. Germantown, NY: Periodicals Service Co.

Nuhfer, E. B. (2008). The feeling of learning: Intellectual development and the affective domain: Educating in fractal patterns XXVI. National Teaching and Learning Forum 18 (1): 7–11.

Perry, W. G., Jr. (1999). Forms of Intellectual and Ethical Development in the College Years. (Reprint of the original 1968 1st edition with introduction by L. Knefelkamp). San Francisco: Jossey-Bass.


Metacognition for Guiding Students to Awareness of Higher-level Thinking (Part 1)

by Ed Nuhfer (Contact: enuhfer@earthlink.net; 208-241-5029)

When those unfamiliar with “metacognition” first learn the term, they usually hear: “Metacognition is thinking about thinking.” This is a condensation of John Flavell’s (1976) definition: “Metacognition refers to one’s knowledge concerning one’s own cognitive processes or anything related to them…” Flavell’s definition reveals that students cannot engage in metacognition until they first possess a particular kind of knowledge. This reminds us that students do not innately understand what they need to be “thinking about” in the process of “thinking about thinking.” They need explicit guidance.

When students learn in most courses, they engage in a three-component effort toward achieving an education: (1) gaining content knowledge, (2) developing skills (which are usually specific to a discipline), and (3) gaining deeper understanding of the kinds of thinking or reasoning required for mastery of the challenges at hand. The American higher educational system generally does best at helping students achieve the first two. Many students have yet to even realize how these components differ, and few ever receive any instruction on mastering Component 3. Recently, Arum and Roksa (2011) summarized the effectiveness of American undergraduate education in developing students’ capacity for thinking. The record proved dismal and revealed that allowing the first two components to push aside the third produces serious consequences.

This imbalance has persisted for decades. Students often believe that education is primarily about gaining content knowledge—that the major distinction between freshmen and seniors is “Seniors know more facts.” Those who never get past this view will likely acquire a degree without acquiring any significantly increased ability to reason.

We faculty are also products of this imbalanced system, so it is not too surprising to hear so many of us embracing “covering the material” as a primary concern when planning our courses. Truth be told, many of us have so long taught to content and to skills necessary for working within the disciplines that we are less practiced in guiding our students to be reflective on how to improve their thinking. Adding metacognitive components to our assignments and lessons can provide the explicit guidance that students need. However, authoring these components will take many of us into new territory, and we should expect our first efforts to be awkward compared to what we will be authoring after a year of practice. Yet, doing such work and seeing students grow because of our efforts is exciting and very worthwhile. Now is the time to start.

Opportunities for developing metacognitive reflection exist at scales ranging from single-lesson assignments to large-scale considerations. In my first blog for this site, I chose to start with the large-scale considerations of what constitutes development of higher-level thinking skills.

 

What Research Reveals about Adult Thinking

More than five decades have passed since William Perry distinguished nine stages of thinking produced by successful adult intellectual development (Table 1). The validity of his developmental model, in general, seems firmly established (Journal of Adult Development, 2004). Contained within this model is the story of how effective higher education improves students’ abilities to think and respond to challenges. Knowing this story enables us to be explicit in making students aware of what ought to be happening to them if higher education is actually increasing their capacity for thinking. This research enables us to guide students in what to look for as they engage in the metacognition of understanding their own intellectual development.

Enhanced capacity to think develops over spans of several years. Small but important changes produced at the scale of single quarter- or semester-long courses are normally imperceptible to students and instructors alike. Even the researchers who discovered the developmental stages passed through them as students without realizing the nature of the changes they were undergoing. For learning that occurs in the shorter period of a college course, it is easier to document measurable changes in the learning of disciplinary content and the acquisition of specific skills than it is to assess changes in thinking. Research based on longitudinal studies of interviews with students as they changed over several years finally revealed the nature of these subtle changes and the sequence in which they occur (Table 1).

 

Table 1: A Summary of Perry’s Stages of Adult Intellectual Development

Stage 1 & 2 thinkers believe that all problems have right and wrong answers, that all answers can be furnished by authority (usually the teacher), and that ambiguity is a needless nuisance that obstructs getting at right answers.
Stage 3 thinkers realize that authority is fallible and does not have good answers for all questions. Thinkers at this stage respond by concluding that all opinions are equally valid and that arguments are just about proponents’ thinking differently. Evidence to the contrary does not change this response.
Stage 4 thinkers recognize that not all challenges have right or wrong answers, but they do not yet recognize frameworks through which to resolve how evidence best supports one among several competing arguments.
Stage 5 thinkers can use evidence. They also accept that evaluations that lead to best solutions can be relative to the context of the situation within which a problem occurs.
Stage 6 thinkers appreciate ambiguity as a legitimate quality of many issues. They can use evidence to explore alternatives. They recognize that the most reasonable answers often depend upon both context and value systems.
Stages 7, 8 and 9 thinkers incorporate metacognitive reflection in their reasoning, and they increasingly perceive how their personal values act alongside context and evidence to influence chosen decisions and actions.

In Part 2 of this blog, we will provide metacognitive class exercises that help students understand what occurs during intellectual development and why they must strive for more than learning content when gaining an education.