Metacognition, the Representativeness Heuristic, and the Elusive Transfer of Learning

by Dr. Lauren Scharff, U. S. Air Force Academy*

When we instructors think about student learning, we often default to thinking about immediate learning in our courses. However, when we take a moment to reflect on our big-picture learning goals, we typically realize that we want much more than that. We want our students to engage in transfer of learning, and our hopes can be grand indeed…

  • We want our students to show long-term retention of our material so that they can use it in later courses, sometimes even beyond those in our disciplines.
  • We want our students to use what they’ve learned in our course as they go through life, helping them both in their profession and in their personal lives.

These grander learning goals often involve ways of thinking that we endeavor to develop, such as critical thinking and information literacy. And, for those of us who believe in the broad value of metacognition, we want our students to develop metacognitive skills. But, as some of us have argued elsewhere (Scharff, Draeger, Verpoorten, Devlin, Dvorakova, Lodge & Smith, 2017), metacognition might be key for the transfer of learning, not just a skill we want our students to learn and then use in our course.

Metacognition involves engaging in intentional awareness of a process and using that awareness to guide subsequent behavioral choices (self-regulation). In our 2017 paper, we argued that students don’t engage in transfer of learning because they aren’t aware of the similarities of context or process that would indicate that some sort of learning transfer would be useful or appropriate. What we didn’t explore in that paper is why that first step might be so difficult.

If we look to research in cognitive psychology, we can find a possible answer to that question – the representativeness heuristic. Heuristics are mental shortcuts based on assumptions built from prior experience. There are several different heuristics (e.g., the representativeness heuristic, the availability heuristic, and the anchoring heuristic). They allow us to respond more quickly and efficiently to the world around us. Most of the time they serve us well, but sometimes they don’t.

The representativeness heuristic occurs when we attend to obvious characteristics of some type of group (objects, people, contexts) and then use those characteristics to categorize new instances as part of that group. If obvious characteristics aren’t shared, then the new instances are categorized separately.

For example, imagine a child who is out in the countryside for the first time and sees a four-legged animal in a field. She might be familiar with dogs from her home, so she might immediately categorize the new creature as a dog based on that shared characteristic. Her parents will correct her and say, “No. Those are cows. They say moo moo. They live in fields.” The young girl next sees a horse in a field. She might proudly say, “Look, another cow!” Her patient parents will now have to add characteristics that will help her differentiate between cows and horses, and so on. At some level, however, the young girl must also learn meta-characteristics that connect all these animals as mammals: warm-blooded, furred, live-born, etc. Some of these characteristics may be less obvious from a glance across a field.

Now – how might this natural, human way-of-thinking impact transfer of learning in academics?

  • To start, what are the characteristics of academic situations that support the use of the representativeness heuristic in ways that decrease the likelihood of transfer of learning?
  • In response, how might metacognition help us encourage transfer of learning?

There are many aspects of the academic environment that might answer the first question – anything that leads us to perceive differences rather than connections. For example, math is seen as a completely different domain than literature, chemistry, or political science. The content and the terminology used by each discipline are different. The classrooms are typically in different buildings and may look very different (chemistry labs versus lecture halls or small group active learning classrooms), none of which look or feel like the physical environments in “real life” beyond academics. Thus, it’s not surprising that students do not transfer learning across classes, much less beyond classes.

In response to the second question, I believe that metacognition can help increase the transfer of learning because both mental processes rely on awareness/attention as a first step. Representativeness-based categorization depends on which characteristics are attended to. Without conscious effort, the attended characteristics are likely to be the most superficially obvious ones, which in academics tend to highlight differences rather than connections.

But, with some guidance and encouragement, other less obvious characteristics can become more salient. If these additional characteristics cross course/disciplinary/academic boundaries, then opportunities for transfer will enter awareness. The use of this awareness to guide behavior, transfer of learning in this case, is the second step in metacognition.

Therefore, there are multiple opportunities for instructors to promote learning transfer, but we might have to become more metacognitive about the process in order to do so. First, we must develop awareness of connections that will promote transfer, rather than remaining within the comfort zone of our disciplinary expertise. Then we must use that awareness and self-regulate our interactions with students to make those connections salient to them. We can further increase the likelihood of transfer behaviors by communicating their value.

We typically can’t do much about the different physical classroom environments that reinforce the distinctions between our courses and nonacademic environments. Thus, we need to look for and explicitly communicate other types of connections. We can share examples to bridge terminology differences and draw parallels across disciplinary processes.

For example, we can point out that creating hypotheses in the sciences is much like creating arguments in the humanities. These disciplinary terms sound like very different words, but both involve a similar process of thinking. Or we can point out that MLA and APA writing formats are different in the details, but both incorporate respect for citing others’ work and give guidance for content organization that makes sense for the different disciplines. These meta-characteristics unite the two formatting approaches (as well as others that students might later encounter) with a common set of higher-level goals. Without such framing, students are less likely to appreciate the need for formatting and may interpret the different styles as arbitrary busywork that doesn’t deserve much thought.

We can also explicitly share what we know about learning in general, which likewise crosses disciplinary boundaries. A human brain is involved regardless of whether it’s learning in the social sciences, the humanities, the STEM areas, or the non-academic professional world. In fact, Scharff et al. (2017) found significant positive correlations between thinking about learning transfer and both thinking about learning processes and the likelihood of using metacognitive awareness to guide practice.

Cognitive psychologists know that we can reduce errors that occur from relying on heuristics if we turn conscious attention to the processes involved and disengage from the automatic behaviors in which we tend to engage. Similarly, as part of a metacognitive endeavor, we can help our students become aware of connections rather than differences across learning domains, and encourage behaviors that promote transfer of learning.

Scharff, L., Draeger, J., Verpoorten, D., Devlin, M., Dvorakova, L., Lodge, J., & Smith, S. (2017). Exploring metacognition as support for learning transfer. Teaching and Learning Inquiry, 5(1). http://dx.doi.org/10.20343/5.1.6. A summary of this work can also be found at https://www.improvewithmetacognition.com/researching-metacognition/

* Disclaimer: The views expressed in this document are those of the author and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.


Metacognition at Goucher II: Training for Q-Tutors

by Dr. Justine Chasmar & Dr. Jennifer McCabe; Goucher College

In the first post of this series, we described various implementations of Goucher College’s metacognition-focused model called the “New 3Rs”: Relationships, Resilience, and Reflection. Here we focus on how elements of metacognition have driven the training of tutors in Goucher’s Quantitative Reasoning (QR) Center.

[Image: a faculty member and student giving a high five, from https://www.goucher.edu/explore/]

The QR Center was established in the fall of 2017 to support the development of numeracy in our students and also specifically to bolster our new data analytics general education requirement (part of the Goucher Commons Curriculum, described in depth in our first article). The QR Center started at a time of transition as Goucher shifted from a one-course quantitative reasoning requirement to a set of two required courses: foundational data analytics and data analytics within a discipline. The QR Center mission is to help students with quantitative skill and content development across all disciplines, with a focus on promoting quantitative literacy. To foster these skills, the QR Center offers programming such as appointment-based tutoring, drop-in tutoring, workshops, and academic consultations, with peers (called Q-tutors) as the primary medium of support.

Metacognition is a guiding principle for the QR Center – especially reflection and self-regulated learning. This theme is woven through each piece of QR Center programming, from a newly-developed tutor training course to the focus on academic skill-building at tutoring sessions.

To support the professional development and training of the Q-tutors, the director (co-author of this blog, Dr. Justine Chasmar) created a one-credit course required for all students new to the position. This course combines education, mathematics, quantitative reasoning, and data analytics, and focuses on pedagogy at the intersection of these realms. Because it is set primarily within the context of quantitative content, this course is more focused, and inherently more meaningful, than traditional tutor training. The course is also unique in combining practical exercises with metacognitive reflection. Individual lessons range from basic pedagogy to reviews of essential quantitative content for the tutoring position. Learning is scaffolded by supporting professional practice with continuous reflection and applications toward improved self-regulated learning – both for the tutors and for the students they will assist.

The content of each tutor preparation class meeting is sandwiched by metacognitive prompting. Before class, the Q-tutors prepare, engage, and reflect; for example, they may read a relevant piece of literature and respond to several open-ended reflective prompts about the reading (see “Suggested References” below). The synchronous tutor preparation class lesson, attended by all new Q-tutors and the director who teaches the course, involves discussion and other activities relating to the assigned reading, especially emphasizing conversation about issues or concerns the tutors are facing in their new roles. The “metacognition sandwich” is completed by a reflective post to a discussion board, where the Q-tutors respond to and build on each other’s reflections, describing what they learned that day, asking and answering questions, and elaborating on how to apply the lesson to tutoring.

In addition to these explicit reflection activities, the tutor preparation course facilitates discussion of the use and importance of self-regulated learning (SRL) strategies and behaviors. Q-tutors are provided many opportunities to reflect on their own learning. For example, they complete and discuss multiple SRL-based inventories, such as the GAMES (Svinicki, 2006) and the Index of Learning Styles Questionnaire (developed by Richard Felder and Barbara Solomon). Class lessons revolve around evidence-based learning strategies, such as self-testing, help-seeking, and techniques to transform information.

One assignment requires tutors to create and present a “study hack,” an idea adapted from a thread on a popular and supportive listserv for academic support professionals (LRNASST). The assignment, inherently reflective, allows the tutors to consider strategies they successfully utilize, summarize that information, and translate the SRL strategy into a meaningful presentation and worksheet for the tutor group. The Q-tutors present their “study hacks” during class time, with examples from past semesters ranging from mindfulness exercises to taking notes with color coding. These worksheets are also saved as a resource for students so they can learn from SRL strategies endorsed by Q-tutors.

Q-tutors are encouraged to “pay forward” their metacognitive training by focusing on SRL and reflection during their tutoring sessions. They teach study strategies such as self-testing and learning monitoring, and they support student reflection through “checking for understanding” activities at the end of each tutoring session. Tutors know that teaching study skills is one of the major priorities during tutoring sessions, and they close the loop by meeting regularly with other tutors to discuss new and useful skills they can communicate to the students they work with. Tutors also get a regular reminder about the importance of study-skill development when they read the end-of-appointment survey responses from their tutees, particularly the responses to the prompt for “study skill reviewed.”

As a summative assignment in the course, Q-tutors write a Tutoring Philosophy, similar to a teaching statement. By this time, the tutors have gained an awareness of the importance of SRL and metacognitive reflection, as seen in excerpts from sample philosophies from previous semesters:

I strive to strengthen numeracy within our tutees, rid them of their anxieties surrounding quantitative subjects, and build up their skills to become better learners.

Once the tutee gains enough trust and confidence in the material, it is essential for them to begin guiding the direction of the session toward their own learning goals.

By practicing good study habits, self-advocacy, organizational skills, and a calm demeanor when tutoring, tutees learn what it takes to be a better student.

By thinking intentionally about what it means to be an effective tutor, these students realize that they must model what they teach in a reflective, continuous mutual-learning process: “[In tutoring] my job is to identify what each person needs, use my skills to support their learning, and reflect on these interactions to improve my methods over time.”

In sum, using an intentional metacognitive lens, Q-tutor training at Goucher College supports quantitative skills and general learning strategies in the many students the QR Center reaches. Through this metacognitive cycle, the QR Center supports Goucher’s learning community in improving the Reflection component of the Goucher 3Rs.

Suggested References

Scheaffer, R. L. (2003). Statistics and quantitative literacy. Quantitative Literacy: Why Numeracy Matters for Schools and Colleges, 145-152. Retrieved from https://www.maa.org/sites/default/files/pdf/QL/pgs145_152.pdf

Siegle, D., & McCoach, D. B. (2007). Increasing student mathematics self-efficacy through teacher training. Journal of Advanced Academics, 18, 278–312. https://doi.org/10.4219/jaa-2007-353

Svinicki, M. D. (2006). Helping students do well in class: GAMES. APS Observer, 19(10). Retrieved from https://www.psychologicalscience.org/observer/helping-students-do-well-in-class-games


Williamson, G. (2015). Self-regulated learning: an overview of metacognition, motivation and behaviour. Journal of Initial Teacher Inquiry, 1, 25-27. Retrieved from http://hdl.handle.net/10092/11442


Paired Self-Assessment—Competence Measures of Academic Ranks Offer a Unique Assessment of Education

by Dr. Ed Nuhfer, California State Universities (retired)

What if you could do an assessment that simultaneously revealed the content mastery and intellectual development of students across your entire institution, without taking class time or costing your institution money? This blog offers a way to do this.

We know that metacognitive skills are tied directly to successful learning, yet metacognition is rarely taught in content courses, even though it is fairly easy to do. Self-assessment is neither the whole of metacognition nor of self-efficacy, but it is an essential component of both. Direct measures of students’ self-assessment skills are very good proxy measures for metacognitive skill and intellectual development. A school that is developing measurable self-assessment skill is likely to be developing self-efficacy and metacognition in its students.

This installment comes with lots of artwork, so enjoy the cartoons! We start with Figure 1A, which is only a drawing, not a portrayal of actual data. It depicts an “Ideal” pattern for a university educational experience in which students progress up the academic ranks and grow in content knowledge and skills (abscissa) and in metacognitive ability to self-assess (ordinate). In Figure 1B, we employ actual paired measures. Postdicted self-assessment ratings are the scores participants estimate for themselves immediately after seeing and taking a test in its entirety.


Figure 1. Academic ranks’ (freshman through professor) mean self-assessed ratings of competence (ordinate) versus actual mean scores of competence from the Science Literacy Concept Inventory or SLCI (abscissa). Figure 1A is merely a drawing that depicts the Ideal pattern. Figure 1B registers actual data from many schools collected nationally. The line slopes less steeply than in Fig. 1A and the correlation is r = .99.

The result reveals that reality differs somewhat from the ideal in Figure 1A. The actual lower-division undergraduates’ scores (Fig. 1B) do not order on the line in the expected sequence of increasing ranks. Instead, their scores are mixed among those of junior rank. We see a clear jump up in Figure 1B from this cluster to senior rank, a small jump to graduate student rank, and the expected major jump to the rank of professors. Note that Figure 1B displays means of groups, not ratings and scores of individual participants. We sorted over 5000 participants by academic rank to yield the six paired measures for the ranks in Figure 1B.

We underscore our appreciation for large databases and the power of aggregating confidence-competence paired data into groups. Employing groups attenuates noise in such data, as we described earlier (Nuhfer et al. 2016), and enables us to perceive clearly the relationship between self-assessed competence and demonstrable competence. Figure 2 employs a database of over 5000 participants but depicts them as 104 groups of 50, randomized across all institutions and drawn from within each academic rank. The figure confirms the general pattern shown in Figure 1 by showing a general upward trend from novices (freshmen and sophomores) through developing experts (juniors, seniors, and graduate students) to experts (professors), but with considerable overlap between novices and developing experts.
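
To make the noise-attenuation idea concrete, here is a minimal simulation sketch in Python. Every number in it is an assumption invented for illustration (the rank means, noise levels, and sample sizes), not the authors' analysis or the SLCI data. It shows why correlations computed on within-rank group means of 50 run higher than correlations computed on individuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rank means (in percentage points); invented for illustration only,
# not values from the SLCI database.
rank_means = {"freshman": 55, "sophomore": 57, "junior": 60,
              "senior": 66, "graduate": 70, "professor": 80}
n_per_rank = 1000   # synthetic participants per rank
group_size = 50     # group size used in Figure 2

all_scores, all_ratings = [], []
group_score_means, group_rating_means = [], []
for mu in rank_means.values():
    # Individual competence scores scatter around the rank mean ...
    scores = rng.normal(mu, 10, n_per_rank)
    # ... and postdicted self-assessments track true scores with added noise.
    ratings = scores + rng.normal(0, 15, n_per_rank)
    all_scores.append(scores)
    all_ratings.append(ratings)
    # Random within-rank groups of 50, analogous to the randomized groups of Figure 2.
    group_ids = rng.permutation(n_per_rank) // group_size
    for g in np.unique(group_ids):
        group_score_means.append(scores[group_ids == g].mean())
        group_rating_means.append(ratings[group_ids == g].mean())

all_scores = np.concatenate(all_scores)
all_ratings = np.concatenate(all_ratings)

# Individual-level correlation is attenuated by self-assessment noise;
# group means average much of that noise away, so their correlation is higher.
r_individuals = np.corrcoef(all_scores, all_ratings)[0, 1]
r_groups = np.corrcoef(group_score_means, group_rating_means)[0, 1]
print(f"individual-level r = {r_individuals:.2f}")
print(f"group-of-{group_size} r = {r_groups:.2f}")
```

Collapsing even further, to one mean per academic rank as in Figure 1B, averages away still more noise, which is consistent with the very high rank-level correlation reported there.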


Figure 2. Mean postdicted self-assessment ratings (ordinate) versus mean science literacy competency scores by academic rank.  Figure 2 comes from selecting random groups of 50 from within each academic rank and plotting paired-measures of 104 groups.

The correlation of r = .99 seen in Figure 1B has come down a bit to r = .83 in Figure 2. We can understand why this occurs by examining Figure 3 and Table 1. Figure 3 comes from our 2019 database of paired measures, which is now about four times larger than the database used in our earlier papers (Nuhfer et al. 2016, 2017); the earlier results we reported in this same kind of graph continue to be replicated here in Figure 3A. People generally appear good at self-assessment, and the figure refutes claims that most people are either “unskilled and unaware of it” or “…are typically overly optimistic when evaluating the quality of their performance….” (Ehrlinger, Johnson, Banner, Dunning, & Kruger, 2008).


Figure 3. Distributions of self-assessment accuracy for individuals (Fig. 3A) and of collective self-assessment accuracy of groups of 50 (Fig. 3B).

Note that the range of the abscissa has gone from 200 percentage points in Fig. 3A to only 20 percentage points in Fig. 3B. In groups of fifty, 81% of the groups estimate their mean scores within 3 ppts of their actual mean scores. While individuals are generally good at self-assessment, the collective self-assessment means of groups are even more accurate. Thus, the collective averages of classes on detailed course-based knowledge surveys seem to be valid assessments of the mean learning competence achieved by a class.

The larger the groups employed, the more closely the mean group self-assessment rating is likely to approximate the group’s mean competence test score (Table 1). In Table 1, reading across the three columns from left to right reveals that, as group sizes increase, a greater percentage of groups’ mean self-assessed ratings converge on their actual mean competency scores.


Table 1. Groups’ self-assessment accuracy by group size. Groups’ postdicted mean self-assessed confidence ratings (in ppts) closely approximate the groups’ actual mean demonstrated competency scores (SLCI). In group sizes of 200 participants, the mean self-assessment accuracy for every group is within ±3 ppts. To achieve such results, researchers must use aligned instruments that produce reliable data, as described in Nuhfer (2015) and Nuhfer et al. (2016).

From Table 1 and Figure 3, we can now understand how the very high correlations in Figure 1B are achievable by using sufficiently large numbers of participants in each group. Figures 3A and 3B and Table 1 employ the same database.
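
A short sketch can illustrate why this convergence is just what averaging predicts. The parameters below are assumptions for illustration (unbiased individual self-assessment error with a standard deviation of 15 ppts), not values estimated from the authors' database; under those assumptions, the fraction of groups whose mean self-assessment lands within ±3 ppts of their true mean climbs with group size, the pattern Table 1 reports.

```python
import numpy as np

rng = np.random.default_rng(1)

def fraction_within(group_size, n_groups=5000, tol=3.0, noise_sd=15.0):
    """Fraction of simulated groups whose mean self-assessment lands within
    `tol` ppts of the group's mean competence score, assuming each person's
    rating equals their true score plus zero-mean noise (sd = noise_sd ppts)."""
    # The error of a group's mean rating is the mean of group_size noise terms,
    # whose standard deviation shrinks as noise_sd / sqrt(group_size).
    group_mean_errors = rng.normal(0, noise_sd / np.sqrt(group_size), n_groups)
    return np.mean(np.abs(group_mean_errors) <= tol)

for n in (1, 10, 50, 200):
    print(f"group size {n:>3}: {fraction_within(n):.0%} of groups within ±3 ppts")
```

One caveat built into this sketch: only unbiased, person-to-person error averages away as groups grow; any systematic over- or under-confidence shared across a group would persist in its mean.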

Finally, we verified that we could achieve high correlations like those in Figure 1B within single institutions, even when we examined only the four undergraduate ranks within each. We also confirmed that the rank orderings and best-fit line slopes formed patterns that differed measurably by institution. Two examples appear in Figure 4. The ordering of the undergraduate ranks and the slope of the best-fit line in graphs such as those in Fig. 4 are surprisingly informative.


Figure 4. Institutional profiles from paired measures of undergraduate ranks. Figure 4A is from a primarily undergraduate, public institution. Figure 4B comes from a public research-intensive university. The correlations remain very high, and the best-fit line slopes and the ordering pattern of undergraduate ranks are distinctly different between the two schools. 

In general, steeply sloping best-fit lines in graphs like Figures 1B, 2, and 4A indicate that significant metacognitive growth is occurring together with the development of content expertise. In contrast, nearly horizontal best-fit lines (these do exist in our research results but are not shown here) indicate that students in such institutions are gaining content knowledge through their college experience but are not gaining metacognitive skill. We can use such information to guide the assessment stage of “closing the loop,” because it helps us take informed actions. In all cases where undergraduate ranks appear ordered out of sequence in such assessments (as in Fig. 1B and Fig. 4B), we should seek to understand why this is so.
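
As a concrete, entirely hypothetical illustration of reading these institutional profiles, the sketch below fits a best-fit line to invented rank means for two fictitious schools: one whose self-assessed ratings rise in step with competence (steep slope) and one whose competence rises while self-assessment stays flat (near-zero slope). Neither set of numbers comes from any real institution.

```python
import numpy as np

# Invented mean (competence score, self-assessed rating) pairs in ppts for the
# four undergraduate ranks (FR, SO, JR, SR); not data from any real institution.
institutions = {
    "hypothetical School A": {"score": [55, 58, 63, 68], "rating": [54, 58, 64, 69]},
    "hypothetical School B": {"score": [55, 58, 63, 68], "rating": [60, 61, 60, 61]},
}

for name, d in institutions.items():
    # Slope of the best-fit line of self-assessed rating versus competence score.
    slope, intercept = np.polyfit(d["score"], d["rating"], 1)
    r = np.corrcoef(d["score"], d["rating"])[0, 1]
    print(f"{name}: slope = {slope:.2f}, r = {r:.2f}")

# A slope near 1 (School A) suggests self-assessment skill growing in step with
# content competence; a slope near 0 (School B) suggests content gains without
# corresponding metacognitive growth.
```

In real profiles such as those in Figure 4, the same slope reading applies, with the added information of how the four undergraduate ranks order along the line.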

In Figure 4A, “School 7” appears to be doing quite well. The steeply sloping line shows clear growth between lower-division and upper-division undergraduates in both content competence and metacognitive ability. Possibly, the school might want to explore how it could extend the gains of the sophomore and senior classes. “School 3” (Fig. 4B) would probably want to steepen its best-fit line by focusing first on increasing self-assessment skill development across the undergraduate curriculum.

We recently used paired measures of competence and confidence to understand the effects of privilege on varied ethnic, gender, and sexual orientation groups within higher education. That work is scheduled for publication in Numeracy in July 2019. We are next developing a peer-reviewed journal article that uses paired self-assessment measures of groups to understand institutions’ educational impacts on students. This blog entry offers a preview of that ongoing work.

Notes. This blog follows on from earlier posts: Measuring Metacognitive Self-Assessment – Can it Help us Assess Higher-Order Thinking? and Collateral Metacognitive Damage, both by Dr. Ed Nuhfer.

The research reported in this blog distills a poster and oral presentation created by Dr. Edward Nuhfer, CSU Channel Islands & Humboldt State University (retired); Dr. Steven Fleisher, California State University Channel Islands; Rachel Watson, University of Wyoming; Kali Nicholas Moon, University of Wyoming; Dr. Karl Wirth, Macalester College; Dr. Christopher Cogan, Memorial University; Dr. Paul Walter, St. Edward’s University; Dr. Ami Wangeline, Laramie County Community College; Dr. Eric Gaze, Bowdoin College, and Dr. Rick Zechman, Humboldt State University. Nuhfer and Fleisher presented these on February 26, 2019 at the American Association of Behavioral and Social Sciences Annual Meeting in Las Vegas, Nevada. The poster and slides from the oral presentation are linked in this blog entry.