Distributed Metacognition: Insights from Machine Learning and Human Distraction

by Philip Beaman, Ph.D., University of Reading, UK

Following the success of Google’s AlphaGo programme in competition with a human expert over five games, a result previously considered beyond the capabilities of mere machines (https://deepmind.com/alpha-go), there has been much interest in machine learning. Broadly speaking, machine learning comes in two forms: supervised learning, in which the machine is trained by means of examples and the errors it makes are corrected, and unsupervised learning, in which there is no error signal to indicate previous failures. AlphaGo, as it happens, used supervised learning based upon examples of human expert-level games, and it is this type of learning which looks very much like meta-cognition, even though the meta-cognitive monitoring and correction of the machine’s performance is external to the system itself (although not necessarily to the machine which is running the system). For example: an artificial neural network (perhaps of the kind which underpins AlphaGo) is trained to output Y when presented with X by means of a programme which stores training examples – and calculates the error signal from the neural network’s first attempts – outside the neural network software itself but on the same hardware. This is of interest because it illustrates the fluid boundary between a cognitive system (the neural network implemented on computer hardware) and its environment (other programmes running on the same hardware to support the neural network), and demonstrates that metacognition, like first-order cognition, is often a form of situated activity. Here, the monitoring and the basis for correction of performance are (as in all supervised learning) external to the learning system itself.
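To make the external-supervisor arrangement concrete, here is a minimal sketch in Python (a deliberately toy one-weight “network”; nothing here is AlphaGo’s actual code or architecture). The network itself only produces outputs; a separate training loop stores the examples, computes the error signal, and pushes corrections back into the network – the monitoring sits outside the system being trained, just as described above.

```python
# Illustrative sketch only: supervised learning with the error signal
# computed *outside* the learning system itself.

import random

class TinyNetwork:
    """A one-weight 'network': all it can do is produce an output."""
    def __init__(self):
        self.w = random.uniform(-1, 1)

    def predict(self, x):
        return self.w * x

def supervised_train(net, examples, lr=0.01, epochs=200):
    """External supervisor: stores the training examples, measures the
    error on each attempt, and applies the correction to the network."""
    for _ in range(epochs):
        for x, y in examples:
            error = y - net.predict(x)   # error calculated outside the net
            net.w += lr * error * x      # correction imposed on the net

# Train the network to approximate y = 2x from examples alone.
random.seed(0)
net = TinyNetwork()
examples = [(x, 2.0 * x) for x in [1.0, 2.0, 3.0]]
supervised_train(net, examples)
print(round(net.w, 2))
```

The point of the sketch is the division of labour: `TinyNetwork` holds no record of its failures; all monitoring and correction lives in `supervised_train`, which runs on the same hardware but outside the “cognitive system” proper.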

In contrast, when psychologists talk about metacognition, we tend to assume that all the processing is going on internally (in the head), whereas in fact it is usually only partly in the head and partly in the world. This is not news to educationalists or to technologists: learners are encouraged to make effective use of external aids which help manage work and thought, but such external aids to cognition are often overlooked by psychological theories and investigations. This was not always the case. In the book “Plans and the Structure of Behavior”, which introduced the term “working memory” to psychology, Miller, Galanter and Pribram (1960) spoke of working memory as a “special state or place” used to track the execution of plans, where the place could be in the frontal lobes of the brain (a prescient suggestion for the time!) or “on a sheet of paper”. This concept, originally defined wholly functionally, has in subsequent years morphed into a cognitive structure with a specific locus, or loci, of neural activity (e.g., Baddeley, 2007; D’Esposito, 2007; Henson, 2001; Smith, 2000).

We have come across the issue of distributed metacognition in our own work on auditory distraction. For many years, our lab (along with several others) collected and reported data on the disruptive effects of noise on human cognition and performance. We carefully delineated the types of noise which cause distraction and the tasks which are most sensitive to distraction but – at least until recently – neither we nor (so far as we know) anyone else gave any thought to the meta-cognitive strategies which might be employed to reduce distraction outside the laboratory setting. Our experiments all involved standardized presentation schedules of material for later recall and imposed environmental noise (usually over headphones) which participants were told to ignore but could not avoid. The results of recent studies which both asked participants for their judgements of learning (JOLs) concerning the material and gave them the opportunity to control their own learning or recall strategy (e.g., Beaman, Hanczakowski & Jones, 2014) are of considerable interest. Theoretically, one of three things might happen: meta-cognition might not influence the ability to resist distraction in any way, meta-cognitive control strategies might ameliorate the effects of distraction, or meta-cognition might itself be affected by distraction, potentially escalating the disruptive effects. For now, let’s focus on the meta-cognitive monitoring judgements, since these need to be reasonably accurate in order for people to have any idea that distraction is happening and that counter-measures might be necessary.

One thing we found was that people’s judgements of their own learning were fairly well calibrated, with judgements of recall in the quiet and noise conditions mirroring the actual memory data. This is not a surprise, because earlier studies, including one by Ellermeier and Zimmer (1997), also showed that, when asked to judge their confidence in their memory, people are aware of when noise is likely to detract from their learning. What is of interest, though, is where this insight comes from. No feedback was given after the memory test (i.e., in neural network terms this was not supervised learning), so it isn’t that participants were able to compare their memory performance in the various conditions to the correct answers. Ellermeier and Zimmer (1997) included in their study a measure of participants’ confidence in their abilities before they ever took the test, and this measure was less well calibrated with actual performance, so this successful metacognitive monitoring does seem to depend upon recent experience with the particular distractors and the particular memory test used, rather than being drawn from general knowledge or past experience. What, then, is the source of the information used to monitor memory accuracy (and hence the effects of auditory distraction on memory)? In our studies, the same participants experienced learning trials in noise and in quiet in the same sessions, and the lists of items they were required to try to recall were always of the same set length and recalled by means of entry into a physical device (either writing or typing responses). Meta-cognitive monitoring, in other words, could be achieved in many of our experiments by learning the approximate length of the list to be recalled and comparing the physical record of the number of items recalled with this learned number on a trial-by-trial basis.
This kind of meta-cognitive monitoring is very much distributed because it relies upon the physical record of the number of items recalled on each trial to make the appropriate comparison. Is there any evidence that something like this is actually happening? An (as yet unpublished) experiment of ours provides a tantalising hint: If you ask people to write down the words they recall but give one group a standard pen to do so and another group a pen which is filled with invisible ink (so both groups are writing their recall, but only one is able to see the results) then it appears that monitoring is impaired in the latter case – suggesting (perhaps) that meta-cognition under distraction benefits from distributing some of the relevant knowledge away from the head and into the world.
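As a purely hypothetical illustration of this distributed comparison (the function, word lists, and numbers below are invented for the sketch, not taken from the experiments), the proposed strategy amounts to comparing the visible written record against a learned expected list length – and it collapses when the external record is removed:

```python
# Hypothetical model of distributed monitoring: the "in the head" part
# is only the learned list length; the "in the world" part is the
# visible written record. Monitoring = comparing the two.

def monitor_trial(visible_record, learned_list_length):
    """Self-assessed success on a trial, judged from the external
    record (items visibly written down) against the expected count."""
    return len(visible_record) / learned_list_length

# Normal pen: the record is visible, so monitoring tracks recall.
normal_pen_trial = ["cat", "tree", "lamp", "boat", "glove"]
print(monitor_trial(normal_pen_trial, 8))

# Invisible ink: writing happened, but nothing is visible, so the
# comparison has nothing to work with and monitoring fails.
invisible_ink_trial = []
print(monitor_trial(invisible_ink_trial, 8))
```

On this toy model, the invisible-ink group is not worse at remembering; it has simply lost the environmental half of the monitoring loop, which is the pattern the unpublished experiment hints at.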

References:

Baddeley, A. D. (2007). Working memory, thought and action. Oxford: Oxford University Press.

Beaman, C. P., Hanczakowski, M., & Jones, D. M. (2014). The effects of distraction on metacognition and metacognition on distraction: Evidence from recognition memory. Frontiers in Psychology, 5, 439.

D’Esposito, M. (2007). From cognitive to neural models of working memory. Philosophical Transactions of the Royal Society B: Biological Sciences, 362, 761-772.

Ellermeier, W. & Zimmer, K. (1997). Individual differences in susceptibility to the “irrelevant sound effect”. Journal of the Acoustical Society of America, 102, 2191-2199.

Henson, R. N. A. (2001). Neural working memory. In: J. Andrade (Ed.) Working memory in perspective. Hove: Psychology Press.

Miller, G. A., Galanter, E. & Pribram, K. H. (1960). Plans and the structure of behavior. New York: Holt.

Smith, E. E. (2000). Neural bases of human working memory. Current Directions in Psychological Science, 9, 45-49.


Learning to Write and Writing to Learn: The Intersection of Rhetoric and Metacognition

by Amy Ratto Parks, Ph.D., University of Montana

If I had to choose the frustration most commonly expressed by students about writing, it would be this: the rules are always changing. They say, “every teacher wants something different,” and because of that belief, many of them approach writing with feelings ranging from nervous anxiety to sheer dread. It is true that any single teacher will have his or her own specific expectations and biases, but most often, what students perceive as a “rule change” has to do with different disciplinary expectations. I argue that metacognition can help students anticipate and negotiate these shifting disciplinary expectations in writing courses.

Let’s look at an example. As we approach the end of spring semester, a single student on your campus might hold in her hand three assignments for final writing projects in three different classes: literature, psychology, and geology. All three assignments might require research, synthesis of ideas, and analysis – and all might be 6-8 pages in length. If you put yourself in the student’s place for a moment, it is easy to see how she might think, “Great! I can write the same kind of paper on three different topics.” That doesn’t sound terribly unreasonable. However, each of the teachers in these classes will actually be expecting some very different things: the acceptable sources of research, the citation style and formatting, the use of the first person or passive voice (“I conducted research” versus “research was conducted”), and the kinds of analysis are all very different in these three fields. Indeed, if we compared three papers from these disciplines we would see and hear writing that appeared to have almost nothing in common.

So what is a student to do? Or, how can we help students anticipate and navigate these differences? The fields of writing studies and metacognition have some answers for us. Although the two disciplines are not commonly brought together, a close examination of the overlap in their most basic concepts can offer teachers (and students) some very useful ways to understand the disciplinary differences between writing assignments.

Rhetorical constructs sit at the intersection of the fields of writing studies and metacognition because they offer the clearest illustration of the overlap between the way metacognitive theorists and writing researchers conceptualize potential learning situations. Both fields begin with the basic understanding that learners need to be able to respond to novel learning situations, and both have created terminology to abstractly describe the characteristics of those situations. Metacognitive theorists describe those learning situations as “problem-solving” situations; they say that in order for a student to negotiate the situation well, she needs to understand the relationship between herself, the task, and the strategies available for the task. The three kinds of problem-solving knowledge – self, task, and strategy knowledge – form an interdependent, triangular relationship (Flavell, 1979). All three elements are present in any problem-solving situation, and a change to one of the three requires an adjustment of the other two (e.g., if the task is an assignment given to a whole class, then the task will remain the same; however, since each student is different, each student will need to figure out which strategies will help him or her best accomplish the task).

Metacognitive Triangle

The field of writing studies describes these novel learning situations as “rhetorical situations.” Similarly, the basic framework for the rhetorical situation is comprised of three elements – the writer, the subject, and the audience – that form an interdependent triangular relationship (Rapp, 2010). Writers then make strategic persuasive choices based upon their understanding of the rhetorical situation.
Rhetorical vs Persuasive

In order for a writer to negotiate his rhetorical situation, he must understand his own relationship to his subject and to his audience, but he also must understand the audience’s relationship to him and to the subject. Once a student understands these relationships, or understands his rhetorical situation, he can then conscientiously choose his persuasive strategies; in the best-case scenario, a student’s writing choices and persuasive strategies are based on an accurate assessment of the rhetorical situation. In writing classrooms, a student’s understanding of the rhetorical situation of his writing assignment is one pivotal factor that allows him to make appropriate writing choices.

Theorists in metacognition and writing studies both know that students must be able to understand the elements of their particular situation before choosing strategies for negotiating it. Writing studies theorists call this understanding the rhetorical situation, while metacognitive theorists call it task knowledge, and this is where the two fields come together: the rhetorical situation of a writing assignment is a particular kind of problem-solving task.

When the basic concepts of rhetoric and metacognition are brought together it is clear that the rhetorical triangle fits inside the metacognitive triangle and creates the meta-rhetorical triangle.

Meta-Rhetorical Triangle

The meta-rhetorical triangle offers a concrete illustration of the relationship between the basic theoretical frameworks in metacognition and rhetoric. The subject is aligned with the task because the subject of the writing aligns with the guiding task, and the writer is aligned with the self because the writerly identity is one facet of a larger sense of self or self-knowledge. However, the audience does not align with strategy, because the audience is the other element a writer must understand before choosing a strategy; therefore, it sits in the center of the triangle rather than at the right corner. In the strategy corner, however, the meta-rhetorical triangle includes the three Aristotelian strategies for persuasion: logos, ethos, and pathos (Rapp, 2010). When the conceptual frameworks for rhetoric and metacognition are viewed as nested triangles in this way, it is possible to see that the rhetorical situation offers specifics about how metacognitive knowledge supports a particular kind of problem-solving in the writing classroom.

So let’s come back to our student who is looking at her three assignments for 6-8 page papers that require research, synthesis of ideas, and analysis. Her confusion comes from the fact that although each requires a different subject, the three tasks appear to be the same. However, the audience for each is different, and although she, as the writer, is the same person, her relationship to each of the three subjects will be different, and she will bring different interests, abilities, and challenges to each situation. Finally, each assignment will require different strategies for success. For each assignment, she will have to figure out whether or not personal opinion is appropriate, whether or not she needs recent research, and – maybe most difficult for students – she will have to use three entirely different styles of formatting and citation (MLA, APA, and GSA). Should she add a cover page? Page numbers? An abstract? Is it OK to use footnotes?

These are big hurdles for students to clear when writing in various disciplines. Unfortunately, most faculty are so immersed in our own fields that we come to see these writing choices as obvious and “simple.” Understanding the way metacognitive concepts relate to rhetorical situations can help students generalize their metacognitive knowledge beyond individual, specific writing situations, and potentially reduce confusion and improve their ability to ask pointed questions that will help them choose appropriate writing strategies. As teachers, the meta-rhetorical triangle can help us offer the kinds of assignment details students really need in order to succeed in our classes. It can also help us remember the kinds of challenges students face so that we can respond to their missteps not with irritation, but with compassion and patience.

References

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911.

Rapp, C. (2010). Aristotle’s Rhetoric. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy. Retrieved from http://plato.stanford.edu/archives/spr2010/entries/aristotle-rhetoric/


Quantifying Metacognition — Some Numeracy behind Self-Assessment Measures

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

Early this year, Lauren Scharff directed us to what might be one of the most influential reports on the quantification of metacognition: Kruger and Dunning’s 1999 “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.” In the 16 years that have since elapsed, a popular belief sprang from that paper which became known as the “Dunning-Kruger effect.” Wikipedia describes the effect as a cognitive bias in which relatively unskilled individuals suffer from illusory superiority, mistakenly assessing their ability to be much higher than it really is. Wikipedia thus describes a true metacognitive handicap: a lack of the ability to self-assess. I consider Kruger and Dunning (1999) seminal because it represents what may be the first attempt to establish a way to quantify metacognitive self-assessment. Yet, as time passes, we always learn ways to improve on any good idea.

At first, quantifying the ability to self-assess seems simple. It appears that comparing a direct measure of confidence to perform, taken through one instrument, with a direct measure of demonstrated competence, taken through another instrument, should do the job nicely. For people skillful in self-assessment, the scores on the self-assessment and performance measures should be about equal. Large differences can indicate underconfidence on the one hand or “illusory superiority” on the other.
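The simple comparison described above can be sketched in a few lines (the scores and the 10-point tolerance below are invented purely for illustration, not thresholds from the literature):

```python
# Naive self-assessment accuracy: subtract measured competency from
# self-assessed confidence, both on a 0-100 scale.

def self_assessment_gap(confidence_pct, performance_pct):
    """Positive gap = overconfidence; negative gap = underconfidence."""
    return confidence_pct - performance_pct

def classify(gap, tolerance=10):
    """Label a gap, using an arbitrary illustrative tolerance band."""
    if gap > tolerance:
        return "illusory superiority"
    if gap < -tolerance:
        return "underconfidence"
    return "well calibrated"

print(classify(self_assessment_gap(85, 55)))  # high confidence, low score
print(classify(self_assessment_gap(60, 65)))  # confidence tracks score
```

This is the seemingly simple version; the rest of the post explains why, in practice, noise in both instruments makes the job much harder than this sketch suggests.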

The Signal and the Noise

In practice, measuring self-assessment accuracy is not nearly so simple. The instruments of social science yield data consisting of the signal that expresses the relationship between our actual competency and our self-assessed feelings of competency and significant noise generated by our human error and inconsistency.

By analogy, consider the signal as your favorite music on a radio station, the measuring instrument as your radio receiver, the noise as the static that intrudes on your favorite music, and the data as the actual mix of signal and noise that you hear. The radio signal may truly exist, but unless we construct suitable instruments to detect it, we will not be able to generate convincing evidence that it even exists. Such failures can lead to the conclusion that metacognitive self-assessment is no better than random guessing.

Your personal metacognitive skill is analogous to an ability to tune to the clearest signal possible. In this case, you are “tuning in” to yourself—to your “internal radio station”—rather than tuning the instruments that can measure this signal externally. In developing self-assessment skill, you are working to attune your personal feelings of competence to reflect the clearest and most accurate self-assessment of your actual competence. Feedback from the instruments has value because they help us to see how well we have achieved the ability to self-assess accurately.

Instruments and the Data They Yield

General, global questions such as: “How would you rate your ability in math?” “How well can you express your ideas in writing?” or “How well do you understand science?” may prove to be crude, blunt self-assessment instruments. Instead of single general questions, more granular instruments like knowledge surveys that elicit multiple measures of specific information seem needed.

Because the true signal is harder to detect than often supposed, researchers need a critical mass of data to confirm the signal. Pressures to publish in academia can cause researchers to rush to publish results from small databases obtainable in a brief time rather than spending the time, sometimes years, needed to generate the database of sufficient size that can provide reproducible results.

Understanding Graphical Depictions of Data

Some graphical conventions that have become almost standard in the self-assessment literature depict ordered patterns even from random noise. These patterns invite researchers to interpret the order as produced by the self-assessment signal. Graphing nonsense data generated from random numbers in the same graphical formats can reveal what pure randomness looks like when depicted under any given convention. Knowing the patterns of randomness provides the numeracy needed to understand self-assessment measurements.
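In that spirit, here is a small random-number simulation (the details are my own sketch, not code from the Numeracy paper): “self-assessed” and “actual” scores are pure noise, yet a common graphical convention – averaging self-assessments within quartiles of actual performance – still yields an ordered-looking pattern, with the bottom quartile apparently overestimating and the top quartile apparently underestimating.

```python
# Nonsense data: both columns are independent uniform random numbers,
# so there is no self-assessment signal at all.

import random
random.seed(42)

n = 1000
actual = [random.uniform(0, 100) for _ in range(n)]
predicted = [random.uniform(0, 100) for _ in range(n)]  # pure noise

# Rank people by actual performance, split into quartiles, and average
# the (random) self-assessments within each quartile.
pairs = sorted(zip(actual, predicted))
quartile_actual, quartile_pred = [], []
for i in range(4):
    group = pairs[i * n // 4:(i + 1) * n // 4]
    quartile_actual.append(sum(a for a, _ in group) / len(group))
    quartile_pred.append(sum(p for _, p in group) / len(group))

for q in range(4):
    print(f"Quartile {q + 1}: actual {quartile_actual[q]:5.1f}, "
          f"self-assessed {quartile_pred[q]:5.1f}")
```

Every quartile’s mean self-assessment hovers near 50 while the actual means climb from low to high, so the plot shows the familiar crossing-lines shape from randomness alone – which is exactly why knowing what noise looks like matters before interpreting such graphs.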

Some obvious questions I am anticipating follow: (1) How do I know if my instruments are capturing mainly noise or signal? (2) How can I tell when a database (either my own or one described in a peer-reviewed publication) is of sufficient size to be reproducible? (3) What are some alternatives to single global questions? (4) What kinds of graphs portray random noise as a legitimate self-assessment signal? (5) When I see a graph in a publication, how can I tell if it is mainly noise or mainly signal? (6) What kinds of correlations are reasonable to expect between self-assessed competency and actual competency?

Are There Any Answers?

Getting some answers to these meaty questions requires more than a short blog post, but some help is just a click or two away. This blog directs readers to “Random Number Simulations Reveal How Random Noise Affects the Measurements and Graphical Portrayals of Self-Assessed Competency” (Numeracy, January 2016), with acknowledgments to my co-authors Christopher Cogan, Steven Fleisher, Eric Gaze and Karl Wirth for their infinite patience with me on this project. Numeracy is an open-access journal, and you can download the paper for free. Readers will likely see the self-assessment literature in a different way after reading the article.


Two forms of ‘thinking about thinking’: metacognition and critical thinking

by John Draeger (SUNY Buffalo State)

In previous posts, I have explored the conceptual nature of metacognition and shared my attempts to integrate metacognitive practices into my philosophy courses. I am also involved in a campuswide initiative that seeks to infuse critical thinking throughout undergraduate curricula. In my work on both metacognition and critical thinking, I often find myself using ‘thinking about thinking’ as a quick shorthand for both. And yet, I believe metacognition and critical thinking are distinct notions. This post will begin to sort out some differences.

My general view is that the phrase ‘thinking about thinking’ can be the opening move in a conversation about either metacognition or critical thinking. Lauren Scharff and I, for example, took this tack when we explored ways of unpacking what we mean by ‘metacognition’ (Scharff & Draeger, 2014). We considered forms of awareness, intentionality, and the importance of understanding various processes. More specifically, metacognition encourages us to monitor the efficacy of our learning strategies (e.g., self-monitoring) and prompts us to use that understanding to guide our subsequent practice (e.g., self-regulation). It is a form of thinking about thinking. We need to think about how we think about our learning strategies and how to use our thinking about their efficacy to think through how we should proceed. In later posts, we have continued to refine a more robust conception of metacognition (e.g., Scharff 2015, Draeger 2015), but ‘thinking about thinking’ was a good place to start.

Likewise, the phrase ‘thinking about thinking’ can be the opening move in conversations about critical thinking. Given the wide range of program offerings on my campus, defining ‘critical thinking’ has been a challenge. Critical thinking is a collection of skills that can vary across academic settings, and utilizing these skills often requires disciplinary knowledge. For example, students capable of analyzing how factors such as gender, race, and sexuality influence governmental policy may have difficulty analyzing a theatrical performance or understanding the appropriateness of a statistical sampling method. Moreover, it isn’t obvious how the skills learned in one course will translate to the course down the hall. Consequently, students need to develop a variety of critical thinking skills in a variety of learning environments. As we began to consider how to infuse critical thinking across the curriculum, the phrase ‘thinking about thinking’ was something that most everyone on my campus could agree upon. It has been a place to start as we move on to discuss what critical thinking looks like in various domains of inquiry (e.g., what it means to think like an artist, biologist, chemist, dancer, engineer, historian, or psychologist).

‘Thinking about thinking’ captures the idea that students need to think about the kinds of thinking skills they are trying to master, and that teachers need to be explicit about those skills if their students are to have any hope of learning them. This applies to both metacognition and critical thinking. For example, many students are able to solve complex problems, craft meaningful prose, and create beautiful works of art without understanding precisely how they did it. Such students might be excellent thinkers, but unless they are aware of how they did what they did, it is also possible that they just got lucky. Both critical thinking and metacognition help ensure that students can reliably achieve desired learning outcomes. Both require practice, and both require explicit awareness of the relevant processes. More specifically, however, critical thinkers are aware of what they are trying to do (e.g., what it means to think like an artist, biologist, chemist, dancer, engineer, historian, or psychologist), while metacognitive thinkers are aware of whether their particular strategies are effective (e.g., whether someone is thinking effectively as an artist, biologist, chemist, dancer, engineer, historian, or psychologist). Critical thinking and metacognition, therefore, differ in the object of awareness. Critical thinking involves an awareness of the mode of thinking within a domain (e.g., questioning assumptions about gender, determining the appropriateness of a statistical method), while metacognition involves an awareness of the efficacy of particular strategies for completing that task.

‘Thinking about thinking’ is a good way to spark conversation with our colleagues and our students about a number of important skills, including metacognition and critical thinking. In particular, it is worth asking ourselves (and relaying to our students) what it might mean for someone to think like an artist or a zoologist (critical thinking) and how we would know whether that artist or zoologist was thinking effectively (metacognition). As these conversations move forward, we should also think through the implications for our courses and programs of study. How might this ongoing conversation change course design or methods of instruction? What might it tell us about the connections between courses across our campuses? ‘Thinking about thinking’ is a great place to start such conversations, but we must remember that it is only the beginning.

References

Draeger, John (2015). “Exploring the relationship between awareness, self-regulation, and metacognition.” Retrieved from https://www.improvewithmetacognition.com/exploring-the-relationship-between-awareness-self-regulation-and-metacognition/

Scharff, Lauren & Draeger, John (2014). “What do we mean when we say ‘Improve with metacognition’?” (Part One). Retrieved from https://www.improvewithmetacognition.com/what-do-mean-when-we-say-improve-with-metacognition/

Scharff, Lauren (2015). “What do we mean by ‘metacognitive instruction’?” Retrieved from https://www.improvewithmetacognition.com/what-do-we-mean-by-metacognitive-instruction/


Metacognition and Learning: Conceptual and Methodological Considerations

This article opens the first issue of the new international journal Metacognition and Learning. The journal provides “a kaleidoscopic view on research into metacognition.” It is a great introduction to metacognition and discusses ten issues “which are by no means exhaustive.”

Metacognition and Learning, 2006, Volume 1, Number 1, Page 3. Marcel V. J. Veenman, Bernadette H. A. M. van Hout-Wolters, & Peter Afflerbach.
