Distributed Metacognition: Insights from Machine Learning and Human Distraction

by Philip Beaman, Ph.D., University of Reading, UK

Following the success of Google’s AlphaGo programme in competition with a human expert over five games, a result previously considered beyond the capabilities of mere machines (https://deepmind.com/alpha-go), there has been much interest in machine learning. Broadly speaking, machine learning comes in two forms: supervised learning (where the machine is trained by means of examples and the errors it makes are corrected) and unsupervised learning (where there is no error signal to indicate previous failures). AlphaGo, as it happens, used supervised learning based upon examples of human expert-level games, and it is this type of learning which looks very much like meta-cognition, even though the meta-cognitive monitoring and correction of the machine’s performance is external to the system itself (although not necessarily to the machine which is running the system). For example, an artificial neural network (perhaps of the kind which underpins AlphaGo) is trained to output Y when presented with X by means of a programme which stores training examples – and calculates the error signal from the neural network’s first attempts – outside the neural network software itself but on the same hardware. This is of interest because it illustrates the fluid boundary between a cognitive system (the neural network implemented on computer hardware) and its environment (other programmes running on the same hardware to support the neural network), and demonstrates that metacognition, like first-order cognition, is often a form of situated activity. Here, the monitoring and the basis for correction of performance are (as in all supervised learning) external to the learning system itself.
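To make this division of labour concrete, here is a minimal, hypothetical sketch (not AlphaGo’s actual code, and far simpler than any real network) in which the “network” merely maps an input to an output, while a separate routine on the same machine stores the training examples, computes the error signal, and corrects the network from outside:

```python
import random

class TinyNetwork:
    """A one-weight 'network': all it can do is map an input x to an output y."""
    def __init__(self):
        self.weight = random.random()

    def predict(self, x):
        return self.weight * x

def external_supervisor(network, examples, learning_rate=0.01, epochs=100):
    """Supervised learning from outside the network: this routine holds the
    examples, measures each error, and imposes the correction on the network."""
    for _ in range(epochs):
        for x, target in examples:
            error = target - network.predict(x)          # error signal computed externally
            network.weight += learning_rate * error * x  # correction applied to the network

# The examples play the role of expert games; here the 'expert' answer is simply 2*x.
examples = [(x, 2 * x) for x in range(1, 6)]
net = TinyNetwork()
external_supervisor(net, examples)
print(round(net.weight, 2))  # converges towards 2.0
```

The point of the sketch is only that the monitoring and correction live in the supervising routine, not in the network that does the first-order work.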

In contrast, when psychologists talk about metacognition, we tend to assume that all the processing is going on internally (in the head), whereas in fact it is usually only partly in the head and partly in the world. This is not news to educationalists or to technologists: learners are encouraged to make effective use of external aids which help manage work and thought, but external aids to cognition are often overlooked by psychological theories and investigations. This was not always the case. In the book “Plans and the Structure of Behavior”, which introduced the term “working memory” to psychology, Miller, Galanter and Pribram (1960) spoke of working memory as a “special state or place” used to track the execution of plans, where the place could be in the frontal lobes of the brain (a prescient suggestion for the time!) or “on a sheet of paper”. This concept, originally defined wholly functionally, has in subsequent years morphed into a cognitive structure with a specific locus, or loci, of neural activity (e.g., Baddeley, 2007; D’Esposito, 2007; Henson, 2001; Smith, 2000).

We have come across the issue of distributed metacognition in our own work on auditory distraction. For many years, our lab (along with several others) collected and reported data on the disruptive effects of noise on human cognition and performance. We carefully delineated the types of noise which cause distraction and the tasks which were most sensitive to distraction, but – at least until recently – neither we nor (so far as we know) anyone else gave any thought to the meta-cognitive strategies which might be employed to reduce distraction outside the laboratory setting. Our experiments all involved standardized presentation schedules of material for later recall and imposed environmental noise (usually over headphones) which participants were told to ignore but could not avoid. The results of recent studies which both asked participants for their judgments of learning (JoLs) concerning the material and gave them the opportunity to control their own learning or recall strategy (e.g., Beaman, Hanczakowski & Jones, 2014) are of considerable interest. Theoretically, one of three things might happen: meta-cognition might not influence the ability to resist distraction in any way, meta-cognitive control strategies might ameliorate the effects of distraction, or meta-cognition might itself be affected by distraction, potentially escalating the disruptive effects. For now, let’s focus on the meta-cognitive monitoring judgments, since these need to be reasonably accurate in order for people to have any idea that distraction is happening and that counter-measures might be necessary.

One thing we found was that people’s judgments of their own learning were fairly well calibrated, with judgments of recall in the quiet and noise conditions mirroring the actual memory data. This is not a surprise, because earlier studies, including one by Ellermeier and Zimmer (1997), also showed that, when asked to judge their confidence in their memory, people are aware of when noise is likely to detract from their learning. What is of interest, though, is where this insight comes from. No feedback was given after the memory test (i.e., in neural network terms this was not supervised learning), so it isn’t that participants were able to compare their memory performance in the various conditions to the correct answers. Ellermeier and Zimmer (1997) included in their study a measure of participants’ confidence in their abilities before they ever took the test, and this measure was less well calibrated with actual performance, so this successful metacognitive monitoring does seem to depend upon recent experience with these particular distractors and the particular memory test used, rather than being drawn from general knowledge or past experience. What, then, is the source of the information used to monitor memory accuracy (and hence the effects of auditory distraction on memory)? In our studies, the same participants experienced learning trials in noise and in quiet in the same sessions, and the lists of items they were required to try to recall were always of the same set length and were recorded by means of a physical device (either writing or typing responses). Meta-cognitive monitoring, in other words, could be achieved in many of our experiments by learning the approximate length of the list to be recalled and comparing the physical record of the number of items recalled with this learned number on a trial-by-trial basis. This kind of meta-cognitive monitoring is very much distributed, because it relies upon the physical record of the number of items recalled on each trial to make the appropriate comparison. Is there any evidence that something like this is actually happening? An (as yet unpublished) experiment of ours provides a tantalising hint: if you ask people to write down the words they recall, but give one group a standard pen to do so and another group a pen filled with invisible ink (so both groups are writing their recall, but only one is able to see the results), then it appears that monitoring is impaired in the latter case – suggesting (perhaps) that meta-cognition under distraction benefits from distributing some of the relevant knowledge away from the head and into the world.
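Purely as an illustrative gloss (my own sketch of the strategy described above, not the authors’ analysis), the proposed trial-by-trial monitoring amounts to counting an external record of responses against a learned list length:

```python
def monitor_recall(written_responses, learned_list_length):
    """Distributed monitoring sketch: the count comes from the external record
    (the sheet of paper or screen), not from memory of the recall attempt."""
    recalled = len(written_responses)
    shortfall = learned_list_length - recalled
    return {"recalled": recalled,
            "judged_complete": shortfall <= 0,
            "estimated_items_missed": max(shortfall, 0)}

# With invisible ink there is no legible external record to count,
# so the comparison (and hence the monitoring) breaks down.
print(monitor_recall(["cat", "fork", "kettle"], learned_list_length=8))
```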

References:

Baddeley, A. D. (2007). Working memory, thought and action. Oxford: Oxford University Press.

Beaman, C. P., Hanczakowski, M., & Jones, D. M. (2014). The effects of distraction on metacognition and metacognition on distraction: Evidence from recognition memory. Frontiers in Psychology, 5, 439.

D’Esposito, M. (2007) From cognitive to neural models of working memory. Philosophical Transactions of the Royal Society B: Biological Sciences, 362, 761-772.

Ellermeier, W. & Zimmer, K. (1997). Individual differences in susceptibility to the “irrelevant sound effect” Journal of the Acoustical Society of America, 102, 2191-2199.

Henson, R. N. A. (2001). Neural working memory. In: J. Andrade (Ed.) Working memory in perspective. Hove: Psychology Press.

Miller, G. A., Galanter, E. & Pribram, K. H. (1960). Plans and the structure of behavior. New York: Holt.

Smith, E. E. (2000). Neural bases of human working memory. Current Directions in Psychological Science, 9, 45-49.


Learning to Write and Writing to Learn: The Intersection of Rhetoric and Metacognition

by Amy Ratto Parks, Ph.D., University of Montana

If I had to choose the frustration most commonly expressed by students about writing, it would be this: the rules are always changing. They say, “every teacher wants something different,” and because of that belief, many of them approach writing with feelings ranging from nervous anxiety to sheer dread. It is true that any single teacher will have his or her own specific expectations and biases, but most often, what students perceive as a “rule change” has to do with different disciplinary expectations. I argue that metacognition can help students anticipate and negotiate these shifting disciplinary expectations in writing courses.

Let’s look at an example. As we approach the end of spring semester, a single student on your campus might hold in her hand three assignments for final writing projects in three different classes: literature, psychology, and geology. All three assignments might require research, synthesis of ideas, and analysis – and all might be 6-8 pages in length. If you put yourself in the student’s place for a moment, it is easy to see how she might think, “Great! I can write the same kind of paper on three different topics.” That doesn’t sound terribly unreasonable. However, each of the teachers in these classes will actually be expecting some very different things in these papers: the acceptable sources of research, the citation style and formatting, the use of the first person or passive voice (“I conducted research” versus “research was conducted”), and the kinds of analysis are all very different in these three fields. Indeed, if we compared three papers from these disciplines we would see and hear writing that appeared to have almost nothing in common.

So what is a student to do? Or, how can we help students anticipate and navigate these differences? The fields of writing studies and metacognition have some answers for us. Although the two disciplines are not commonly brought together, a close examination of the overlap in their most basic concepts can offer teachers (and students) some very useful ways to understand the disciplinary differences between writing assignments.

Rhetorical constructs sit at the intersection of the fields of writing studies and metacognition because they offer the clearest illustration of the overlap between the way metacognitive theorists and writing researchers conceptualize potential learning situations. Both fields begin with the basic understanding that learners need to be able to respond to novel learning situations, and both fields have created terminology to abstractly describe the characteristics of those situations. Metacognitive theorists describe those learning situations as “problem-solving” situations; they say that in order for a student to negotiate the situation well, she needs to understand the relationship between herself, the task, and the strategies available for the task. The three kinds of problem-solving knowledge – self, task, and strategy knowledge – form an interdependent, triangular relationship (Flavell, 1979). All three elements are present in any problem-solving situation, and a change to one of the three requires an adjustment of the other two (e.g., if the task is an assignment given to the whole class, then the task will remain the same; however, since each student is different, each student will need to figure out which strategies will help him or her best accomplish the task).

Metacognitive Triangle

The field of writing studies describes these novel learning situations as “rhetorical situations.” Similarly, the basic framework for the rhetorical situation is comprised of three elements – the writer, the subject, and the audience – that form an interdependent triangular relationship (Rapp, 2010). Writers then make strategic persuasive choices based upon their understanding of the rhetorical situation.
Rhetorical vs Persuasive

 In order for a writer to negotiate his rhetorical situation, he must understand his own relationship to his subject and to his audience, but he also must understand the audience’s relationship to him and to the subject. Once a student understands these relationships, or understands his rhetorical situation, he can then conscientiously choose his persuasive strategies; in the best-case scenario, a student’s writing choices and persuasive strategies are based on an accurate assessment of the rhetorical situation. In writing classrooms, a student’s understanding of the rhetorical situation of his writing assignment is one pivotal factor that allows him to make appropriate writing choices.

Theorists in metacognition and writing studies both know that students must be able to understand the elements of their particular situation before choosing strategies for negotiating the situation. Writing studies theorists call this understanding the rhetorical situation while metacognitive theorists call it task knowledge, and this is where the two fields come together: the rhetorical situation of a writing assignment is a particular kind of problem-solving task.

When the basic concepts of rhetoric and metacognition are brought together it is clear that the rhetorical triangle fits inside the metacognitive triangle and creates the meta-rhetorical triangle.

Meta-Rhetorical Triangle

The meta-rhetorical triangle offers a concrete illustration of the relationship between the basic theoretical frameworks in metacognition and rhetoric. The subject is aligned with the task because the subject of the writing aligns with the guiding task, and the writer is aligned with the self because writerly identity is one facet of a larger sense of self or self-knowledge. However, the audience does not align with strategy, because the audience is the other element a writer must understand before choosing a strategy; therefore, it sits in the center of the triangle rather than at the right corner. In the strategy corner, however, the meta-rhetorical triangle includes the three Aristotelian strategies for persuasion: logos, ethos, and pathos (Rapp, 2010). When the conceptual frameworks for rhetoric and metacognition are viewed as nested triangles this way, it is possible to see that the rhetorical situation offers specifics about how metacognitive knowledge supports a particular kind of problem-solving in the writing classroom.

So let’s come back to our student who is looking at her three assignments for 6-8 page papers that require research, synthesis of ideas, and analysis. Her confusion comes from the fact that although each requires a different subject, the three tasks appear to be the same. However, the audience for each is different, and although she, as the writer, is the same person, her relationship to each of the three subjects will be different, and she will bring different interests, abilities, and challenges to each situation. Finally, each assignment will require different strategies for success. For each assignment, she will have to figure out whether or not personal opinion is appropriate, whether or not she needs recent research, and – maybe the most difficult for students – she will have to use three entirely different styles of formatting and citation (MLA, APA, and GSA). Should she add a cover page? Page numbers? An abstract? Is it OK to use footnotes?

These are big hurdles for students to clear when writing in various disciplines. Unfortunately, most faculty are so immersed in our own fields that we come to see these writing choices as obvious and “simple.” Understanding the way metacognitive concepts relate to rhetorical situations can help students generalize their metacognitive knowledge beyond individual, specific writing situations, and potentially reduce confusion and improve their ability to ask pointed questions that will help them choose appropriate writing strategies. As teachers, the meta-rhetorical triangle can help us offer the kinds of assignment details students really need in order to succeed in our classes. It can also help us remember the kinds of challenges students face so that we can respond to their missteps not with irritation, but with compassion and patience.

References

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911.

Rapp, C. (2010). Aristotle’s rhetoric. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy. Retrieved from http://plato.stanford.edu/archives/spr2010/entries/aristotle-rhetoric/


Distance Graduate Programs and Metacognition

by Tara Beziat, Auburn University at Montgomery

As enrollment in online programs and online courses continues to increase (Merriam & Bierema, 2014), institutions have recognized the importance of building quality learning experiences for their students. To accomplish this goal, colleges and universities provide professional development, access to instructional designers, and videos to help faculty build these courses. The focus is on how to put the content in an online setting. What I think is lacking in this process are the “in the moment” discussions about managing learning. Students often do not get to “hear” how other students are tackling the material for the course and how they are preparing for the assignments. Activities that foster metacognition are not built into the instructional design process.

In the research on learning and metacognition, there is a focus on undergraduates (possibly because they are an easily accessible population for college researchers) and p-12 students. The literature does not discuss helping graduate students hone their metacognitive strategies. Knowing the importance of metacognition and its relationship to learning, I have incorporated activities that focus on metacognition into my online graduate courses.

Though graduate students are less likely to procrastinate than undergraduate students (Cao, 2012), learning online requires the use of self-regulation strategies (Dunn & Rakes, 2015). One reason many students give for liking distance courses is that they can do the work at their own pace and at a time that works with their schedule. What they often do not take into account is that they need to build time into their schedule for their course work. Dunn and Rakes (2015) found that online graduate students are not always prepared to be “effective learners” but can improve their self-regulation skills in an online course. Graduate students in an online course need to use effective metacognitive strategies, like planning, self-monitoring and self-evaluation.

In addition to managing their time, which may now have to accommodate family and work responsibilities, graduate students may find that the course work itself presents its own set of new challenges. Graduate work asks students to engage in complex cognitive processes, often in an online setting.

To help graduate students with their learning process, I have built metacognitive questions into our discussion posts. For each module of learning, students are asked to answer a metacognitive question related to the planning, monitoring or evaluation of their learning. They are also asked to answer a content question. I have found their answers to the metacognitive questions surprising, enlightening and helpful. Additionally, these discussions have provided insights into how students prepare for the class, how they use course resources in their own classrooms, and how they manage their time while juggling “life.”

Early in the semester I ask, “How are you going to actively monitor your learning in this course?” Often students respond that they will check their grades on Blackboard (our course management system); specifically, they will check to see how they did on assignments. I raise a concern with these ways of monitoring: students need to be doing some form of self-evaluation before turning in their work. If they are waiting until they get the “grade” to know how well they are doing, it may be too late. Other students have a better sense of how to monitor their knowledge during a course. Below are some examples:

  • “setting my goals with each unit and reflecting back after each reading to be sure my goals and understanding are met.”
  • “I intend on reading the required text and being able to ask myself the following questions ‘how well did I understand this’ or ‘can I explain this information to a classmate if asked to do so.’”
  • “comparing my knowledge with the course objectives”
  • “checking my work to make sure the guideline set by the rubric are being followed.”

These are posted in the discussions, and their fellow classmates can see the strategies that they are using to manage and monitor their learning. In their responses, students will often note that they had not thought about doing x but plan to try it. By embedding a metacognitive prompt in each of the 8 modules and giving students a chance to share how they monitor their learning, I hope to build a better understanding of the importance of metacognition in the learning process and give them ways to foster metacognition in their own classrooms.

Later on in the class I ask the students how things are going with their studying. Yes, this is a graduate-level class. But this may be a student’s first graduate-level course, or it may be their first online course. Or this could be their last class in a fully online program, but we can always improve our learning. Below are some examples of students’ responses to: What confusions have you gotten clarified? What changes have you made to your study habits or learning strategies?

  • “The only changes to the study habits or strategies that I have used is to try the some of the little tips or strategies that come up in the modules or discussions.”
  • “I allow myself more time to study.”
  • “I have reduced the amount of notes I take.  Now, my focus is more on summarizing text and/or writing a “gist” for each heading.”
  • “I continue to use graphic organizers to assist me with learning and understanding new information.  This is a tactic that is working well for me.”

As educators, we need to make sure we are addressing metacognition with our graduate students and that we are providing opportunities for them to practice metacognition in an online setting. Additionally, I would be interested in conducting future research that examines online graduate students’ awareness of metacognitive strategies, their use of these strategies in an online learning environment, and ways to improve their metacognitive strategies. If you would be interested in collaborating on a project about online graduate students’ metacognitive skills, send me an email.

 References

Cao, L. (2012). Differences in procrastination and motivation between undergraduate and graduate students. Journal of the Scholarship of Teaching and Learning, 12(2), 39-64.

Dunn, K.E. & Rakes, G.C. (2015). Exploring online graduate students’ responses to online self-regulation training. Journal of Interactive Online Learning, 13(4), 1-21.

Merriam, S.B., & Bierema, L.L. (2014). Adult learning: Linking theory and practice. San Francisco, CA: Jossey-Bass.

 


Does Processing Fluency Really Matter for Metacognition in Actual Learning Situations? (Part Two)

By Michael J. Serra, Texas Tech University

Part II: Fluency in the Classroom

In the first part of this post, I discussed laboratory-based research demonstrating that learners judge their knowledge (e.g., memory or comprehension) to be better when information seems easy to process and worse when information seems difficult to process, even when eventual test performance is not predicted by such experiences. In this part, I question whether these outcomes are worth worrying about in everyday, real-life learning situations.

Are Fluency Manipulations Realistic?

Researchers who obtain effects of perceptual fluency on learners’ metacognitive self-evaluations in the laboratory suggest that similar effects might also obtain for students in real-life learning and study situations. In such cases, students might study inappropriately or inefficiently (e.g., under-studying when they experience a sense of fluency or over-studying when they experience a sense of disfluency). But to what extent should we be worried that any naturally-occurring differences in processing fluency might affect our students in actual learning situations?

Look at the accompanying figure. This figure presents examples of several ways in which researchers have manipulated visual processing fluency to demonstrate effects on participants’ judgments of their learning. When was the last time you saw a textbook printed in a blurry font, or featuring an upside-down passage, or involving a section where pink text was printed on a yellow background? When you present in-person lectures, do your PowerPoints feature any words typed in aLtErNaTiNg CaSe? (Or, in terms of auditory processing fluency, do you deliver half of the lesson in a low, garbled voice and half in a loud, booming voice?) You would probably – and purposefully – avoid such variations in processing fluency when presenting to or creating learning materials for your students. Yet, even in the laboratory with these exaggerated fluency manipulations, the effects of perceptual fluency on both learning and metacognitive monitoring are often small (i.e., small differences between conditions). Put differently, it takes a lot of effort and requires very specific, controlled conditions to obtain effects of fluency on learning or metacognitive monitoring in the laboratory.

Will Fluency Effects Occur in the Classroom?

Careful examination of methods and findings from laboratory-based research suggests that such effects are unlikely to occur in real-life situations because of how fragile these effects are in the laboratory. For example, processing fluency only seems to affect learners’ metacognitive self-evaluations of their learning when they experience both fluent and disfluent information; experiencing only one level of fluency usually won’t produce such effects. For example, participants only judge information presented in a large, easy-to-read font as better learned than information presented in a small, difficult-to-read font when they experience some of the information in one format and some in the other; when they only experience one format, the formatting does not affect their learning judgments (e.g., Magreehan et al., 2015; Yue et al., 2013). The levels of fluency – and, perhaps more importantly, disfluency – must also be fairly distinguishable from each other to have an effect on learners’ judgments. For example, consider the formatting examples in the accompanying figure: learners must notice a clear difference in formatting and in their experience of fluency across the formats for the formatting to affect their judgments. Learners likely must also have limited time to process the disfluent information; if they have enough time to process the disfluent information, the effects on both learning and on metacognitive judgments disappear (cf. Yue et al., 2013; but see Magreehan et al., 2015). Perhaps most important, the effects of fluency on learning judgments are easiest to obtain in the laboratory when the learning materials are low in authenticity or do not have much natural variation in intrinsic difficulty. For example, participants will base their learning judgments on perceptual fluency when all of the items they are asked to learn are of equal difficulty, such as pairs of unrelated words (e.g., “CAT – FORK”, “KETTLE – MOUNTAIN”), but they ignore perceptual fluency once there is a clear difference in difficulty, such as when related word pairs (e.g., “FLAME – FIRE”, “UMBRELLA – RAIN”) are also part of the learning materials (cf. Magreehan et al., 2015).

Consider a real-life example: perhaps you photocopied a magazine article for your students to read, and the image quality of that photocopy was not great (i.e., disfluent processing). We might be concerned that the poor image quality would lead students to incorrectly judge that they have not understood the article, when in fact they had been able to comprehend it quite well (despite the image quality). Given the evidence above, however, this instance of processing fluency might not actually affect your students’ metacognitive judgments of their comprehension. Students in this situation are only being exposed to one level of fluency (i.e., just disfluent formatting), and the level of disfluency might not be that discordant from the norm (i.e., a blurry or dark photocopy might not be that abnormal). Further, students likely have ample time to overcome the disfluency while reading (i.e., assuming the assignment was to read the article as homework at their own pace), and the article likely contains a variety of information besides fluency that students can use for their learning judgments (e.g., students might use their level of background knowledge or familiarity with key terms in the article as more-predictive bases for judging their comprehension). So, despite the fact that the photocopied article might be visually disfluent – or at least might produce some experience of disfluency – it would not seem likely to affect your students’ judgments of their own comprehension.

In summary, at present it seems unlikely that the experience of perceptual processing fluency or disfluency is likely to affect students’ metacognitive self-evaluations of their learning in actual learning or study situations. Teachers and designers of educational materials might of course strive by default to present all information to students clearly and in ways that are perceptually fluent, but it seems premature – and perhaps even unnecessary – for them to worry about rare instances where information is not perceptually fluent, especially if there are counteracting factors such as students having ample time to process the material, there only being one level of fluency, or students having other information upon which to base their judgments of learning.

Going Forward

The question of whether or not laboratory findings related to perceptual fluency will transfer to authentic learning situations certainly requires further empirical scrutiny. At present, however, the claim that highly-contrived effects of perceptual fluency on learners’ metacognitive judgments will also impair the efficacy of study behaviors in more naturalistic situations seems unfounded and unlikely.

Researchers might be wise to abandon the examination of highly-contrived fluency effects in the laboratory and instead examine more realistic variations in fluency in more natural learning situations to see if such conditions actually matter for students. For example, Carpenter and colleagues (Carpenter et al., in press; Carpenter et al., 2013) have been examining the effects of a factor they call instructor fluency – the ease or clarity with which information is presented – on learning and judgments of learning. Importantly, this factor is not perceptual fluency, as it does not involve purported variations in perceptual processing. Rather, instructor fluency invokes the sense of clarity that learners experience while processing a lesson. In experiments on this topic, students watched a short video-recorded lesson taught by either a confident and well-organized (“fluent”) instructor or a nervous and seemingly disorganized (“disfluent”) instructor, judged their learning from the video, and then completed a test over the information. Much as in research on perceptual fluency, participants judged that they learned more from the fluent instructor than from the disfluent one, even though test performance did not differ by condition.

These findings related to instructor fluency do not validate those on perceptual fluency. Rather, I would argue that they actually add further nails to the coffin of perceptual fluency. There are bigger problems out there besides perceptual fluency we can be worrying about in order to help our students learn and help them to accurately make metacognitive judgments. Perhaps instructor fluency is one of those problems, and perhaps it isn’t. But it seems that perceptual fluency is not a problem we should be greatly concerned about in realistic learning situations.

References

Carpenter, S. K., Mickes, L., Rahman, S., & Fernandez, C. (in press). The effect of instructor fluency on students’ perceptions of instructors, confidence in learning, and actual learning. Journal of Experimental Psychology: Applied.

Carpenter, S. K., Wilford, M. M., Kornell, N., & Mullaney, K. M. (2013). Appearances can be deceiving: instructor fluency increases perceptions of learning without increasing actual learning. Psychonomic Bulletin & Review, 20, 1350-1356.

Magreehan, D. A., Serra, M. J., Schwartz, N. H., & Narciss, S. (2015, advance online publication). Further boundary conditions for the effects of perceptual disfluency on judgments of learning. Metacognition and Learning.

Yue, C. L., Castel, A. D., & Bjork, R. A. (2013). When disfluency is—and is not—a desirable difficulty: The influence of typeface clarity on metacognitive judgments and memory. Memory & Cognition, 41, 229-241.

 

 


Part One: Does Processing Fluency Really Matter for Metacognition in Actual Learning Situations?

By Michael J. Serra, Texas Tech University

Part I: Fluency in the Laboratory

Much recent research demonstrates that learners judge their knowledge (e.g., memory or comprehension) to be better when information seems easy to process and worse when information seems difficult to process, even when eventual test performance is not predicted by such experiences. Laboratory-based researchers often argue that the misuse of such experiences as the basis for learners’ self-evaluations can produce metacognitive illusions and lead to inefficient study. In the present post, I review these effects obtained in the laboratory. In the second part of this post, I question whether these outcomes are worth worrying about in everyday, real-life learning situations.

What is Processing Fluency?

Have you ever struggled to hear a low-volume or garbled voicemail message, or struggled to read small or blurry printed text? Did you experience some relief after raising the volume on your phone or putting on your reading glasses and trying again? What if you didn’t have your reading glasses with you at the time? You might still be able to read the small printed text, but it would take more effort and might literally feel more effortful than if you had your glasses on. Would the feeling of effort you experienced while reading without your glasses affect your appraisal of how much you liked or how well you understood what you read?

When we process information, we often have a co-occurring experience of processing fluency: the ease or difficulty we experience while physically processing that information. Note that this experience is technically independent of the innate complexity of the information itself. For example, an intricate and conceptually-confusing physics textbook might be printed in a large and easy to read font (high difficulty, perceptually fluent), while a child might express a simple message to you in a voice that is too low to be easily understood over the noise of a birthday party (low difficulty, perceptually disfluent).

Fluency and Metacognition

Certainly, we know that the innate complexity of learning materials is going to relate to students’ acquisition of new information and eventual performance on tests. Put differently, easy materials will be easy for students to learn and difficult materials will be difficult for students to learn. And it turns out that perceptual disfluency – difficulty processing information – can actually improve memory under some limited conditions (for a detailed examination, see Yue et al., 2013). But how does processing fluency affect students’ metacognitive self-evaluations of their learning?

In the modal laboratory-based examination of metacognition (for a review, see Dunlosky & Metcalfe, 2009), participants study learning materials (these might be simple memory materials or complex reading materials), make explicit metacognitive judgments in which they rate their learning or comprehension for those materials, and then complete a test over what they’ve studied. Researchers can then compare learners’ judgments to their test performance in a variety of ways to determine the accuracy of their self-evaluations (for a review, see Dunlosky & Metcalfe, 2009). As you might know from reading other posts on this website, we usually want learners to accurately judge their learning so they can make efficient decisions on how to allocate their study time or what information to focus on when studying. Any factor that can reduce that accuracy is likely to be problematic for ultimate test performance.
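For readers unfamiliar with how such accuracy is quantified, one common index in this literature is a relative-accuracy correlation between item-by-item judgments and test outcomes, such as the Goodman-Kruskal gamma. The sketch below uses made-up numbers and is only an illustration of the idea, not the specific analyses in the studies cited here:

```python
def goodman_kruskal_gamma(judgments, outcomes):
    """Relative accuracy: do higher judgments of learning go with better test
    performance? Gamma runs from -1 to +1; values near +1 mean well-ordered judgments."""
    concordant = discordant = 0
    for i in range(len(judgments)):
        for j in range(i + 1, len(judgments)):
            product = (judgments[i] - judgments[j]) * (outcomes[i] - outcomes[j])
            if product > 0:
                concordant += 1
            elif product < 0:
                discordant += 1
    if concordant + discordant == 0:
        return 0.0
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical per-item judgments of learning (0-100) and test outcomes (1 = recalled).
jols = [90, 70, 60, 40, 20]
test = [1, 1, 0, 1, 0]
print(goodman_kruskal_gamma(jols, test))  # about 0.67: judgments are fairly well ordered
```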

Metacognition researchers have examined how fluency affects participants’ judgments of their learning in the laboratory. The figure in this post includes several examples of ways in which researchers have manipulated the visual perceptual fluency of learning materials (i.e., memory materials or reading materials) to be perceptually disfluent compared to a fluent condition. These manipulations involving visual processing fluency include presenting learning materials in an easy-to-read versus difficult-to-read typeface, either by literally blurring the font (Yue et al., 2013) or by adjusting the colors of the words and background to make them easy versus difficult to read (Werth & Strack, 2003), in an upside-down versus right-side-up typeface (Sungkhasettee et al., 2011), and using normal capitalization versus capitalizing every other letter (Mueller et al., 2013). (A conceptually similar manipulation for auditory perceptual fluency might include making the volume high versus low, or the auditory quality clear versus garbled.)

A wealth of empirical (mostly laboratory-based) research demonstrates that learners typically judge perceptually-fluent learning materials to be better-learned than perceptually-disfluent learning materials, even when learning (i.e., later test performance) is the same for the two sets of materials (e.g., Magreehan et al., 2015; Mueller et al., 2013; Rhodes & Castel, 2008; Susser et al., 2013; Yue et al., 2013). Although there is a current theoretical debate as to why processing fluency affects learners’ metacognitive judgments of their learning (i.e., Do the effects stem from the experience of fluency or from explicit beliefs about fluency?, see Magreehan et al., 2015; Mueller et al., 2013), it is nevertheless clear that manipulations such as those in the figure can affect how much students think they know. In terms of metacognitive accuracy, learners are often misled by feelings of fluency or disfluency that are neither related to their level of learning nor predictive of their future test performance.

As I previously noted, laboratory-based researchers argue that the misuse of such experiences as the basis for learners’ self-evaluations can produce metacognitive illusions and lead to inefficient study. But this question has yet to receive much empirical scrutiny in more realistic learning situations. I explore the possibility that such effects will also obtain in realistic learning situations in the second part of this post.

References

Dunlosky, J., & Metcalfe, J. (2009). Metacognition. Thousand Oaks, CA: Sage Publications.

Magreehan, D. A., Serra, M. J., Schwartz, N. H., & Narciss, S. (2015, advance online publication). Further boundary conditions for the effects of perceptual disfluency on judgments of learning. Metacognition and Learning.

Mueller, M. L., Tauber, S. K., & Dunlosky, J. (2013). Contributions of beliefs and processing fluency to the effect of relatedness on judgments of learning. Psychonomic Bulletin & Review, 20, 378-384.

Rhodes, M. G., & Castel, A. D. (2008). Memory predictions are influenced by perceptual information: evidence for metacognitive illusions. Journal of Experimental Psychology: General, 137, 615-625.

Sungkhasettee, V. W., Friedman, M. C., & Castel, A. D. (2011). Memory and metamemory for inverted words: Illusions of competency and desirable difficulties. Psychonomic Bulletin & Review, 18, 973-978.

Susser, J. A., Mulligan, N. W., & Besken, M. (2013). The effects of list composition and perceptual fluency on judgments of learning (JOLs). Memory & Cognition, 41, 1000-1011.

Werth, L., & Strack, F. (2003). An inferential approach to the knew-it-all-along phenomenon. Memory, 11, 411-419.

Yue, C. L., Castel, A. D., & Bjork, R. A. (2013). When disfluency is—and is not—a desirable difficulty: The influence of typeface clarity on metacognitive judgments and memory. Memory & Cognition, 41, 229-241.


Unskilled and Unaware: A Metacognitive Bias

by John R. Schumacher, Eevin Akers, & Roman Taraban (all from Texas Tech University).

In 1995, McArthur Wheeler robbed two Pittsburgh banks in broad daylight, with no attempt to disguise himself. When he was arrested that night, he objected, “But I wore the juice.” Because lemon juice can be used as an invisible ink, Wheeler thought that rubbing his face with lemon juice would make it invisible to surveillance cameras in the banks. Kruger and Dunning (1999) used Wheeler’s story to exemplify a metacognitive bias through which relatively unskilled individuals overestimate their skill, being both unaware of their ineptitude and holding an inflated sense of their knowledge or ability. This is called the Dunning-Kruger effect, and it also seems to apply to some academic settings. For example, Kruger and Dunning found that some students are able to accurately predict their performance prior to taking a test: they predict that they will do well on the test and actually perform well on it. Other students predict that they will do well on a test but do poorly; these students have an inflated sense of how well they will do, and thus fit the Dunning-Kruger effect. Because these students’ predictions do not match their performance, we describe them as poorly calibrated. Good calibration involves metacognitive awareness. This post explores how note taking relates to calibration and metacognitive awareness.

Some of the experiments in our lab concern the benefits of note taking. In these experiments, students were presented with a video-recorded college lecture. Note takers recalled more than non-notetakers, who simply watched the video (Jennings & Taraban, 2014). The question we explored was whether good note-taking skills improved students’ calibration of how much they know and thereby reduced the unskilled-and-unaware effect reported by Kruger and Dunning (1999).

In one experiment, participants watched a 30-minute video lecture while either taking notes (notetakers) or simply viewing the video (non-notetakers). They returned 24 hours later. They predicted the percentage of information they believed they would recall, using a scale of 0 to 100, and then took a free-recall test, without being given an opportunity to study their notes or mentally review the prior day’s video lecture. They then studied their notes (notetakers) or mentally reviewed the lecture (non-notetakers) from the previous day for 12 minutes, and took a second free-recall test. In order to assess the Dunning-Kruger effect, we subtracted the actual percentage of lecture material recalled on each test (0 to 100) from participants’ predictions of how much they would recall on that test (0 to 100). For example, if a participant predicted he or she would correctly recall 75% of the material on a test and actually recalled 50%, the calibration score would be +25 (75 – 50 = 25). Values close to +100 indicated extreme overconfidence, values close to -100 indicated extreme underconfidence, and values close to 0 indicated good calibration. To answer our question about how note taking relates to calibration, we compared the calibration scores for the two groups (notetakers and non-notetakers) for the two situations (before reviewing notes or reflecting, and after reviewing notes or reflecting). These analyses indicated that the two groups did not differ in calibration on the first free-recall test. However, to our surprise, notetakers became significantly more overconfident, and thus less calibrated in their predictions, than non-notetakers on the second test. After studying, notetakers’ calibration became worse.
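For illustration only, the calibration score described above reduces to a simple signed difference between predicted and actual recall (a sketch of the computation as described, not the authors’ analysis code):

```python
def calibration_score(predicted_percent, actual_percent):
    """Positive = overconfident, negative = underconfident, near zero = well calibrated."""
    return predicted_percent - actual_percent

print(calibration_score(75, 50))  # +25: overconfident, as in the example above
print(calibration_score(40, 60))  # -20: underconfident
```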

Note taking increases test performance. So why doesn’t note taking improve calibration? Since note takers are more “skilled”, that is, have encoded and stored more information from the lecture, shouldn’t they be more “aware”, that is, better calibrated, as the Dunning-Kruger effect would imply? One possible explanation is that studying notes immediately increases the amount of information processed in working memory. The information that participants will be asked to recall shortly is highly active and available. This sense of availability produces the inflated (and false) prediction that much information will be remembered on the test. Is this overconfidence harmful to the learner? It could be harmful to the extent that individuals often self-generate predictions of how well they will do on a test in order to self-regulate their study behaviors. Poor calibration of these predictions could lead to the individual failing to recognize that he or she requires additional study time before all material is properly stored and able to be recalled.

If note taking itself is not the problem, then is there some way students can improve their calibration after studying in order to better regulate subsequent study efforts? The answer is “yes.” Research has shown that predictions of future performance improve if there is a short delay between studying information and predicting subsequent test performance (Thiede, Dunlosky, Griffin, & Wiley, 2005). In order to improve calibration after studying notes, students should be encouraged to wait, after studying their notes, before judging whether they need additional study time. In order to improve metacognitive awareness with respect to calibration, students need to understand that immediate judgments of how much they know may be inflated. They need to be aware that waiting a short time before judging whether they need more study will result in more effective self-regulation of study time.

References
Jennings, E., & Taraban, R. (May, 2014). Note-taking in the modern college classroom: Computer, paper and pencil, or listening? Paper presented at the Midwestern Psychological Association (MPA), Chicago, IL.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

Thiede, K. W., Dunlosky, J., Griffin, T. D., & Wiley, J. (2005). Understanding the delayed-keyword effect on metacomprehension accuracy. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(6), 1267-1280.


Teach Students How to Learn: A review of Saundra McGuire’s strategy-packed book

by Jessica Santangelo, Ph.D., Hofstra University

For those interested in helping students develop strong metacognitive skills, Dr. Saundra McGuire’s book, Teach Students How to Learn: Strategies You Can Incorporate Into Any Course to Improve Student Metacognition, Study Skills, and Motivation, is concise, practical, and much less overwhelming than trying to figure out what to do on your own. It is both a consolidation of the research surrounding metacognition, mindset, and motivation and a how-to guide for putting that research into practice.

I have been interested in metacognition for several years. Having waded through the literature on teaching metacognition (e.g., using tutors, student self-check, writing assignments, reflective writing, learning records, “wrappers”, or any number of other strategies) I found Dr. McGuire’s book to be an excellent resource. It places many of the strategies I already use in my courses in a larger context which helps me better articulate to my students and colleagues why I am teaching those strategies. I also picked up a few strategies I had not used previously.

While metacognition is the focus of the book, Dr. McGuire includes strategies for promoting a growth mindset (Chapter 4) and for boosting student motivation (Chapters 7, 8 and 9). I hadn’t expected such an explicit focus on these two topics, but the book makes clear why they are important: they increase the probability of success. If students (and faculty) have a growth mindset, believing that success is due to behaviors and actions rather than innate talent or being “smart”, they are more likely to embrace the metacognitive strategies outlined in the book. The same principle applies to a person’s emotional state. Both emotions and learning arise in the brain and affect each other. If students and faculty are motivated to learn, they are more likely to embrace the metacognitive strategies.

The part of the book that is perhaps most practically useful is Chapter 11: Teaching Learning Strategies to Groups. Dr. McGuire details an approach she has honed over many years to teach metacognitive skills to groups of students in one 50-minute presentation (a detailed discussion of the metacognitive skills and the evidence for them is provided in Chapters 3-5). Slides that can be tailored for any course are available at the book’s accompanying website, along with a video of Dr. McGuire giving the presentation, throughout which she sprinkles in data and anecdotes that foster a growth mindset and increase student motivation.

Before reading Dr. McGuire’s book, I had had success using several strategies to promote student metacognition. I had a student go from failing exams to making high C’s, and other students move from C’s to B’s and A’s. However, I felt like my approach was haphazard since I had pulled ideas from different places in the literature without a cohesive framework for implementation. The book provided the framework I was missing.

This semester, I decided to use Dr. McGuire’s cohesive 50-minute session to see its impact on my students. I adapted it to be an online workshop because 1) I have limited class time this semester, and 2) an online intervention may benefit my colleagues who are interested in this approach but who aren’t able to use a class period for this purpose. In addition to the online workshop, I re-emphasize key points from the book when students come to office hours. I use phrasing and examples presented in the book to reinforce a growth mindset and boost motivation. I intentionally discuss “metacognitive learning strategies” rather than “study skills” because, as Dr. McGuire points out, many students think they have all the “study skills” they need but are often intrigued by how “metacognitive learning strategies” (which most have not heard of before) could help them.

You can jump in with both feet, as I did, or start with one or two strategies and build from there. Either way, this book allows you to take advantage of Dr. McGuire’s extensive experience as Director Emerita of the Center for Academic Success at LSU. I anticipate my copy will become dog-eared with use as I continue to be metacognitive about my teaching and the strategies that work best for me, my students, and my colleagues. Stay tuned for an update on my online adaptation of Dr. McGuire’s session once the semester wraps up!


When is Metacognitive Self-Assessment Skill “Good Enough”?

by Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029 (with Steve Fleisher, CSU Channel Islands; Christopher Cogan, Independent Consultant; Karl Wirth, Macalester College; and Eric Gaze, Bowdoin College)

We noted the statement by Zell and Krizan (2014, p. 111) that “…it remains unclear whether people generally perceive their skills accurately or inaccurately” in Nuhfer, Cogan, Fleisher, Gaze and Wirth (2016). In that paper, we showed why innumeracy is a major barrier to the understanding of metacognitive self-assessment.

Another barrier to progress exists because scholars who separately attempt quantitative measures of self-assessment have no common ground from which to communicate and compare results. This occurs because there is no consensus on what constitutes “good enough” versus “woefully inadequate” metacognitive self-assessment skills. Does overestimating one’s competence by 5% justify labeling a person as “overconfident”? We do not believe so. We think that a reasonable range must be exceeded before such labels should be considered to apply.

The five of us are now working on a sequel to our Numeracy paper cited above. In the sequel, we interpret the data, taken from 1154 paired measures, from a behavioral science perspective. This extends our first paper’s description of the data through graphs and numerical analyses. Because we had a database of over a thousand participants, we decided to use it to propose the first classification scheme for metacognitive self-assessment. It defines categories based on the magnitudes of self-assessment inaccuracy (Figure 1).

Figure 1. Draft of a proposed classification scheme for metacognitive self-assessment, based upon magnitudes of inaccuracy of self-assessed competence as determined by the difference, in percentage points (ppts), between ratings of self-assessed competence and scores from testing of actual competence, both expressed as percentages.

If you wonder where the “good” definition comes from in Figure 1, we disclosed on page 19 of our Numeracy paper: “We designated self-assessment accuracies within ±10% of zero as good self-assessments. We derived this designation from 69 professors self-assessing their competence, and 74% of them achieving accuracy within ±10%.”

The other breaks that designate “Adequate,” “Marginal,” “Inadequate,” and “Egregious” admittedly derive from natural breaks in measures expressed in percentages. Distributing our 1154 participants across the above categories, we found that over two-thirds had adequate self-assessment skills, a bit over 21% exhibited inadequate skills, and the remainder fell within the category of “marginal.” We found that fewer than 3% qualified by our definition as “unskilled and unaware of it.”
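As a sketch of how such a scheme might be applied, the snippet below computes the signed inaccuracy in percentage points and bins it. Only the ±10 ppt “good” band comes from the paper quoted above; the remaining cut-points are hypothetical placeholders, since Figure 1’s exact breaks are not reproduced in this post:

```python
def self_assessment_inaccuracy(self_assessed_percent, tested_percent):
    """Signed miscalibration in percentage points (ppts)."""
    return self_assessed_percent - tested_percent

def classify(inaccuracy_ppts, good=10, adequate=20, marginal=30, inadequate=50):
    """Only the 'good' band (within +/-10 ppts) is defined in the post;
    the other cut-points here are placeholders, not the published breaks."""
    gap = abs(inaccuracy_ppts)
    if gap <= good:
        return "good"
    if gap <= adequate:
        return "adequate"
    if gap <= marginal:
        return "marginal"
    if gap <= inadequate:
        return "inadequate"
    return "egregious"

print(classify(self_assessment_inaccuracy(82, 75)))  # 7 ppts -> "good"
```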

These results indicate that the popular perspectives found in web searches that portray people in general as having grossly overinflated views of their own competence may be incomplete and perhaps even erroneous. Other researchers are now discovering that the correlations between paired measures of self-assessed competence and actual competence are positive and significant. However, to establish the relationship between self-assessed competency and actual competency appears to require more care in taking the paired measures than many of us researchers earlier suspected.

Do the categories as defined in Figure 1 appear reasonable to other bloggers, or do these conflict with your observations? For instance, where would you place the boundary between “Adequate” and “Inadequate” self-assessment? How would you quantitatively define a person who is “unskilled and unaware of it?” How much should a person overestimate/underestimate before receiving the label of “overconfident” or “underconfident?”

If you have measurements and data, please compare your results with ours before you answer. Data or not, be sure to become familiar with the mathematical artifacts summarized in our January Numeracy paper (linked above) that were mistakenly taken for self-assessment measures in earlier peer-reviewed self-assessment literature.

Our fellow bloggers constitute some of the nation’s foremost thinkers on metacognition, and we value their feedback on how Figure 1 accords with their experiences as we work toward finalizing our sequel paper.


Metacognition for Scholars: How to Engage in Deep Work

By Charity S. Peak, Ph.D. (Independent Consultant)

True confession: I’m addicted to shallow work. I wouldn’t say I’m a procrastinator as much as I am someone who prefers checking small things off my list or clearing my inbox over engaging in more complex tasks. I know I should be writing and researching. It’s just as much of my job as teaching or administrative duties, but I get to the end of my day and wonder why I didn’t have time for the most critical component of my promotion package – scholarship.

It turns out I’m not the only one suffering from this condition (far from it), and luckily there is a treatment plan available. It begins with metacognition about how one is spending time during the day, self-monitoring conditions that are most distracting or fruitful for productivity, and self-regulating behaviors in order to ritualize more constructive habits. Several authors offer suggestions for how to be more prolific (Goodson, 2013; Silvia, 2007), especially those providing writing prompts and 15-minute exercises, but few get to the core of the metacognitive process like Cal Newport’s (2016) recent Deep Work: Rules for Focused Success in a Distracted World. Newport, a professor of computer science at Georgetown and author of 5 books and a blog on college success, shares his strategies for becoming a prolific writer while balancing other faculty duties.

Newport defines deep work as the ability to focus without distraction on a cognitively demanding task, and he argues it is among the most difficult and most crucial capabilities of the 21st century. Creative thinking is becoming progressively rare in our distracted world, so those who can rise above shallow work are well positioned to demonstrate value to their employers, especially colleges and universities. In order to be creative and produce new ideas, scholars must engage in deep work regularly and for significant periods of time. Instead, Newport argues, most people spend their days multitasking through a mire of shallow work like email, which is not cognitively demanding and offers little benefit to academia, let alone to an individual’s promotion. In fact, he cites that “a 2012 McKinsey study found that the average knowledge worker now spends more than 60 percent of the workweek engaged in electronic communication and Internet searching, with close to 30 percent of a worker’s time dedicated to reading and answering e-mail alone” (Newport, 2016, p. 5). Sound like someone you know?

The good news is that if you carve out space for deep work, your professional career will soar. The first step is to become metacognitive about how you are spending your time during the day. One simple method is to self-monitor how you use your work days by keeping a grid near your computer or desk. At the end of every hour throughout your day, record how much time you actually spent doing your job duties of teaching (including prep and grading), writing and research, and service. Like a food diary or exercise journal, your shallow work addiction will become apparent quickly, but you will also gain metacognition about when and under which conditions you might attempt to fit in time for deep work.
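For those who prefer a digital tally to a paper grid, here is a minimal sketch in Python of the same idea; the hourly entries and category labels are illustrative, not Newport’s.

```python
# Minimal sketch of an hourly work log; entries and labels are illustrative.
from collections import Counter

hourly_log = [
    ("09:00", "email"), ("10:00", "writing"), ("11:00", "writing"),
    ("12:00", "meeting"), ("13:00", "grading"), ("14:00", "email"),
]

DEEP_WORK = {"writing", "research"}  # treat these duties as deep work
tally = Counter("deep" if task in DEEP_WORK else "shallow" for _, task in hourly_log)
print(tally)  # e.g., Counter({'shallow': 4, 'deep': 2})
```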

Once you have a grasp of the issue at hand, you can begin to self-regulate your behavior by blocking off time in your schedule in which you can engage in a deeper level of creative thinking. Each person will gravitate toward a modality conducive to his or her own working style and arrangements. The author offers a few choices for you to consider, which have proven successful for other scholars and business leaders:

  • Monastic: Eliminate or radically minimize shallow obligations, such as meetings and emails, in an effort to focus solely on doing one thing exceptionally well. Put an out-of-office response on your email, work somewhere other than your workplace, or take a year-long sabbatical in order to completely separate from frivolous daily tasks that keep you away from research and writing. Most teaching faculty and academic leaders are unable to be purely monastic due to other duties.
  • Bimodal: Divide your time, dedicating some clearly defined stretches to deep pursuits and leaving the rest open to everything else. During the deep time, act monastically – seek intense and uninterrupted concentration – but schedule other time in your day for shallow work to be completed. One successful scholar shared the possibility of teaching a very full load one semester but not teaching at all during the next as an example of engaging deeply in both critical duties.
  • Rhythmic: Also called the “chain method” or “snack writing,” create a regular habit of engaging in deep work, such as every morning before going into work or at the end of each day. Blocking off one’s calendar and writing every day has been proven to be one of the most productive habits for scholars attempting to balance their research with other duties (Gardiner & Kearns, 2011).
  • Journalistic: Fit deep work into your schedule wherever you can – 15 minutes here, an hour there. Over time you will become trained to shift into writing mode on a moment’s notice. This approach is usually most effective for experienced scholars who can switch easily between shallow and deep work. Inexperienced writers may find that the multitasking yields unproductive results, so they should proceed cautiously with this method.

The key is to do something! You must ritualize whichever method you choose in order to optimize your productivity. This may take some trial and error, but with your new-found metacognition about how you work best and some alternative strategies to try, you will be more likely to self-regulate your behaviors in order to be successful in your scholarly pursuits. If you try new approaches and are still not engaging in enough deep work, consider joining a writing group or finding a colleague to hold you accountable on a regular basis. Again, like diet and exercise, others can sometimes provide the motivation and deadlines that we are unable to provide for ourselves. Over time, your addiction to shallow work will subside and your productivity will soar… or so they tell me.

Resources:

Gardiner, M., & Kearns, H. (2011). Turbocharge your writing today. Nature, 475, 129-130. doi:10.1038/nj7354-129a

Goodson, P. (2013). Becoming an academic writer: 50 exercises for paced, productive, and powerful writing. Los Angeles: Sage.

Newport, C. (2016). Deep work: Rules for focused success in a distracted world. New York: Grand Central Publishing.

Silvia, P. J. (2007). How to write a lot: A practical guide to productive academic writing. Washington, D.C.: American Psychological Association.


The Importance of Teaching Effective Self-Assessment

by Stephen Chew, Ph.D., Samford University,  slchew@samford.edu

Say we have two students who are in the same classes. For sentimental reasons, I’ll call them Goofus and Gallant[i]. Consider how they each react in the following scenarios.

In General Psychology, their teacher always gives a “clicker question” after each section. The students click in their response and the results are projected for the class to see. The teacher then explains the correct answer. Gallant uses the opportunity to check his understanding of the concept and notes the kind of question the teacher likes to use for quizzes. Goofus thinks clicker questions are a waste of time because they don’t count for anything.

In their math class, the teacher always posts a practice exam about a week before every exam. A day or two before the exam, the teacher posts just the answers, without showing how the problems were solved. Gallant checks his answers and if he gets them wrong, he finds out how to solve those problems from the book, the teacher, or classmates. Goofus checks the answers without first trying to work the problem. He tries to figure out how to work backwards from the answer. He considers that good studying. He memorizes the exact problems on the practice exam and is upset if the problems on the exam don’t match them.

In history class, the teacher returns an essay exam along with the grading rubric. Both boys were marked off for answers the teacher did not find sufficiently detailed and comprehensive. Gallant compares his answer to answers from classmates who scored well on the exam to figure out what he did wrong and how to do better next time. Goofus looks at the exam and decides the teacher gives higher scores to students who write more and use bigger words. For the next exam, he doesn’t change how he studies, but he gives long, repetitive answers and uses fancy words even though he isn’t exactly sure what they mean.

In each case, the teacher offers opportunities for improving metacognitive awareness, but the reactions of the two boys are markedly different. Gallant recognizes the opportunity and takes advantage of it, while Goofus fails to see the usefulness of these opportunities and, when given feedback about his performance, fails to act on it. Just because teachers offer opportunities for improving metacognition does not mean that students recognize the importance of the activities or know how to take advantage of them. What is missing is an understanding of self-assessment, which is fundamental to developing effective metacognition.

For educational purposes, self-assessment occurs when students engage in an activity in order to gain insight into their level of understanding. The activity can be initiated either by the student or the teacher. Furthermore, to qualify as self-assessment, the student must understand and utilize the feedback from the activity. In summary, self-assessment involves students learning the importance and utility of self-assessments, teachers or students creating opportunities for self-assessment, and students learning how to use the results to improve their learning (Kostons, van Gog, & Paas, 2012).

Self-assessment is similar to formative assessment, which refers to any low-stakes activity designed to reveal student learning, but there are key differences (Angelo & Cross, 1993). First, students may undergo a formative assessment without understanding that it is an important learning opportunity; in self-assessment, the student understands and values the activity as an aid to learning. Second, students may not appreciate or use feedback from the formative assessment to improve their learning (Karpicke, Butler, & Roediger, 2009). Successful self-assessment involves using the feedback to identify misconceptions and knowledge gaps and to hone learning strategies (Kostons et al., 2012). Third, even high-stakes, summative assessments can be used for self-assessment. For example, students can use the results of an exam to evaluate how successful their learning strategies were and make modifications in preparation for the next exam. Fourth, formative assessments are usually administered by the teacher, whereas self-assessment can be initiated by either teachers or students. For example, students may take advantage of chapter review quizzes to test their understanding. If students do not understand the importance of self-assessment and how to do it effectively, they will not take advantage of formative assessment opportunities, and they will fail to use feedback to improve their learning.

The importance of learning effective self-assessment rests on a sound empirical and theoretical foundation. Teaching students to conduct self-assessment will help them become aware of and correct faulty metacognition, which in turn should contribute to more successful self-regulated learning (see Pintrich, 2004). Self-assessment also involves student recall and application of information, facilitating learning through the testing effect (see Roediger & Karpicke, 2006, for a review). The proper use of feedback has also been shown to improve student learning (Hattie & Yates, 2014). Finally, self-assessment activities can provide feedback to teachers on students’ level of understanding so that they can adjust their pedagogy accordingly.

Teachers play a critical role both in designing rich activities for self-assessment and in teaching students how to recognize valuable opportunities for self-assessment and take advantage of them. Some activities are more conducive to self-assessment than others. In the psychology class example above, Goofus understands neither the purpose of the clicker question nor the importance of the feedback. The teacher could have used a richer activity with the clicker questions to promote self-assessment (e.g., Crouch & Mazur, 2001). In the math class scenario, the teacher gives a practice exam but provides only the correct answers as feedback. Richer feedback would model the reasoning needed to solve the problems (Hattie & Yates, 2014) and support self-assessment. And even when feedback is given, students need to learn how to use it effectively and avoid misconceptions, as in the history class example where Goofus wrongly concludes the teacher wants longer answers with fancy words.

I believe effective self-assessment is a critical link between assessment activities and improved metacognition. It is a link that we teachers often fail to acknowledge. I suspect that effective teachers teach students how to carry out self-assessment of their understanding of course content. Less effective teachers may provide self-assessment opportunities, but these are either not effectively designed, or students may not recognize their importance or know how to take advantage of them.

There is not a lot of research on how to teach effective self-assessment. The existing research tends to focus mainly on providing self-assessment opportunities, not on how to get students to make use of them. I believe research on self-assessment would be highly valuable for teachers. Some of the key research questions are:

  • How can students be convinced of the importance of self-assessment?
  • Can self-assessment improve metacognition and self-regulation?
  • Can self-assessment improve student study strategies?
  • Can self-assessment improve long-term learning?
  • What are the best ways to design and implement self-assessments?
  • When and how often should opportunities for self-assessment be given?
  • What kind of feedback is most effective for different learning goals?
  • How can students be taught to use the feedback from self-assessments effectively?

Two fundamental learning challenges for college students, especially first-year students, are poor metacognitive awareness and poor study strategies (Kornell & Bjork, 2007; McCabe, 2011). The two problems are connected because using a poor study strategy increases false confidence without increasing learning (Bjork, Dunlosky, & Kornell, 2013). Improving both metacognitive awareness and study strategies of students is difficult to do (Susser & McCabe, 2013). I believe a promising but little studied intervention is to teach students the importance and the means of conducting effective self-assessment.

References

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers. San Francisco: Jossey-Bass.

Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417-444.

Crouch, C. H., & Mazur, E. (2001). Peer Instruction: Ten years of experience and results. American Journal of Physics, 69, 970-977.

Hattie, J. A. C., & Yates, G. C. R. (2014). Using feedback to promote learning. In V. Benassi, C. E. Overson, & C. M. Hakala (Eds.). Applying the science of learning in education: Infusing psychological science into the curriculum. Retrieved from the Society for the Teaching of Psychology web site: http://teachpsych.org/ebooks/asle2014/index.php.

Karpicke, J. D., Butler, A. C., & Roediger, H. L. III. (2009). Metacognitive strategies in student learning: Do students practise retrieval when they study on their own? Memory, 17, 471-479.

Kornell, N., & Bjork, R. A. (2007). The promise and perils of self-regulated study. Psychonomic Bulletin & Review, 14, 219-224.

Kostons, D., van Gog, T., Paas, F. (2012). Training self-assessment and task-selection skills: A cognitive approach to improving self-regulated learning. Learning and Instruction, 22, 121-132.

McCabe, J. (2011). Metacognitive awareness of learning strategies in undergraduates. Memory & Cognition, 39, 462-476.

Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16, 385-407.

Roediger, H. L., III., & Karpicke, J. D. (2006). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1, 181-210.

Susser, J. A., & McCabe, J. (2013). From the lab to the dorm room: Metacognitive awareness and use of spaced study. Instructional Science, 41, 345-363.

[i] Goofus and Gallant are trademarked names by Highlights for Children, Inc. No trademark infringement is intended. I use the names under educational fair use. As far as I know, Goofus and Gallant have never demonstrated good and poor metacognition.


Are Academic Procrastinators Metacognitively Deprived?

By Aaron S. Richmond
Metropolitan State University of Denver

Academic Procrastinators Brief Overview

One of my favorite articles is Academic Procrastination of Undergraduates: Low Self-Efficacy to Self-Regulate Predicts Higher Levels of Procrastination by Robert M. Klassen, Lindsey L. Krawchuk, and Sukaina Rajani (2007). Klassen and colleagues state that “…the rate for problematic academic procrastination among undergraduates is estimated to be at least 70-95% (Ellis & Knaus, 1977; Steel, 2007), with estimates of chronic or severe procrastination among undergraduates between 20% and 30%” (p. 916). Academic procrastination is “the intentional delay of an intended course of action, in spite of an awareness of negative outcomes” (Steel, 2007; as cited in Klassen et al., 2007, p. 916). Given these statistics, it is obvious that academic procrastination is an issue in higher education and that understanding which factors influence it and relate to its frequency is of utmost importance.

In their 2007 article, Klassen and colleagues conducted two studies to examine the relationships among academic procrastination, self-efficacy, self-regulation, and self-esteem, and then to examine those relationships within “negative procrastinators” (p. 915). In Study 1, they surveyed 261 undergraduate students and found that academic procrastination was inversely correlated with college/university GPA, self-regulation, academic self-efficacy, and self-esteem. That is, as students’ frequency of academic procrastination went down, their GPA and self-reported scores of self-efficacy, self-esteem, and self-regulation went up. They also found that self-regulation, self-esteem, and self-efficacy predicted academic procrastination.

In Study 2, Klassen and colleagues (2007) were interested in whether there was a difference between negative and neutral procrastinators, that is, between cases where procrastinating had a negative effect (e.g., a grade penalty for assignment tardiness) and cases where it had a neutral effect (e.g., no penalty for assignment tardiness). They surveyed 194 undergraduates and asked them to rate how academic procrastination affected, either positively or negatively, specific academic tasks (reading, research, etc.). They then divided the sample into students who self-reported that academic procrastination negatively affected them in some way and those who reported it affected them positively or neutrally. They found significant differences in GPA, daily procrastination, task procrastination, predicted class grade, actual class grade, and self-reported self-regulation between negative and neutral procrastinators. They also found that students most often procrastinated on writing tasks.

So Where Does Metacognition Come in to Play?

Because a main focus of their study was self-regulation, I think Klassen and colleagues’ work gives us great insight into the potential role (either causal or predictive) that metacognition plays in academic procrastination. First, in Study 1, they used the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich, Smith, Garcia, & McKeachie, 1993) to measure self-efficacy for self-regulation. This MSLQ subscale assesses students’ awareness of knowledge and control of cognition (Klassen et al., 2007). It asks questions like “If course materials are difficult to understand, I change the way I read the material” or “I try to change the way I study in order to fit the course requirements and instructor’s teaching style” (p. 920). As self-efficacy for self-regulation is a subset of metacognition, it is clear to me that these questions indirectly, if not directly, at least partially measure elements of metacognition.

This makes me wonder whether the results of Klassen et al.’s study would hold true with other forms of metacognition, such as metacognitive awareness. For example, how does academic procrastination relate to the metacognitive awareness factors that Schraw and Dennison (1994) suggest, such as knowledge of cognition (e.g., declarative knowledge, procedural knowledge, conditional knowledge) versus regulation of cognition (e.g., planning, information management, monitoring, evaluation)? Or, given that Klassen et al. did not use the entire battery of measures in the MSLQ, how does academic procrastination relate to other MSLQ scales such as Learning Strategies, Help Seeking, and Metacognitive Self-Regulation (Pintrich et al., 1993)? Or how might Klassen’s results relate to behavioral measures of metacognition such as calibration, or to the Need for Cognition (Cacioppo & Petty, 1982)? These questions suggest that metacognition could play a very prominent role in academic procrastination.

There Are Always More Questions Than Answers

To my knowledge, researchers have yet to replicate Klassen et al.’s (2007) study with an eye toward investigating whether metacognitive variables predict and mediate rates of academic procrastination. Therefore, I feel I must wrap up this blog (as I always do) with a few questions/challenges/inspirational ideas:

  1. What is the relationship among metacognitive awareness and academic procrastination?
  2. If there is a relationship between metacognition and academic procrastination, are there mediating and moderating variables that contribute to the relationship between metacognition and academic procrastination? For example, critical thinking? Intelligence? Past academic performance? The type of content and experience with this content (e.g., science knowledge)?
  3. Are there specific elements of metacognition (e.g., self-efficacy vs. metacognitive awareness vs. calibration, vs. monitoring, etc.) that predict the frequency of academic procrastination?
  4. Can metacognitive awareness training reduce the frequency of academic procrastination?
  5. If so, what type of training best reduces academic procrastination?

 References

Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42(1), 116.

Ellis, A., & Knaus, W. J. (1977). Overcoming procrastination. NY: New American Library

Klassen, R. M., Krawchuk, L. L., & Rajani, S. (2008). Academic procrastination of undergraduates: Low self-efficacy to self-regulate predicts higher levels of procrastination. Contemporary Educational Psychology, 33, 915-931. doi:10.1016/j.cedpsych.2007.07.001

Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the motivated strategies for learning questionnaire (MSLQ). Educational and Psychological Measurement, 53, 801–813.

Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19, 460-475.

Steel, P. (2007). The nature of procrastination: A meta-analytic and theoretical review of quintessential self-regulatory failure. Psychological Bulletin, 133, 65–94.


The Challenge of Deep Learning in the Age of LearnSmart Course Systems

by Lauren Scharff, Ph.D. (U. S. Air Force Academy)

One of my close friends and colleagues can reliably be counted on to point out that students are rational decision makers. There is only so much time in their days, and they have full schedules. If there are ways for students to spend less time per course and still “be successful,” they will find them. Unfortunately, their efficient choices may short-change their long-term, deep learning.

This tension between efficiency and deep learning was again brought to my attention when I learned about the “LearnSmart” (LS) text application that automatically comes with the e-text chosen by my department for the core course I’m teaching this semester. On the plus side, the publisher has incorporated learning science (metacognitive prompts and spacing of review material) into the design of LearnSmart. Less positively, some aspects of the LearnSmart design seem to lead many students to choose efficiency over deep learning.

In a nutshell, the current LS design invites learning shortcuts in several ways. Pre-highlighted text discourages reading of non-highlighted material, and the fact that the LS quiz questions come primarily from highlighted material reinforces those selective reading tendencies. A less conspicuous learning trap results from the design of the LS quiz credit algorithm that incorporates the metacognitive prompts. The metacognitive prompts not only take a bit of extra time to answer; students also only get credit for completing questions for which they indicate good understanding of the question material. If they indicate questionable understanding, even if they ultimately answer correctly, that question does not count toward the required number of pre-class reading check questions. [If you’d like more details about the LS quiz process design, please see the text at the bottom of this post.]

Last semester, the fact that many of our students were choosing efficiency over deep learning became apparent when the first exam was graded. Despite very high completion of the LS pre-class reading quizzes and lively class discussions, exam grades on average were more than a letter grade lower than previous semesters.

The bottom line is, just like teaching tools, learning tools are only effective if they are used in ways that align with objectives. As instructors, our objectives typically are student learning (hopefully deep learning in most cases). Students’ objectives might seem to be correlated with learning (e.g. grades) or not (e.g. what is the fastest way to complete this assignment?). If we instructors design our courses or choose activities that allow students to efficiently (quickly) complete them while also obtaining good grades, then we are inadvertently supporting short-cuts to real learning.

So, how do we tackle our efficiency-shortcut challenge as we go into this new semester? The publisher offers a tool to help us track student responses by level of self-reported understanding and correctness. We can see whether any students are giving the majority of their responses in the “I know it” category. If many of those are also incorrect, it’s likely that they are prioritizing short-term efficiency over long-term learning, and we can talk to them one-on-one about their choices. That’s helpful, but it’s reactive.

The real question is, how do we get students to consciously prioritize their long-term learning over short-term efficiency? For that, I suggest additional explicit discussion and another layer of metacognition. I plan to check in with the students regularly, hold class discussions aimed at bringing their choices about their learning behaviors into conscious awareness, and positively reinforce their self-regulation of deep-learning behaviors.

I’ll let you know how it goes.

——————————————–

Here is some additional background on the e-text and the complimentary LearnSmart (LS) text.

There are two ways to access the text. The first is an electronic version of the printed text, including nice annotation capabilities for students who want to underline, highlight, or take notes. The second is through the LS chapters. As mentioned above, when students open these chapters, they will find that some of the text has already been highlighted for them!

As they read through the LS chapters, students are periodically prompted with LS quiz questions (drawn primarily from highlighted material). These questions are where some of the learning science comes in. Students are given a question about the material, but rather than being shown the multiple-choice response options right away, they are first given a metacognitive prompt: they are asked how confident they are that they know the answer without seeing the response options. They can choose “I know it,” “Think so,” “Unsure,” or “No idea.” Once they have rated their awareness of their understanding, they are given the response options and try to answer the question correctly.

This next point is key: it turns out that in order to get credit for question completion in LS, students must do BOTH of the following: 1) choose “I know it” when indicating understanding, and 2) answer the question correctly. If students indicate any other level of understanding, or if they answer incorrectly, LS will give them more questions on that topic, and the effort for that question won’t count towards completion of the required number of questions for the pre-class activity.

And there’s the rub. Efficient students quickly learn that they can complete the pre-class reading quiz activity much more quickly if they choose “I know it” for all the metacognitive understanding probes before each question. If they guess at the subsequent answer and get it correct, it counts toward their completion of the activity and they move on. If they answer incorrectly, LS gives them another question from that topic, but they are no worse off with respect to time and effort than if they had indicated that they weren’t sure of the answer.
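To make the incentive structure explicit, here is a minimal sketch in Python of the completion rule as described above (my reading of the design, not the publisher’s actual code):

```python
# Minimal sketch of the LS completion rule as described above; not actual LS code.

def counts_toward_completion(confidence, answered_correctly):
    """A question counts only when the student claims 'I know it' AND answers correctly."""
    return confidence == "I know it" and answered_correctly

# The shortcut: always claiming "I know it" costs nothing extra, because an
# incorrect answer simply triggers another question either way.
attempts = [
    ("I know it", True),    # counts toward the required number
    ("I know it", False),   # does not count; another question is served
    ("Unsure", True),       # does not count, despite being answered correctly
]
completed = sum(counts_toward_completion(c, ok) for c, ok in attempts)
print(completed)  # -> 1
```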

If students actually take the time to take advantage of, rather than shortcut, the LS quiz features (there are additional ones I haven’t mentioned here), their deep learning should be enhanced. However, unless they come to value deep learning over efficiency and short-term grades (e.g., quiz completion), there is no benefit to the technology. In fact, it might further undermine their learning through a false sense of understanding.


Metacognition in Academic and Business Environments

by Dr. John Draeger, SUNY Buffalo State

I gave two workshops on metacognition this week: one to a group of business professionals associated with the Organizational Development Network of Western New York and the other to faculty at Genesee Community College. Both workshops began with the uncontroversial claim that being effective (e.g., in business, learning, teaching) requires finding the right strategy for the task at hand. The conversation centered on how metacognition can help make that happen. For example, metacognition encourages us to be explicit and intentional about our planning, to monitor our progress, to make adjustments along the way, and to evaluate our performance afterwards. While not a “magic elixir,” metacognition can help us become more aware of where, when, why, and how we are or are not effective.


As I prepared for these two workshops, I decided to include one of my favorite metacognition resources as a key part of each. Kimberly Tanner (2012) offers a series of questions that prompt metacognitive planning, monitoring, and evaluation. By way of illustration, I adapted Tanner’s questions to a familiar academic task, namely reading. Not all students are as metacognitive as we would like them to be. When asked to complete a reading assignment, for example, some students will interpret the task as turning a certain number of pages (e.g., read pages 8-19). They read the words, flip the pages, and the task is complete when they reach the end. Savvy students realize that turning pages is not much of a reading strategy. They will reflect upon their professor’s instructions and course objectives, and these reflections can help them intentionally adopt an appropriate reading strategy. In short, these savvy students are engaging in metacognition. They are guided by Tanner-style questions like those in the table below.

 

Table: Using metacognition to read more effectively (Adapted from Tanner, 2012)

Task: Reading

Planning:
  • What do I already know about this topic?
  • How much time do I need to complete the task?
  • What strategies do I intend to use?

Monitoring:
  • What questions are arising?
  • Are my strategies working?
  • What is most confusing?
  • Am I struggling with motivation or content?
  • What other strategies are available?

Evaluating:
  • To what extent did I successfully complete the task?
  • To what extent did I use the resources available to me?
  • What confusions do I have that still need to be clarified?
  • What worked well?

Task: Big picture

Planning:
  • Why is it important to learn this material?
  • How does this reading align with course objectives?

Monitoring:
  • To what extent has completing this reading helped me with other learning goals?

Evaluating:
  • What have I learned in this course that I could use in the future?

 

After considering the table with reference to student reading, I asked the business group how the table might be adapted to a business context. They pointed out that middle managers are often flummoxed by company initiatives that either lack specificity or fail to align with the company’s mission and values. This is reminiscent of students who are paralyzed by what they take to be an ill-defined assignment (e.g., “write a reflection paper on what you just read”). Like the student scrambling to write the paper the night before, business organizations can be reactive. Like the student who tends to do what they’ve done before in their other classes (e.g., put some quotations in a reflection paper to make it sound fancy), businesses are often carried along by organizational culture and past practice. When facing adversity, for example, organizational structure often suggests that doing something now (anything! Just do it!) is preferable to doing nothing at all. Like savvy students, however, savvy managers recognize the importance of explicitly considering and intentionally adopting the response strategies most likely to further organizational goals. This requires metacognition, and adapting the Tanner-style table is a place to start.

When I discussed the Tanner-style table with the faculty at Genesee Community College, they offered a wide variety of suggestions concerning how the table might be adapted for use in their courses. For example, some suggested that my reading example presupposed that students actually complete their reading assignments; they offered suggestions for how metacognitive prompts could be incorporated early in the course to bring out the importance of the reading for mastery of course material. Others suggested that metacognitive questions could be used to supplement prepackaged online course materials. Another offered that he sometimes “translates” historical texts into more accessible English but is not always certain whether this is good for students. In response, someone pointed out that metacognitive prompts could help the faculty member more explicitly formulate the learning goals for the class and then consider whether the “translated” texts align with those goals.

In both business and academic contexts, I stressed that there is nothing “magical” about metacognition. It is not a quick fix or a cure-all. However, it does prompt us to ask difficult and often uncomfortable questions about our own efficacy. For example, participants in both workshops reported a tendency that all of us have to want to do things “our own way” even when this is not most effective. Metacognition puts us on the road towards better planning, better monitoring, better acting, and better alignment with our overall goals.

 


References

Tanner, K. D. (2012). Promoting student metacognition. CBE-Life Sciences Education, 11(2), 113-120.


Quantifying Metacognition — Some Numeracy behind Self-Assessment Measures

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

Early this year, Lauren Scharff directed us to what might be one of the most influential reports on the quantification of metacognition: Kruger and Dunning’s 1999 “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.” In the 16 years that have since elapsed, a popular belief sprang from that paper and became known as the “Dunning-Kruger effect.” Wikipedia describes the effect as a cognitive bias in which relatively unskilled individuals suffer from illusory superiority, mistakenly assessing their ability to be much higher than it really is. Wikipedia thus describes a true metacognitive handicap: a lack of ability to self-assess. I consider Kruger and Dunning (1999) seminal because it represents what may be the first attempt to establish a way to quantify metacognitive self-assessment. Yet, as time passes, we always learn ways to improve on any good idea.

At first, quantifying the ability to self-assess seems simple. It appears that comparing a direct measure of confidence to perform, taken through one instrument, with a direct measure of demonstrated competence, taken through another instrument, should do the job nicely. For people skillful in self-assessment, the scores on the self-assessment and performance measures should be about equal. Seriously large differences can indicate underconfidence on one hand or “illusory superiority” on the other.

The Signal and the Noise

In practice, measuring self-assessment accuracy is not nearly so simple. The instruments of social science yield data consisting of two components: the signal, which expresses the relationship between our actual competency and our self-assessed feelings of competency, and significant noise generated by human error and inconsistency.

By analogy, consider the signal as your favorite music on a radio station, the measuring instrument as your radio receiver, the noise as the static that intrudes on your favorite music, and the data as the actual mix of signal and noise that you hear. The radio signal may truly exist, but unless we construct suitable instruments to detect it, we will not be able to generate convincing evidence that it even exists. Such failures can lead to the conclusion that metacognitive self-assessment is no better than random guessing.

Your personal metacognitive skill is analogous to an ability to tune to the clearest signal possible. In this case, you are “tuning in” to yourself—to your “internal radio station”—rather than tuning the instruments that measure this signal externally. In developing self-assessment skill, you are working to attune your personal feelings of competence to reflect the clearest and most accurate self-assessment of your actual competence. Feedback from the instruments has value because it helps us see how well we have achieved the ability to self-assess accurately.

Instruments and the Data They Yield

General, global questions such as: “How would you rate your ability in math?” “How well can you express your ideas in writing?” or “How well do you understand science?” may prove to be crude, blunt self-assessment instruments. Instead of single general questions, more granular instruments like knowledge surveys that elicit multiple measures of specific information seem needed.

Because the true signal is harder to detect than often supposed, researchers need a critical mass of data to confirm it. Pressure to publish in academia can cause researchers to rush to publish results from small databases obtainable in a brief time rather than spending the time, sometimes years, needed to generate a database of sufficient size to provide reproducible results.

Understanding Graphical Depictions of Data

Some graphical conventions that have become almost standard in the self-assessment literature depict ordered patterns arising from random noise. These patterns invite researchers to interpret the order as produced by the self-assessment signal. Graphing nonsense data generated from random numbers in the same graphical formats reveals what pure randomness looks like when depicted in any given convention. Knowing the patterns of randomness provides the numeracy needed to understand self-assessment measurements.
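To see what such randomness can look like, here is a minimal sketch in Python (not the simulation code from the Numeracy paper, and assuming uniformly distributed random scores) that pairs purely random “self-assessed” and “actual” scores and then averages them by performance quartile:

```python
# Minimal sketch: purely random paired scores, averaged by performance quartile.
import numpy as np

rng = np.random.default_rng(0)
n = 1154  # same order of magnitude as the database discussed above
actual = rng.uniform(0, 100, n)          # random "tested competence"
self_assessed = rng.uniform(0, 100, n)   # random, unrelated "self-assessed competence"

quartile = np.digitize(actual, np.percentile(actual, [25, 50, 75]))
for q in range(4):
    mask = quartile == q
    print(f"Quartile {q + 1}: actual mean = {actual[mask].mean():5.1f}, "
          f"self-assessed mean = {self_assessed[mask].mean():5.1f}")
# The lowest quartile appears to "overestimate" and the highest to "underestimate,"
# even though the self-assessments here are pure noise centered near 50.
```

The ordered pattern produced by this nonsense data is simply regression of random self-assessments toward the overall mean, and it is exactly the kind of pattern that can be mistaken for a self-assessment signal.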

Some obvious questions I am anticipating follow: (1) How do I know if my instruments are capturing mainly noise or signal? (2) How can I tell when a database (either my own or one described in a peer-reviewed publication) is of sufficient size to be reproducible? (3) What are some alternatives to single global questions? (4) What kinds of graphs portray random noise as a legitimate self-assessment signal? (5) When I see a graph in a publication, how can I tell if it is mainly noise or mainly signal? (6) What kind of correlations are reasonable to expect between self-assessed competency and actual competency?

Are There Any Answers?

Getting answers to these meaty questions requires more than a short blog post, but some help is just a click or two away. This blog directs readers to “Random Number Simulations Reveal How Random Noise Affects the Measurements and Graphical Portrayals of Self-Assessed Competency” (Numeracy, January 2016), with acknowledgments to my co-authors Christopher Cogan, Steven Fleisher, Eric Gaze, and Karl Wirth for their infinite patience with me on this project. Numeracy is an open-access journal, and you can download the paper for free. Readers will likely see the self-assessment literature in a different way after reading the article.