Distance Graduate Programs and Metacognition

by Tara Beziat at Auburn University at Montgomery 

As enrollment in online programs and online courses continues to increase (Merriam & Bierema, 2014), institutions have recognized the importance of building quality learning experiences for their students. To accomplish this goal, colleges and universities provide professional development, access to instructional designers, and videos to help faculty build these courses. The focus is on how to put the content in an online setting. What I think is lacking in this process are the “in the moment” discussions about managing learning. Students often do not get to “hear” how other students are tackling the material for the course and how they are preparing for the assignments. Activities that foster metacognition are not built into the instructional design process.

In the research on learning and metacognition, there is a focus on undergraduates (possibly because they are an easily accessible population for college researchers) and P-12 students. The literature says little about helping graduate students hone their metacognitive strategies. Knowing the importance of metacognition and its relationship to learning, I have incorporated activities that focus on metacognition into my online graduate courses.

Though graduate students are less likely to procrastinate than undergraduate students (Cao, 2012), learning online requires the use of self-regulation strategies (Dunn & Rakes, 2015). One reason many students give for liking distance courses is that they can do the work at their own pace and at a time that works with their schedule. What they often do not take into account is that they need to build time into their schedule for their course work. Dunn and Rakes (2015) found that online graduate students are not always prepared to be “effective learners” but can improve their self-regulation skills in an online course. Graduate students in an online course need to use effective metacognitive strategies, like planning, self-monitoring, and self-evaluation.

In addition to managing their time, which may now include family and work responsibilities, graduate students may find that their course work presents its own set of new challenges. Graduate work asks students to engage in complex cognitive processes, often in an online setting.

To help graduate students with their learning process, I have built metacognitive questions into our discussion posts. For each module of learning, students are asked to answer a metacognitive question related to the planning, monitoring, or evaluation of their learning. They are also asked to answer a content question. I have found their answers to the metacognitive questions surprising, enlightening, and helpful. Additionally, these discussions have provided insights into how students prepare for the class, how they use course resources in their own classrooms, and how they manage their time while juggling “life.”

Early in the semester I ask, “How are you going to actively monitor your learning in this course?” Often students respond that they will check their grades on Blackboard (our course management system); specifically, they will check to see how they did on assignments. I raise a concern with these ways of monitoring: students need to be doing some form of self-evaluation before turning in their work. If they wait until they get the “grade” to know how well they are doing, it may be too late. Other students have a better sense of how to monitor their knowledge during a course. Below are some examples:

  • “setting my goals with each unit and reflecting back after each reading to be sure my goals and understanding are met.”
  • “I intend on reading the required text and being able to ask myself the following questions ‘how well did I understand this’ or ‘can I explain this information to a classmate if asked to do so.’”
  • “comparing my knowledge with the course objectives”
  • “checking my work to make sure the guideline set by the rubric are being followed.”

These are posted in the discussions, where fellow classmates can see the strategies others are using to manage and monitor their learning. In their responses, students will note that they had not thought about doing x but plan to try it. By embedding a metacognitive prompt in each of the eight modules and giving students a chance to share how they monitor their learning, I hope to build a better understanding of the importance of metacognition in the learning process and to give them ways to foster metacognition in their own classrooms.

Later in the class I ask the students how things are going with their studying. Yes, this is a graduate-level class. But this may be the students’ first graduate-level course, or this may be their first online course. Or this could be their last class in a fully online program, but we can always improve our learning. Below are some examples of students’ responses to: What confusions have you gotten clarified? What changes have you made to your study habits or learning strategies?

  • “The only changes to the study habits or strategies that I have used is to try the some of the little tips or strategies that come up in the modules or discussions.”
  • “I allow myself more time to study.”
  • “I have reduced the amount of notes I take.  Now, my focus is more on summarizing text and/or writing a “gist” for each heading.”
  • “I continue to use graphic organizers to assist me with learning and understanding new information.  This is a tactic that is working well for me.”

As educators, we need to make sure we are addressing metacognition with our graduate students and that we are providing opportunities for them to practice metacognition in an online setting. Additionally, I would be interested in conducting future research that examines online graduate students’ awareness of metacognitive strategies, their use of these strategies in an online learning environment, and ways to improve their metacognitive strategies. If you would be interested in collaborating on a project about online graduate students’ metacognitive skills, send me an email.

References

Cao, L. (2012). Differences in procrastination and motivation between undergraduate and graduate students. Journal of the Scholarship of Teaching and Learning, 12(2), 39-64.

Dunn, K.E., & Rakes, G.C. (2015). Exploring online graduate students’ responses to online self-regulation training. Journal of Interactive Online Learning, 13(4), 1-21.

Merriam, S.B., & Bierema, L.L. (2014). Adult learning: Linking theory and practice. San Francisco, CA: Jossey-Bass.

Does Processing Fluency Really Matter for Metacognition in Actual Learning Situations? (Part Two)

By Michael J. Serra, Texas Tech University

Part II: Fluency in the Classroom

In the first part of this post, I discussed laboratory-based research demonstrating that learners judge their knowledge (e.g., memory or comprehension) to be better when information seems easy to process and worse when information seems difficult to process, even when eventual test performance is not predicted by such experiences. In this part, I question whether these outcomes are worth worrying about in everyday, real-life learning situations.

Are Fluency Manipulations Realistic?

Researchers who obtain effects of perceptual fluency on learners’ metacognitive self-evaluations in the laboratory suggest that similar effects might also obtain for students in real-life learning and study situations. In such cases, students might study inappropriately or inefficiently (e.g., under-studying when they experience a sense of fluency or over-studying when they experience a sense of disfluency). But to what extent should we be worried that any naturally-occurring differences in processing fluency might affect our students in actual learning situations?

Look at the accompanying figure. This figure presents examples of several ways in which researchers have manipulated visual processing fluency to demonstrate effects on participants’ judgments of their learning. When was the last time you saw a textbook printed in a blurry font, or featuring an upside-down passage, or involving a section where pink text was printed on a yellow background? When you present in-person lectures, do your PowerPoints feature any words typed in aLtErNaTiNg CaSe? (Or, in terms of auditory processing fluency, do you deliver half of the lesson in a low, garbled voice and half in a loud, booming voice?) You would probably – and purposefully – avoid such variations in processing fluency when presenting to or creating learning materials for your students. Yet, even in the laboratory with these exaggerated fluency manipulations, the effects of perceptual fluency on both learning and metacognitive monitoring are often small (i.e., small differences between conditions). Put differently, it takes a lot of effort and requires very specific, controlled conditions to obtain effects of fluency on learning or metacognitive monitoring in the laboratory.

Will Fluency Effects Occur in the Classroom?

Careful examination of methods and findings from laboratory-based research suggests that such effects are unlikely to occur in real-life situations because of how fragile these effects are in the laboratory. For example, processing fluency only seems to affect learners’ metacognitive self-evaluations of their learning when they experience both fluent and disfluent information; experiencing only one level of fluency usually won’t produce such effects. Specifically, participants only judge information presented in a large, easy-to-read font as better learned than information presented in a small, difficult-to-read font when they experience some of the information in one format and some in the other; when they only experience one format, the formatting does not affect their learning judgments (e.g., Magreehan et al., 2015; Yue et al., 2013).

The levels of fluency – and, perhaps more importantly, disfluency – must also be fairly distinguishable from each other to have an effect on learners’ judgments. Consider the example formatting in the accompanying figure: learners must notice a clear difference in formatting and in their experience of fluency across the formats for the formatting to affect their judgments. Learners likely must also have limited time to process the disfluent information; if they have enough time to process the disfluent information, the effects on both learning and on metacognitive judgments disappear (cf. Yue et al., 2013; but see Magreehan et al., 2015).

Perhaps most important, the effects of fluency on learning judgments are easiest to obtain in the laboratory when the learning materials are low in authenticity or do not have much natural variation in intrinsic difficulty. For example, participants will base their learning judgments on perceptual fluency when all of the items they are asked to learn are of equal difficulty, such as pairs of unrelated words (e.g., “CAT – FORK”, “KETTLE – MOUNTAIN”), but they ignore perceptual fluency once there is a clear difference in difficulty, such as when related word pairs (e.g., “FLAME – FIRE”, “UMBRELLA – RAIN”) are also part of the learning materials (cf. Magreehan et al., 2015).

Consider a real-life example: perhaps you photocopied a magazine article for your students to read, and the image quality of that photocopy was not great (i.e., it was perceptually disfluent). We might be concerned that the poor image quality would lead students to incorrectly judge that they have not understood the article, when in fact they had been able to comprehend it quite well (despite the image quality). Given the evidence above, however, this instance of processing disfluency might not actually affect your students’ metacognitive judgments of their comprehension. Students in this situation are only being exposed to one level of fluency (i.e., just disfluent formatting), and the level of disfluency might not be that discordant from the norm (i.e., a blurry or dark photocopy might not be that abnormal). Further, students likely have ample time to overcome the disfluency while reading (i.e., assuming the assignment was to read the article as homework at their own pace), and the article likely offers a variety of information besides fluency that students can use for their learning judgments (e.g., students might use their level of background knowledge or familiarity with key terms in the article as more-predictive bases for judging their comprehension). So, despite the fact that the photocopied article might be visually disfluent – or at least might produce some experience of disfluency – it would not seem likely to affect your students’ judgments of their own comprehension.

In summary, at present it seems unlikely that the experience of perceptual processing fluency or disfluency will affect students’ metacognitive self-evaluations of their learning in actual learning or study situations. Teachers and designers of educational materials might of course strive by default to present all information to students clearly and in ways that are perceptually fluent, but it seems premature – and perhaps even unnecessary – for them to worry about rare instances where information is not perceptually fluent, especially if there are counteracting factors such as students having ample time to process the material, there being only one level of fluency, or students having other information upon which to base their judgments of learning.

Going Forward

The question of whether or not laboratory findings related to perceptual fluency will transfer to authentic learning situations certainly requires further empirical scrutiny. At present, however, the claim that highly-contrived effects of perceptual fluency on learners’ metacognitive judgments will also impair the efficacy of study behaviors in more naturalistic situations seems unfounded and unlikely.

Researchers might be wise to abandon the examination of highly-contrived fluency effects in the laboratory and instead examine more realistic variations in fluency in more natural learning situations to see if such conditions actually matter for students. For example, Carpenter and colleagues (Carpenter et al., in press; Carpenter et al., 2013) have been examining the effects of a factor they call instructor fluency – the ease or clarity with which information is presented – on learning and judgments of learning. Importantly, this factor is not perceptual fluency, as it does not involve purported variations in perceptual processing. Rather, instructor fluency invokes the sense of clarity that learners experience while processing a lesson. In experiments on this topic, students watched a short video-recorded lesson taught by either a confident and well-organized (“fluent”) instructor or a nervous and seemingly disorganized (“disfluent”) instructor, judged their learning from the video, and then completed a test over the information. Much as in research on perceptual fluency, participants judged that they learned more from the fluent instructor than from the disfluent one, even though test performance did not differ by condition.

These findings related to instructor fluency do not validate those on perceptual fluency. Rather, I would argue that they actually add further nails to the coffin of perceptual fluency. There are bigger problems than perceptual fluency that we could be worrying about in order to help our students learn and help them make accurate metacognitive judgments. Perhaps instructor fluency is one of those problems, and perhaps it isn’t. But it seems that perceptual fluency is not a problem we should be greatly concerned about in realistic learning situations.

References

Carpenter, S. K., Mickes, L., Rahman, S., & Fernandez, C. (in press). The effect of instructor fluency on students’ perceptions of instructors, confidence in learning, and actual learning. Journal of Experimental Psychology: Applied.

Carpenter, S. K., Wilford, M. M., Kornell, N., & Mullaney, K. M. (2013). Appearances can be deceiving: instructor fluency increases perceptions of learning without increasing actual learning. Psychonomic Bulletin & Review, 20, 1350-1356.

Magreehan, D. A., Serra, M. J., Schwartz, N. H., & Narciss, S. (2015, advance online publication). Further boundary conditions for the effects of perceptual disfluency on judgments of learning. Metacognition and Learning.

Yue, C. L., Castel, A. D., & Bjork, R. A. (2013). When disfluency is—and is not—a desirable difficulty: The influence of typeface clarity on metacognitive judgments and memory. Memory & Cognition, 41, 229-241.

Does Processing Fluency Really Matter for Metacognition in Actual Learning Situations? (Part One)

By Michael J. Serra, Texas Tech University

Part I: Fluency in the Laboratory

Much recent research demonstrates that learners judge their knowledge (e.g., memory or comprehension) to be better when information seems easy to process and worse when information seems difficult to process, even when eventual test performance is not predicted by such experiences. Laboratory-based researchers often argue that the misuse of such experiences as the basis for learners’ self-evaluations can produce metacognitive illusions and lead to inefficient study. In the present post, I review these effects obtained in the laboratory. In the second part of this post, I question whether these outcomes are worth worrying about in everyday, real-life learning situations.

What is Processing Fluency?

Have you ever struggled to hear a low-volume or garbled voicemail message, or struggled to read small or blurry printed text? Did you experience some relief after raising the volume on your phone or putting on your reading glasses and trying again? What if you didn’t have your reading glasses with you at the time? You might still be able to read the small printed text, but it would take more effort and might literally feel more effortful than if you had your glasses on. Would the feeling of effort you experienced while reading without your glasses affect your appraisal of how much you liked or how well you understood what you read?

When we process information, we often have a co-occurring experience of processing fluency: the ease or difficulty we experience while physically processing that information. Note that this experience is technically independent of the innate complexity of the information itself. For example, an intricate and conceptually confusing physics textbook might be printed in a large, easy-to-read font (high difficulty, perceptually fluent), while a child might express a simple message to you in a voice that is too low to be easily understood over the noise of a birthday party (low difficulty, perceptually disfluent).

Fluency and Metacognition

Certainly, we know that the innate complexity of learning materials is going to relate to students’ acquisition of new information and eventual performance on tests. Put differently, easy materials will be easy for students to learn and difficult materials will be difficult for students to learn. And it turns out that perceptual disfluency – difficulty processing information – can actually improve memory under some limited conditions (for a detailed examination, see Yue et al., 2013). But how does processing fluency affect students’ metacognitive self-evaluations of their learning?

In the modal laboratory-based examination of metacognition (for a review, see Dunlosky & Metcalfe, 2009), participants study learning materials (these might be simple memory materials or complex reading materials), make explicit metacognitive judgments in which they rate their learning or comprehension for those materials, and then complete a test over what they’ve studied. Researchers can then compare learners’ judgments to their test performance in a variety of ways to determine the accuracy of their self-evaluations (for a review, see Dunlosky & Metcalfe, 2009). As you might know from reading other posts on this website, we usually want learners to accurately judge their learning so they can make efficient decisions on how to allocate their study time or what information to focus on when studying. Any factor that can reduce that accuracy is likely to be problematic for ultimate test performance.
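As a concrete illustration of what such comparisons can look like (my own sketch with made-up values, not code from the research reviewed here), the snippet below computes two measures that are common in this literature: bias (mean judgment minus mean test performance, where positive values indicate overconfidence) and the Goodman-Kruskal gamma correlation (whether the items a learner judged as better learned were in fact better remembered).

```python
from itertools import combinations

# Hypothetical per-item data: judgments of learning (0-100 scale)
# and later test outcomes (1 = recalled, 0 = not recalled).
judgments = [80, 60, 90, 40, 70]
recalled = [1, 0, 1, 0, 0]

# Absolute accuracy ("bias"): mean judgment minus mean test performance,
# both expressed on a 0-100 scale. Positive values mean overconfidence.
bias = sum(judgments) / len(judgments) - 100 * sum(recalled) / len(recalled)

# Relative accuracy: Goodman-Kruskal gamma across all item pairs.
# A pair is concordant if the item judged better was also remembered better.
concordant = discordant = 0
for (j1, r1), (j2, r2) in combinations(zip(judgments, recalled), 2):
    product = (j1 - j2) * (r1 - r2)
    if product > 0:
        concordant += 1
    elif product < 0:
        discordant += 1

gamma = (concordant - discordant) / (concordant + discordant)
print(f"bias = {bias:+.1f}, gamma = {gamma:+.2f}")  # bias = +28.0, gamma = +1.00
```

In this toy example the learner is overconfident overall (positive bias) but still perfectly discriminates learned from unlearned items (gamma of +1.00); fluency manipulations are a concern precisely because they can distort judgments on both fronts.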

Metacognition researchers have examined how fluency affects participants’ judgments of their learning in the laboratory. The figure in this post includes several examples of ways in which researchers have manipulated the visual perceptual fluency of learning materials (i.e., memory materials or reading materials) to be perceptually disfluent compared to a fluent condition. These manipulations include presenting learning materials in an easy-to-read versus difficult-to-read typeface, either by literally blurring the font (Yue et al., 2013) or by adjusting the colors of the words and background to make them easy versus difficult to read (Werth & Strack, 2003); presenting text upside down versus right-side up (Sungkhasettee et al., 2011); and using normal capitalization versus capitalizing every other letter (Mueller et al., 2013). (A conceptually similar manipulation of auditory perceptual fluency might involve making the volume high versus low, or the audio quality clear versus garbled.)

A wealth of empirical (mostly laboratory-based) research demonstrates that learners typically judge perceptually-fluent learning materials to be better learned than perceptually-disfluent learning materials, even when learning (i.e., later test performance) is the same for the two sets of materials (e.g., Magreehan et al., 2015; Mueller et al., 2013; Rhodes & Castel, 2008; Susser et al., 2013; Yue et al., 2013). Although there is a current theoretical debate as to why processing fluency affects learners’ metacognitive judgments of their learning (i.e., do the effects stem from the experience of fluency or from explicit beliefs about fluency? see Magreehan et al., 2015; Mueller et al., 2013), it is nevertheless clear that manipulations such as those in the figure can affect how much students think they know. In terms of metacognitive accuracy, learners are often misled by feelings of fluency or disfluency that are neither related to their level of learning nor predictive of their future test performance.

As I previously noted, laboratory-based researchers argue that the misuse of such experiences as the basis for learners’ self-evaluations can produce metacognitive illusions and lead to inefficient study. This question, however, has yet to receive much empirical scrutiny in more realistic learning situations. I explore the possibility that such effects will also obtain in realistic learning situations in the second part of this post.

References

Dunlosky, J., & Metcalfe, J. (2009). Metacognition. Thousand Oaks, CA: Sage Publications.

Magreehan, D. A., Serra, M. J., Schwartz, N. H., & Narciss, S. (2015, advance online publication). Further boundary conditions for the effects of perceptual disfluency on judgments of learning. Metacognition and Learning.

Mueller, M. L., Tauber, S. K., & Dunlosky, J. (2013). Contributions of beliefs and processing fluency to the effect of relatedness on judgments of learning. Psychonomic Bulletin & Review, 20, 378-384.

Rhodes, M. G., & Castel, A. D. (2008). Memory predictions are influenced by perceptual information: evidence for metacognitive illusions. Journal of Experimental Psychology: General, 137, 615-625.

Sungkhasettee, V. W., Friedman, M. C., & Castel, A. D. (2011). Memory and metamemory for inverted words: Illusions of competency and desirable difficulties. Psychonomic Bulletin & Review, 18, 973-978.

Susser, J. A., Mulligan, N. W., & Besken, M. (2013). The effects of list composition and perceptual fluency on judgments of learning (JOLs). Memory & Cognition, 41, 1000-1011.

Werth, L., & Strack, F. (2003). An inferential approach to the knew-it-all-along phenomenon. Memory, 11, 411-419.

Yue, C. L., Castel, A. D., & Bjork, R. A. (2013). When disfluency is—and is not—a desirable difficulty: The influence of typeface clarity on metacognitive judgments and memory. Memory & Cognition, 41, 229-241.


Unskilled and Unaware: A Metacognitive Bias

by John R. Schumacher, Eevin Akers, & Roman Taraban (all from Texas Tech University).

In 1995, McArthur Wheeler robbed two Pittsburgh banks in broad daylight, with no attempt to disguise himself. When he was arrested that night, he objected, “But I wore the juice.” Because lemon juice can be used as an invisible ink, Wheeler thought that rubbing his face with lemon juice would make it invisible to surveillance cameras in the banks. Kruger and Dunning (1999) used Wheeler’s story to exemplify a metacognitive bias through which relatively unskilled individuals overestimate their skill, being both unaware of their ineptitude and holding an inflated sense of their knowledge or ability. This is called the Dunning-Kruger effect, and it also seems to apply to some academic settings. For example, Kruger and Dunning found that some students are able to accurately predict their performance prior to taking a test: these students predict that they will do well on the test and actually perform well on it. Other students predict that they will do well on a test but perform poorly; they have an inflated sense of how well they will do, and thus they fit the Dunning-Kruger effect. Because these students’ predictions do not match their performance, we describe them as poorly calibrated. Good calibration involves metacognitive awareness. This post explores how note taking relates to calibration and metacognitive awareness.

Some of the experiments in our lab concern the benefits of note taking. In these experiments, students were presented with a video-recorded college lecture. Note takers recalled more than non-notetakers, who simply watched the video (Jennings & Taraban, 2014). The question we explored was whether good note-taking skills improved students’ calibration of how much they know and thereby reduced the unskilled-and-unaware effect reported by Kruger and Dunning (1999).

In one experiment, participants watched a 30-minute video lecture while either taking notes (notetakers) or simply viewing the video (non-notetakers). They returned 24 hours later. They predicted the percentage of information they believed they would recall, using a scale of 0 to 100, and then took a free-recall test, without being given an opportunity to study their notes or mentally review the prior day’s video lecture. They then studied their notes (notetakers) or mentally reviewed the lecture (non-notetakers) for 12 minutes, and took a second free-recall test.

In order to assess the Dunning-Kruger effect, we subtracted the actual percentage of lecture material recalled on each test (0 to 100) from participants’ predictions of how much they would recall on that test (0 to 100). For example, if a participant predicted he or she would correctly recall 75% of the material on a test and actually recalled 50%, the calibration score would be +25 (75 – 50 = 25). Values close to +100 indicated extreme overconfidence, values close to -100 indicated extreme underconfidence, and values close to 0 indicated good calibration. To answer our question about how note taking relates to calibration, we compared the calibration scores for the two groups (notetakers and non-notetakers) in two situations: before reviewing notes or reflecting, and after reviewing notes or reflecting. These analyses indicated that the two groups did not differ in calibration on the first free-recall test. However, to our surprise, notetakers became significantly more overconfident, and thus less calibrated in their predictions, than non-notetakers on the second test. After studying, notetakers’ calibration became worse.
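To make the scoring concrete, here is a minimal sketch of the calibration score as just described (my own illustration with made-up values, not the authors’ code):

```python
def calibration_score(predicted_pct: float, recalled_pct: float) -> float:
    """Prediction minus actual recall, both on a 0-100 scale.

    Values near +100 indicate extreme overconfidence, values near -100
    indicate extreme underconfidence, and values near 0 indicate good
    calibration.
    """
    return predicted_pct - recalled_pct

print(calibration_score(75, 50))  # +25: the overconfident example from the text
print(calibration_score(40, 60))  # -20: an underconfident participant
```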

Note taking increases test performance. So why doesn’t note taking improve calibration? Since notetakers are more “skilled” – that is, they have encoded and stored more information from the lecture – shouldn’t they be more “aware,” that is, better calibrated, as the Dunning-Kruger effect would imply? One possible explanation is that studying notes immediately before making a prediction increases the amount of information active in working memory. The information that participants will shortly be asked to recall is highly active and available. This sense of availability produces the inflated (and false) prediction that much information will be remembered on the test. Is this overconfidence harmful to the learner? It could be, to the extent that individuals often self-generate predictions of how well they will do on a test in order to self-regulate their study behaviors. Poor calibration of these predictions could lead individuals to fail to recognize that they require additional study time before all material is properly stored and able to be recalled.

If note taking itself is not the problem, then is there some way students can improve their calibration after studying in order to better regulate subsequent study efforts? The answer is “yes.” Research has shown that predictions of future performance improve if there is a short delay between studying information and predicting subsequent test performance (Thiede, Dunlosky, Griffin, & Wiley, 2005). To improve calibration, then, students should be encouraged to wait after studying their notes before judging whether they need additional study time. To improve metacognitive awareness with respect to calibration, students need to understand that immediate judgments of how much they know may be inflated. They need to be aware that waiting a short time before judging whether they need more study will result in more effective self-regulation of study time.

References
Jennings, E., & Taraban, R. (2014, May). Note-taking in the modern college classroom: Computer, paper and pencil, or listening? Paper presented at the Midwestern Psychological Association (MPA), Chicago, IL.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

Thiede, K. W., Dunlosky, J., Griffin, T. D., & Wiley, J. (2005). Understanding the delayed-keyword effect on metacomprehension accuracy. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(6), 1267-1280.