Meaningful Reflections for Improving Student Learning

by Ashley Welsh, Postdoctoral Teaching & Learning Fellow, Vantage College

I am a course coordinator and instructor for a science communication course (SCIE 113) for first-year science students. SCIE 113 focuses on writing, argumentation, and communication in science and is part of the curriculum for an enriched first-year experience program for international English Language Learners. Throughout the term, students provide feedback on their peers’ writing in both face-to-face and online environments. Providing and receiving feedback is an important skill for students; however, many students do not receive explicit instruction on how to provide or use constructive feedback (Mulder, Pearce, & Baik, 2014). To better understand my students’ experience with peer review, I conducted a research project to explore how their use and perceptions of peer review in their writing developed over the course of the term.

Many of the data collection methods I used to assess students’ perceptions and use of peer review in SCIE 113 this past term incorporated acts of reflection. These included in-class peer review worksheets and written reflections, small and large group discussions, an end-of-term survey about peer review, and my own researcher reflections. Periodically throughout the semester, I paired students up to engage in peer review of one another’s writing. Each student had a worksheet that asked them to comment on what their partner did well and how that person could improve their writing. During this activity, my teaching assistant and I circulated among the pairs and answered questions as they arose. Afterwards, students independently completed written reflections about the usefulness of the peer review activity and their concerns about giving and receiving feedback. Before the class finished, we discussed students’ responses and concerns as a whole group. Students’ worksheets and written reflections, as well as classroom observations, offered insight into how my pedagogy mapped to their use of and reflections about peer review.

Lately, I have been more deliberate about designing pedagogy and activities that offer students the time and space to reflect on and record their strengths and weaknesses as learners. The term reflection is often used when discussing metacognition. As Weinert (1987) describes, metacognition involves second-order cognitions: thoughts about thoughts, or reflections on one’s own actions. With respect to metacognitive regulation, Zohar and Barzilai (2013) highlight that individuals can heighten awareness of their strengths and weaknesses and evaluate their progress via reflection. This reflection process also plays a key role in metacognition-focused data collection, as most methods require students to reflect upon how their knowledge and skills influence their learning. Providing survey responses, answering interview questions, and writing in a journal all require a student to appraise their personal development and experience as a learner through reflection (Akturk & Sahin, 2011; Kalman, 2007).

While the act of reflection is an important component of metacognition and metacognitive research, its use in the classroom presents its own set of challenges. As educators and researchers, we must be careful not to overuse the term so that it remains meaningful to students. We must also be cautious about how often we ask students to reflect. An extensive case study by Baird and Mitchell (1987) revealed that students become fatigued if they are asked to reflect upon their learning experiences too often. Furthermore, we hope these acts of reflection will help students to meaningfully evaluate their learning, but there is no guarantee that students will move beyond simplistic or surface responses. To address these challenges in my own classroom, I attempted to design activities and assessments that favoured “not only student participation and autonomy, but also their taking responsibility for their own learning” (Planas Lladó et al., 2014, p. 593).

While I am still in the midst of analyzing my data, I noticed that over the course of the semester students became increasingly willing to complete the reflections about peer review and their writing. At the beginning of the term, students wrote rather simplistic and short responses, but by the end of the term, their responses contained more depth and clarity. I was surprised that students were not fatigued by or reluctant to complete the weekly reflections and discussions about peer review and that this process became part of the norm of the classroom. Students also became faster at completing their written responses, which was promising given that they were all English Language Learners. As John Draeger suggested (personal communication, April 27, 2016), students’ practice with these activities appears to have helped them build the stamina and muscles required for successful and meaningful outcomes. It was rewarding to observe that, within class discussions and their reflections, students became more aware of their strengths and weaknesses as reviewers and writers (self-monitoring) and often talked or wrote about how they could improve their skills (self-regulation).

Based on my preliminary analysis, it seems that tying the reflection questions explicitly to the peer review process allowed for increasingly meaningful and metacognitive student responses. Embedding this research project within my class pushed me to carefully consider and question how my pedagogy was linked to students’ perceptions and their ability to reflect upon their learning experience. I am also curious about how I can help students realize that this process of reflection can improve their skills not only in my course, but also in their education (and dare I say life). This research project has served as an impetus for me to continue exploring how I can better support students in becoming more metacognitive about their learning in higher education.

References

Akturk, A. O., & Sahin, I. (2011). Literature review on metacognition and its measurement. Procedia Social and Behavioral Sciences, 15, 3731-3736.

Baird, J. R., & Mitchell, I. J. (1987). Improving the quality of teaching and learning. Melbourne, Victoria: Monash University Press.

Kalman, C. S. (2007). Successful science and engineering teaching in colleges and universities. Bolton, Massachusetts: Anker Publishing Company, Inc.

Mulder, R.A., Pearce, J.M., & Baik, C. (2014). Peer review in higher education: Student perceptions before and after participation. Active Learning in Higher Education, 15(2), 157-171.

Planas Lladó, A., Feliu Soley, L., Fraguell Sansbelló, R.M., Arbat Pujolras, G., Pujol Planella, J., Roura-Pascual, N., Suñol Martínez, J.J., & Montoro Moreno, L. (2014). Student perceptions of peer assessment: An interdisciplinary study. Assessment & Evaluation in Higher Education, 39(5), 592-610.

Weinert, F. E. (1987). Introduction and overview: Metacognition and motivation as determinants of effective learning and understanding. In F. E. Weinert & R. H. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 1-16). Hillsdale, New Jersey: Lawrence Erlbaum Associates, Inc.

Zohar, A., & Barzilai, S. (2013). A review of research on metacognition in science education: Current and future directions. Studies in Science Education, 49(2), 121-169.


Learning to Write and Writing to Learn: The Intersection of Rhetoric and Metacognition

by Amy Ratto Parks, Ph.D., University of Montana

If I had to name the frustration most commonly expressed by students about writing, it would be this: the rules are always changing. They say, “every teacher wants something different,” and because of that belief, many of them approach writing with feelings ranging from nervous anxiety to sheer dread. It is true that any single teacher will have his or her own specific expectations and biases, but most often, what students perceive as a “rule change” has to do with different disciplinary expectations. I argue that metacognition can help students anticipate and negotiate these shifting disciplinary expectations in writing courses.

Let’s look at an example. As we approach the end of spring semester, a single student on your campus might hold in her hand three assignments for final writing projects in three different classes: literature, psychology, and geology. All three assignments might require research, synthesis of ideas, and analysis – and all might be 6-8 pages in length. If you put yourself in the student’s place for a moment, it is easy to see how she might think, “Great! I can write the same kind of paper on three different topics.” That doesn’t sound terribly unreasonable. However, each of the teachers in these classes will actually expect some very different things in these papers: the acceptable sources of research, the citation style and formatting, the use of the first person or the passive voice (“I conducted research” versus “research was conducted”), and the kinds of analysis all differ across these three fields. Indeed, if we compared three papers from these disciplines, we would see and hear writing that appeared to have almost nothing in common.

So what is a student to do? Or, how can we help students anticipate and navigate these differences? The fields of writing studies and metacognition have some answers for us. Although the two disciplines are not commonly brought together, a close examination of the overlap in their most basic concepts can offer teachers (and students) some very useful ways to understand the disciplinary differences between writing assignments.

Rhetorical constructs sit at the intersection of the fields of writing studies and metacognition because they offer the clearest illustration of the overlap between the way metacognitive theorists and writing researchers conceptualize potential learning situations. Both fields begin with the basic understanding that learners need to be able to respond to novel learning situations, and both fields have created terminology to abstractly describe the characteristics of those situations. Metacognitive theorists describe those learning situations as “problem-solving” situations; they say that in order for a student to negotiate the situation well, she needs to understand the relationship between herself, the task, and the strategies available for the task. The three kinds of problem-solving knowledge – self, task, and strategy knowledge – form an interdependent, triangular relationship (Flavell, 1979). All three elements are present in any problem-solving situation, and a change to one of the three requires an adjustment of the other two (for example, if the task is an assignment given to the whole class, the task remains the same for everyone; however, since each student is different, each student will need to figure out which strategies will help him or her best accomplish the task).

[Figure: Metacognitive Triangle]

The field of writing studies describes these novel learning situations as “rhetorical situations.” Similarly, the basic framework for the rhetorical situation is comprised of three elements – the writer, the subject, and the audience – that form an interdependent triangular relationship (Rapp, 2010). Writers then make strategic persuasive choices based upon their understanding of the rhetorical situation.
[Figure: Rhetorical vs Persuasive]

 In order for a writer to negotiate his rhetorical situation, he must understand his own relationship to his subject and to his audience, but he also must understand the audience’s relationship to him and to the subject. Once a student understands these relationships, or understands his rhetorical situation, he can then conscientiously choose his persuasive strategies; in the best-case scenario, a student’s writing choices and persuasive strategies are based on an accurate assessment of the rhetorical situation. In writing classrooms, a student’s understanding of the rhetorical situation of his writing assignment is one pivotal factor that allows him to make appropriate writing choices.

Theorists in metacognition and writing studies both know that students must be able to understand the elements of their particular situation before choosing strategies for negotiating it. Writing studies theorists call this understanding the rhetorical situation, while metacognitive theorists call it task knowledge, and this is where the two fields come together: the rhetorical situation of a writing assignment is a particular kind of problem-solving task.

When the basic concepts of rhetoric and metacognition are brought together it is clear that the rhetorical triangle fits inside the metacognitive triangle and creates the meta-rhetorical triangle.

[Figure: Meta-Rhetorical Triangle]

The meta-rhetorical triangle offers a concrete illustration of the relationship between the basic theoretical frameworks in metacognition and rhetoric. The subject is aligned with the task because the subject of the writing aligns with the guiding task, and the writer is aligned with the self because writerly identity is one facet of a larger sense of self or self-knowledge. Audience, however, does not align with strategy, because audience is the other element a writer must understand before choosing a strategy; it therefore sits in the center of the triangle rather than at the right side. In the strategy corner, the meta-rhetorical triangle includes the three Aristotelian strategies for persuasion: logos, ethos, and pathos (Rapp, 2010). When the conceptual frameworks for rhetoric and metacognition are viewed as nested triangles in this way, it is possible to see that the rhetorical situation offers specifics about how metacognitive knowledge supports a particular kind of problem-solving in the writing classroom.

So let’s come back to our student who is looking at her three assignments for 6-8 page papers that require research, synthesis of ideas, and analysis. Her confusion comes from the fact that although each requires a different subject, the three tasks appear to be the same. However, the audience for each is different, and although she, as the writer, is the same person, her relationship to each of the three subjects will be different, and she will bring different interests, abilities, and challenges to each situation. Finally, each assignment will require different strategies for success. For each assignment, she will have to figure out whether or not personal opinion is appropriate, whether or not she needs recent research, and – maybe the most difficult for students – she will have to use three entirely different styles of formatting and citation (MLA, APA, and GSA). Should she add a cover page? Page numbers? An abstract? Is it OK to use footnotes?

These are big hurdles for students to clear when writing in various disciplines. Unfortunately, most of us as faculty are so immersed in our own fields that we come to see these writing choices as obvious and “simple.” Understanding the way metacognitive concepts relate to rhetorical situations can help students generalize their metacognitive knowledge beyond individual, specific writing situations, potentially reducing confusion and improving their ability to ask the pointed questions that will help them choose appropriate writing strategies. As teachers, we can use the meta-rhetorical triangle to offer the kinds of assignment details students really need in order to succeed in our classes. It can also help us remember the kinds of challenges students face so that we can respond to their missteps not with irritation, but with compassion and patience.

References

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911.

Rapp, C. (2010). Aristotle’s Rhetoric. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy. Retrieved from http://plato.stanford.edu/archives/spr2010/entries/aristotle-rhetoric/


Forging connections with students through metacognitive assignments

by Diane K. Angell, St. Olaf College

We have all likely shared the experience, early in our teaching career, of a gaggle of students gathering at our office door after an exam. “I studied for so many hours!” “I came to class everyday.” “I always did well in high school.” Students also seemed to struggle ahead of exams as they tried to learn and master scientific material. “What should I study?” “Why can’t you just give us a study guide?” I was often perplexed by these frustrations. I wondered and tried to recall how I had learned material and strategized as a science student preparing for the inevitable exams in larger introductory college courses.

That same month, I found myself at a conference, the Associated Colleges of the Midwest’s Teagle Collegium on Student Learning. The focus was very much on metacognition. Although, as a biologist, I struggled to understand the details of several presentations, it all sounded very familiar. Perhaps this was what my students were missing? I appreciated the intellectual points and took copious notes, until my mind began to wander. I needed to prepare to host a large group for Thanksgiving in the coming days. How should I start? What did I need to purchase, and where would I get it? What needed to be prepared and cooked when, so that all the different dishes were ready and warm when it was actually time to sit down and eat? I began to get anxious. I quickly realized two things. First, focusing back on my students, I immediately appreciated the degree to which preparing a Thanksgiving meal and preparing to take an exam are both complex metacognitive tasks. I could finally imagine what my students were feeling and understand the metacognitive challenges exams present to them. Students need to evaluate what they know, what they don’t know, and how best to approach any material they are uncertain of. And unlike cooking and meal preparation, there is no clear, simple set of directions highlighting how to approach the task of taking a typical college classroom exam. Second, my own pre-Thanksgiving mental preparation check made me realize that I have likely been using such metacognitive skills since I was a student, but was just not aware I was using them. Perhaps I did have some wisdom to share, and upon returning to campus I committed to using a metacognitive approach to help students prepare for exams.

Introductory college biology courses are an excellent place to begin engaging students with a metacognitive approach to exam preparation. These classes will probably always have exams. Moreover, as students move on in biology, they are likely to face even more challenging exams. To engage students in metacognitive practices, I came up with a series of straightforward metacognitive prompts that I emailed to students before each exam. They included simple questions such as: How do you think you will start studying? What techniques will you use while studying? What was the most difficult topic in this section of the course, and why was it difficult? How will you approach the material you do not yet understand?

I found their responses fascinating. Some clearly wrote as little as possible, but most wrote quite extensively, sharing with me precise details of how they had studied (or not studied) to prepare for the exam. Many responses were surprisingly sincere and confessional. The assignments brought home to me two points that have left a lasting impression. First, I was reminded of the importance of establishing a connection with students, as well as the importance of that connection to student learning. Their emailed responses helped me get to know them in a way that was very different from the public arena of class or lab. They let me in on their personal narrative of test preparation. I sometimes felt as if I were reading a secret diary. They were honest with me in their emails about what their studying experiences had been, perhaps even more so than if they had come to see me in person. Perhaps the proliferation of email, texting, and Facebook has made students more comfortable conversing with faculty through a keyboard than face to face. After responding to the emailed questions, many did eventually come in to chat and engage with me about study strategies and differences they were noticing between high school and college. They seemed to think they knew me better and that I knew them better. Upon arriving in my office, they would frequently refer back to their emailed responses, even though I sometimes struggled to remember exactly who had emailed me what details. The emails seemed to prompt a unique relationship, and students saw me as someone who was interested in them as individuals, an attitude that likely helped them feel part of the learning community in the classroom.

I also came to understand that the task of mastering material in order to prepare for an exam has become more complicated. In the past, we had a textbook and we had notes from class. That was it. Today this task really is fraught with complex decisions. Students in college classrooms are less likely to be taking notes in a traditional lecture format. They are more likely to be engaged during class in small group discussions and problem-based learning activities. They have access to, and are justly encouraged to use, the online resources that come with texts, and to take advantage of other online resources. They are also frequently encouraged to form study groups to discuss their understanding of topics outside of class. These are great ways for students to engage with material and prepare for exams. This diverse learning landscape can be a lifesaver for some students, but for others, when it comes time to prepare for an exam, the variety of options for studying can be overwhelming and paralyzing. As we have opened up new ways of teaching and learning, we may have left students with many different resources at their fingertips but failed to help them think metacognitively about what works for them as they master knowledge to prepare for a summative exam.

Both the stronger connections I made with my students and my better understanding of the diverse exam preparation choices they must make have helped me feel better prepared to mentor and advise students as they navigate their introductory biology course. By engaging students metacognitively in emails concerning their exam preparation, I gained a deeper understanding of how students were learning in my class. Their sincere and thoughtful responses provided a window on their world and, in interesting ways, their metacognitive thoughts rounded out my efforts to metacognitively assess my course. As faculty, we are often reminded to step back and reflect on our goals for our class and for student learning. We need to consider what is working in our course and what is not. It finally became clear to me that a full metacognitive consideration of my course required regular reflective feedback from my students and an understanding of what they were struggling with. Although I had always solicited such feedback, students seemed much more likely to be thinking about their learning, and willing to share their assessment of that learning, in an email just before an exam. Ultimately, I now see that their honest metacognitive feedback has meant that I have gained as much as or more than the students I was initially trying to help.


Metacognitive Judgments of Knowing

by Roman Taraban, Ph.D., Dmitrii Paniukov, John Schumacher, and Michelle Kiser, Texas Tech University

“The more you know, the more you know you don’t know.” Aristotle

Students often make judgments of learning (JOLs) when studying. Essentially, they make a judgment about future performance (e.g., on a test) based on a self-assessment of their knowledge of the studied items. JOLs are therefore considered metacognitive judgments: judgments about what a person knows, often related to some future purpose. Students’ accuracy in making these metacognitive judgments is academically important. If students make accurate JOLs, they will devote just the right amount of time to mastering academic materials. If students do not devote enough time to study, they will underperform on course assessments. If students spend more time than necessary, they are being inefficient.

As instructors, we would find it helpful to know how accurate students are in making these judgments. There are several ways to measure the accuracy of JOLs. Here we will focus on one of these measures, termed calibration. Calibration is the difference between a student’s JOL related to some future assessment and their actual performance on that assessment. In the study we describe here, college students made JOLs (“On a scale of 0 to 100, what percent of the material do you think you can recall?”) after they read a brief expository text. Actual recall was measured in idea units (IUs) (Roediger & Karpicke, 2006); idea units are the chunks of meaningful information in the text. Calibration is here defined as JOL – recalled IUs, or simply, predicted recall minus actual recall. If the calibration calculation yields a positive number, you are overconfident to some degree; if the result is negative, you are underconfident to some degree. If the result is zero, you are perfectly calibrated in your judgment.
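To make the calibration measure concrete, here is a minimal sketch in Python of how a single participant’s calibration score could be computed from a JOL and a recall protocol scored in idea units. The function name and the sample numbers are invented for illustration; this is not the study’s data or analysis code.

    def calibration(jol_percent, recalled_ius, total_ius):
        # Calibration = predicted recall minus actual recall, both as proportions.
        # Positive = overconfident, negative = underconfident, zero = perfectly calibrated.
        predicted = jol_percent / 100.0       # the JOL was given on a 0-100 scale
        actual = recalled_ius / total_ius     # proportion of idea units recalled
        return predicted - actual

    # Hypothetical example: a student predicts 70% recall but recalls 15 of the
    # 30 idea units (50%), so calibration = +0.20 (overconfident by 20 points).
    print(round(calibration(jol_percent=70, recalled_ius=15, total_ius=30), 2))  # 0.2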

The suggestion from Aristotle (see the quote above) is that gains in how much we know lead us to underestimate how much we know; that is, we will be underconfident. Conversely, when we know little, we may overestimate how much we know; that is, we will be overconfident. Studies using JOLs have found that children are overconfident (predicted recall minus actual recall is positive) (Lipko, Dunlosky, & Merriman, 2009; Was, 2015). Children think they know more than they know, even after several learning trials with the material. Studies with adults have found an underconfidence-with-practice (UWP) effect (Koriat et al., 2002): the more individuals learn, the more they underestimate their knowledge. The UWP effect is consistent with Aristotle’s suggestion. The question we ask here is, which is it: if you lack knowledge, do your metacognitive judgments reflect overconfidence or underconfidence, and vice versa? Practically, as instructors, if students are poorly calibrated, what can we do to improve their calibration, that is, to recalibrate this metacognitive judgment?

We addressed this question with two groups of undergraduate students. Forty-three developmental-reading participants were recruited from developmental integrated reading and writing courses offered by the university, including Basic Literacy (n = 3), Developmental Literacy II (n = 29), and Developmental Literacy for Second Language Learners (n = 11). Fifty-two non-developmental participants were recruited from the Psychology Department subject pool. The non-developmental and developmental readers were comparable in mean age (18.3 and 19.8 years, respectively) and number of completed college credits (11.8 and 16.7, respectively), and each sample represented roughly fifteen academic majors. All participants received course credit. The students were asked to read one of two expository passages and then to recall as much as they could immediately. The two texts used for the study were each about 250 words in length and had an average Flesch-Kincaid grade level of 8.2. Each passage contained 30 idea units.

To answer our question, we first calculated calibration (predicted recall – actual recall) for each participant. Then we divided the total sample of 95 participants into quartiles based on the number of idea units each participant recalled. The mean proportion of correctly recalled idea units out of 30 possible (with the standard deviation in parentheses) in each quartile for the total sample was as follows:

Q1: .13 (.07); Q2: .33 (.05); Q3: .51 (.06); Q4: .73 (.09).

Using quartile as the independent variable and calibration as the dependent variable, we found that participants were overconfident (predicted recall > actual recall) in all four quartiles. However, there was also a significant decline in overconfidence from Quartile 1 to Quartile 4: Q1: .51; Q2: .39; Q3: .29; Q4: .08. Very clearly, the participants in the highest quartile were nearly perfectly calibrated; they were over-predicting their actual performance by only about 8%, compared to the lowest quartile, who were over-predicting by about 51%. This monotonic trend of decreasing overconfidence and improving calibration also held when we analyzed the two samples separately:

NON-DEVELOPMENTAL: Q1: .46; Q2: .39; Q3: .16; Q4: .10;

DEVELOPMENTAL: Q1: .57; Q2: .43; Q3: .39; Q4: .13.
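For readers who want to see the shape of this analysis, the sketch below shows one way a quartile breakdown of calibration could be computed from per-participant predicted and actual recall proportions. The tiny data set is a fabricated placeholder rather than the study’s data, and the authors’ actual analysis procedure may have differed.

    from statistics import mean

    # Placeholder data for illustration only (not the study's data):
    # each tuple is (predicted_recall_proportion, actual_recall_proportion).
    participants = [
        (0.70, 0.10), (0.60, 0.20), (0.55, 0.35), (0.65, 0.45),
        (0.75, 0.55), (0.80, 0.60), (0.85, 0.75), (0.80, 0.80),
    ]

    # Rank participants by actual recall, split them into quartiles, and report
    # mean calibration (predicted minus actual) within each quartile.
    ranked = sorted(participants, key=lambda p: p[1])
    quartile_size = len(ranked) // 4
    for q in range(4):
        group = ranked[q * quartile_size:(q + 1) * quartile_size]
        mean_calibration = mean(pred - actual for pred, actual in group)
        print(f"Q{q + 1}: mean calibration = {mean_calibration:+.2f}")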

The findings here suggest that Aristotle may have been wrong when he stated that “the more you know, the more you know you don’t know.” Our findings suggest instead that the more you know, the more you know you know; that is, calibration gets better the more you know. What is striking here is the vulnerability of weaker learners to overconfidence. It is the learners who have not encoded much information from reading who have an inflated notion of how much they can recall. This is not unlike the children in the Lipko et al. (2009) research mentioned earlier. It is also clear in our analyses that typical college students, as well as developmental college students, are susceptible to overestimating how much they know.

It is not clear from this study what variables underlie low recall performance. Low background knowledge, limited vocabulary, and difficulty with syntax could all contribute to poor encoding of the information in the text and low subsequent recall. Nonetheless, our data do indicate that care should be taken in assisting students who fall into the lower performance quartiles to make better calibrated metacognitive judgments. One way to do this might be to ask students to explicitly make judgments about future performance and then encourage them to reflect on the accuracy of those judgments after they complete the target task (e.g., a class test). Koriat et al. (1980) asked participants to give reasons for and against choosing responses to questions before the participants predicted the probability that they had chosen the correct answer. Prompting students to consider the amount and strength of the evidence for their responses reduced overconfidence. Metacognitive exercises like these may lead to better calibration.

References

Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6(2), 107-118.

Koriat, A., Sheffer, L., & Ma’ayan, H. (2002). Comparing objective and subjective learning curves: Judgments of learning exhibit increased underconfidence with practice. Journal of Experimental Psychology: General, 131, 147–162.

Lipko, A. R., Dunlosky, J., & Merriman, W. E. (2009). Persistent overconfidence despite practice: The role of task experience in preschoolers’ recall predictions. Journal of Experimental Child Psychology, 102(2), 152-166.

Roediger, H., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255.

Was, C. (2015). Some developmental trends in metacognition. Retrieved from https://www.improvewithmetacognition.com/some-developmental-trends-in-metacognition/

 


Pausing Mid-Stride: Mining Metacognitive Interruptions In the Classroom

by Amy Ratto Parks, Ph.D., University of Montana

Metacognitive interventions are often the subject of research in educational psychology because researchers are curious about how these planned, curricular changes might impact the development of metacognitive skills over time. However, as a researcher in the fields of metacognition and rhetoric and composition, I am sometimes struck by the fact that the planned nature of empirical research makes it difficult for us to take advantage of important kairic moments in learning.

The rhetorical term kairic, taken from the Greek concept of kairos, generally represents a fortuitous window in time in which to take action toward a purpose. In terms of learning, kairic moments are those perfect little slivers in which we might suddenly gain insight into our own or our students’ learning. In the classroom, I like to think of these kairic moments as metacognitive interruptions rather than interventions because they aren’t planned ahead of time. Instead, the “interruptions” arise out of the authentic context of learning. Metacognitive interruptions are kairic moments in which we, as teachers, might be able to briefly access a point in which the student’s metacognitive strategies have either served or not served them well.

A few days ago I experienced a very typical teaching moment that turned out to be an excellent example of a fruitful metacognitive interruption: I asked the students to take out their homework and the moment I began asking discussion questions rooted in the assignment, I sensed that something was off. I saw them looking at each other’s papers and whispering across the tables, so I asked what was going on. One brave student said, “I think a bunch of us did the homework wrong.”

They were supposed to have completed a short analysis of a peer-reviewed article titled, “The Daily Show Effect: Candidate Evaluations, Efficacy, and American Youth” (Baumgartner & Morris, 2014). I got out the assignment sheet and asked the brave student, Rasa*, to read it aloud. She said, “For Tuesday, September 15. Read The Daily Show Effect: Candidate Evaluations…. oh wait. I see what happened. I read the other Jon Stewart piece in the book.” Another student jumped in and said, “I just analyzed the whole show” and a third said, “I analyzed Jon Stewart.”

In that moment, I experienced two conflicting internal reactions. The teacher in me was annoyed. How could this simple set of directions have caused confusion? And how far was this confusion going to set us back? If only half of the class had done the work, the rest of my class plan was unlikely to go well. However, the researcher in me was fascinated. How, indeed, had this simple set of instructions caused confusion? All of these students had completed a homework assignment, so they weren’t just trying to “get out of work.” Plus, they also seemed earnestly unsure about what had gone wrong.

The researcher in me won out. I decided to let the class plan go and began to dig into the situation. By a show of hands, I saw that 12 of the 22 students had done the correct assignment and 10 had completed some customized, new version of the homework. I asked them all to pause for a moment and engage in a metacognitive activity: they were to think back to the moment they read the assignment and ask themselves, where did I get mixed up?

Rasa said that she just remembered me saying something about The Daily Show in class, and when she looked in the table of contents, she saw a different article, “Political Satire and Postmodern Irony in the Age of Stephen Colbert and Jon Stewart” (Colletta, 2014), and read it instead. Other students said that they must not have read closely enough, but then another student said something interesting. She said, “I did read the correct essay, but it sounded like it was going to be too hard to analyze, and I figured that you hadn’t meant for this to be so hard, so I just analyzed the show.” Other students nodded in agreement. I asked the group to raise their hands if they had read the correct essay. Many hands went up. Then I asked if they thought that the analysis they chose to do was easier than the one I assigned. All of them raised their hands.

Again, I was fascinated. In this very short conversation I had just watched rich, theoretical research play out before me. First, here was an example of the direct effect of power browsing (Kandra, Harden, & Babbra, 2012) mistakenly employed in the academic classroom. Power browsing is a relatively recently coined term that describes “skimming and scanning through text, looking for key words, and jumping from source to source” (Kandra et al., 2012). Power browsing can be a powerful overviewing strategy (Afflerbach & Cho, 2010) in an online reading environment where a wide variety of stimuli compete for the reader’s attention. Research shows that strong readers of non-electronic texts also employ pre-reading or skimming strategies (Dunlosky & Metcalfe, 2009); however, when readers mistakenly power browse in academic settings, it may result in “missed opportunities or incomplete knowledge” (Kandra et al., 2012, par. 18). About metacognition and reading strategies, Afflerbach and Cho (2010) write, “the good strategy user is always aware of the context of reading” (p. 206); clearly, some of my students had forgotten their reading context. Some of the students knew immediately that they hadn’t thoroughly read the assignment. As soon as I described the term “power browse,” their faces lit up. “Yes!” said Rasa, “that’s exactly what I did!” Here was metacognition in action.

Second, as students described the reasoning behind choosing to read the assigned essay but analyze something unassigned, I heard them offering a practical example of Flower and Hayes’ (1981/2011) discussion of goal-setting in the writing process. Flower and Hayes (1981/2011) said that writing includes “not only the rhetorical situation and audience which prompts one to write, it also includes the writer’s own goals in writing” (p. 259). They went on to say that although some writers are able to “juggle all of these demands,” others “frequently reduce this large set of restraints to a radically simplified problem” (p. 259). Flower and Hayes allow that this can sometimes cause problems, but they emphasize that “people only solve the problems they set for themselves” (p. 259).

Although I had previously seen many instances of students “simplifying” larger writing assignments in my classroom, I had never before had a chance to talk with students about what had happened in the moment when they realized something hadn’t worked. But here, they had just openly explained to me that the assignment had seemed too difficult, so they had recalibrated, or “simplified” it into something they thought they could do well and/or accomplish during their given timeframe.

This metacognitive interruption provided an opportunity to “catch” students in the moment when their learning strategies had gone awry, but my alertness to the kairic moment only came as a result of my own metacognitive skills: when it became clear that the students had not completed the work correctly, I paused before reacting, and that pause allowed me to be alert to a possible metacognitive learning opportunity. When I began to reflect on this class period, I realized that my alertness stemmed from my belief in the importance of teachers being metacognitive professionals so that we can interject learning into the moment of processing.

There is yet one more reason to mine these metacognitive interruptions: they provide authentic opportunities to teach students about metacognition and learning. The scene I described here could have had a very different outcome. It can be easy to see student behavior in a negative light. When students misunderstand something we thought we’d made clear, we sometimes make judgments about them being “lazy” or “careless” or “belligerent.” In this scenario it seems like it would have been justifiable to have gotten frustrated and lectured the students about slowing down, paying attention to details, and doing their homework correctly.

Instead, I was able to model the kind of cognitive work I would actually want to teach them: we slowed down and studied the mistake in a way that led the class to a conversation about how our minds work when we learn. Rather than including a seemingly-unrelated lecture on “metacognition in learning” I had a chance to teach them in response to a real moment of misplaced metacognitive strategy. Our 15-minute metacognitive interruption did not turn out to be a “delay” in the class plan, but an opening into a kind of learning that might sometimes just have to happen when the moment presents itself.

References

Baumgartner, J., & Morris, J. (2014). The Daily Show effect: Candidate evaluations, efficacy, and American youth. In C. Cucinella (Ed.), Funny. Southlake, TX: Fountainhead Press. (Reprinted from American Politics Research, 34(3), (2006), pp. 341-367).

Colletta, L. (2014). Political satire and postmodern irony in the age of Stephen Colbert and Jon Stewart. In C. Cucinella (Ed.), Funny. Southlake, TX: Fountainhead Press. (Reprinted from The Journal of Popular Culture, 42(5), (2009), pp. 856-874).

Dunlosky, J., & Metcalfe, J. (2009). Metacognition. Thousand Oaks, CA: Sage.

Flower, L., & Hayes, J. (2011). A cognitive process theory of writing. In V. Villanueva & K. Arola (Eds.), Cross-talk in comp theory: A reader, (3rd ed.), (pp. 253-277). Urbana, IL: NCTE. (Reprinted from College Composition and Communication, 32(4), (Dec., 1981), pp. 365-387).

Kandra, K. L., Harden, M., & Babbra, A. (2012). Power browsing: Empirical evidence at the college level. National Social Science Journal, 2, article 4. Retrieved from http://www.nssa.us/tech_journal/volume_2-2/vol2-2_article4.htm

Waters, H. S., & Schneider, W., (Eds.). (2010). Metacognition, strategy use, and instruction. New York, NY: The Guilford Press.

* Names have been changed to protect the students’ privacy.


Exploring the potential impact of reciprocal peer tutoring on higher education students’ metacognitive knowledge and regulation

De Backer, Van Keer, and Valcke’s study “explores the potential of reciprocal peer tutoring to promote both university students’ metacognitive knowledge and their metacognitive regulation skills. The study was conducted in a naturalistic higher education setting, involving 67 students tutoring each other during a complete semester.”

De Backer, L., Van Keer, H., & Valcke, M. (2012). Exploring the potential impact of reciprocal peer tutoring on higher education students’ metacognitive knowledge and regulation. Instructional Science, 40(3), 559-588. http://link.springer.com/article/10.1007/s11251-011-9190-5



Student Motivation and Self-Regulated Learning in the College Classroom

This chapter discusses problems in students’ motivation to learn and how self-regulated learning can provide insight into questions such as: Why do students care more about their grades than about learning the disciplinary content of their courses? Why do students wait until the last minute to fulfill course obligations such as studying for an exam or writing a paper?

Pintrich, P. R., & Zusho, A. (2007). Student motivation and self-regulated learning in the college classroom. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 731-810).



Some Developmental Trends in Metacognition

by Chris Was, Ph.D., Kent State University

Recently, I have conducted some experiments with students in grades K-6 related to children’s ability to predict how many simple items they will recall. Although a simple measure, this form of calibration reflects a child’s knowledge of his or her own memory abilities. This is, at its most basic level, metacognition.

The work in which my collaborators and I are currently engaged builds on the work of Amanda Lipko and colleagues (e.g., Lipko, Dunlosky, & Merriman, 2009). What was most striking about Lipko’s work was the robust overconfidence displayed by preschool children. Granted, there is a large body of literature that demonstrates young children are overconfident in both their physical abilities (e.g., Plumert, 1995) as well as their cognitive abilities (e.g., Cunningham & Weaver, 1989; Flavell, Friedrichs, & Hoyt, 1970). Much of this work indicates that with preschool children this overconfidence is quite persistent. But Lipko et al.’s (2009) work found that even following repeated practice and feedback, specifically salient feedback when children recalled their own previous performance, this overconfidence remained.

There are several hypotheses, both tested and untested, as to why this overconfidence exists and why it is robust against correction. Perhaps it is wishful thinking (a hypothesis tested by Lipko et al.), perhaps it is a developmental issue, or perhaps it serves as a learning mechanism (children who give up the first time they fail may not learn to succeed at much). In any case, I became interested in the circumstances in which young children are capable of making accurate predictions of their cognitive abilities.

A review of the experimental methodology used by Lipko et al. is warranted. In their 2009 study, Lipko et al. presented young children (mean age of approximately 5 years 0 months) with pictures of common items. As children were presented with the pictures, they were asked to name them. If correctly named, the picture was placed on a board, until 10 pictures were on the board. The experimenter then said to the children, “I am going to cover up the pictures,” and asked, “How many do you think you will remember after I cover them?” The children then made a prediction of how many pictures they would remember. Finally, the children attempted to recall the pictures. In a series of experiments, children were overconfident in their ability even after repeated trials and even after correctly recalling their poor performance on previous trials.

Are there circumstances when children are more accurate? The simple answer is yes. In a recent experiment (Was & Al-Harthy, 2015), we found that when children complete the Lipko task with unfamiliar items, their predictions of how many items they might remember are significantly lower than for familiar items. This familiarity overconfidence bias is likely due to something similar to the fluency effect. That is, when the pictures are familiar to children, they seem easy to remember, but when the pictures are unfamiliar, children understand that they might be hard to recall later.

We are also investigating developmental trends in the ability to predict recall. Our most interesting finding to date is that calibration (the accuracy of recall predictions) is strongly related to increases in working memory capacity. Put differently, as the number of items children are able to recall increases, so does their ability to accurately predict the number of items they will recall. Some will argue that this is not an unexpected finding, the argument being that as working memory capacity increases, the ability to think about one’s own memory should also increase. My response is that it is not clear whether metacognition is directly related to working memory or executive functions. Perhaps a mediating relationship exists. Recent investigations have suggested that performance on many measures of working memory is more dependent on strategy than on cognitive ability. Perhaps metacognition is just good strategy use, or perhaps it is a cognitive ability.

The relationship between recall performance and calibration (the difference between predicted performance and actual performance) supports the hypothesis that metacognition is not a single skill that children either have or do not have, but rather a complex of many skills and processes that children acquire through experience and maturation. I suggest that developmental research in metacognition needs to focus on aptitude-by-treatment interactions. Questions such as, “What variety of academic activities contribute to the development of metacognition at different stages or levels of cognitive development?” will not only advance our understanding of metacognition, but perhaps also show how to help young students develop metacognitive strategies and metacognitive performance.

References

Cunningham, J. G., & Weaver, S. L. (1989). Young children’s knowledge of their memory span: Effects of task and experience. Journal of Experimental Child Psychology, 48, 32-44.

Flavell, J. H., Friedrichs, A. G., & Hoyt, J. D. (1970). Developmental changes in memorization processes. Cognitive Psychology, 1, 324-340.

Lipko, A. R., Dunlosky, J., & Merriman, W. E. (2009). Persistent overconfidence despite practice: The role of task experience in preschoolers’ recall predictions. Journal of Experimental Child Psychology, 103(2), 152-166.

Plumert, J. M. (1995). Relations between children’s overestimation of their physical abilities and accident proneness. Developmental Psychology, 31(5), 866-876. doi: http://dx.doi.org/10.1037/0012-1649.31.5.866

Was, C. A., & Al-Harthy, I. (2015). Developmental differences in overconfidence: When do children understand that attempting to recall predicts memory performance? The Researcher, 27(1), 1-5, Conference Proceedings of the 32nd Annual Conference of the Northern Rocky Mountain Education Research Association.


Making sense of how I learn: Metacognitive capital and the first year university student

In this article, Larmar and Lodge focus on the significance of encouraging metacognitive processing as a means of increasing student retention, enhancing university engagement, and fostering lifelong learning.

Larmar, S., & Lodge, J. (2014). Making sense of how I learn: Metacognitive capital and the first year university student. The International Journal of the First Year in Higher Education, 5(1), 93-105. doi:10.5204/intjfyhe.v5i1.193



Meta-Studying: Teaching Metacognitive Strategies to Enhance Student Success

“Elizabeth Yost Hammer, PhD, of Xavier University of Louisiana, discusses why psychology teachers are uniquely positioned not only to teach the content of psychology but also to teach students how to learn. Hammer presents some strategies to teach metacognitive skills in the classroom to enhance learning and improve study skills and encourages teachers to present students with information about Carol Dweck’s model of the “Fixed Intelligence Mindset.””

Dr. Elizabeth Yost Hammer’s Presentation (45 Minutes)


Dr. Derek Cabrera – How Thinking Works

“Dr. Derek Cabrera is an internationally recognized expert in metacognition (thinking about thinking), epistemology (the study of knowledge), human and organizational learning, and education. He completed his PhD and post-doctoral studies at Cornell University and served as faculty at Cornell and researcher at the Santa Fe Institute. He leads the Cabrera Research Lab, is the author of five books, numerous journal articles, and a US patent. Derek discovered DSRP Theory and in this talk he explains its benefits and the imperative for making it part of every students’ life.”

DSRP consists of four interrelated structures (or patterns); each structure has two opposing elements. The structures and their elements are:

  • Making Distinctions – which consists of an identity and an other
  • Organizing Systems – which consists of part and whole
  • Recognizing Relationships – which consists of action and reaction
  • Taking Perspectives – which consists of point and view

https://youtu.be/dUqRTWCdXt4  (15 minutes)


Metacognition in Psychomotor Development and Positive Error Cultures

by Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

All of us experience the “tip of the tongue” phenomenon. This state occurs when we truly do know something, such as the name of a person, but we cannot remember the person’s name at a given moment. The feeling that we do know is a form of metacognitive awareness that confirms the existence of a real neural network appropriate to the challenge. It is also an accurate knowing that carries confidence that we can indeed retrieve the name given the right memory trigger.

In “thinking about thinking,” some awareness of the connection between our psychomotor domain and our efforts to learn can be useful. The next time you encounter a tip-of-the-tongue moment, try clenching your left hand. Ruth Propper and colleagues found that left-hand clenching activates the right hemisphere of the brain and can enhance recall, while right-hand clenching activates the left hemisphere and can enhance encoding, for example when learning names (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0062474). Not all connections between the psychomotor domain and intellectual development are this direct, but it is very useful to connect efforts to develop intellectually with established ways of promoting psychomotor development.

Young people are active, so many things that excite them to initiate their learning have a heavy emphasis on psychomotor development. Examples are surfing, snowboarding, dance, tennis, martial arts, yoga, or a team sport. We can also include the hand-eye coordination and learning patterns involved in many addictive video games as heavy on kinesthetic learning, even though these do not offer health benefits of endurance, strength, flexibility, balance, etc. It is rare that anyone who commits to learning any of these fails to achieve measurably increased proficiency.

K-12 teacher Larry Ferlazzo uses the act of missing a wastebasket with a paper wad to help students understand how to value error and use it to inform strategies for intellectual development (http://larryferlazzo.edublogs.org/2011/10/31/an-effective-five-minute-lesson-on-metacognition). His students begin to recognize how transferring practices that they already accept as valid from their own experiences can improve their mastery of less familiar challenges in intellectual development.

College teachers also know that the most powerful paths to high-level thinking engage the psychomotor domain. Visualization that involves explaining to oneself by diagram and developing images of the knowledge engages psychomotor skills. Likewise, writing engages the psychomotor domain in developing text, in tracking and explaining reasoning, and in revising the work (Nuhfer, 2009, 2010a, b).

Students already “get” that many trips down the ski trail are needed to master snowboarding; they may not “get” that writing many evaluative argument papers is necessary to master critical thinking. In the former, they learn from their most serious error and focus on correcting it first. They correctly surmise that the focused effort to correct one troublesome issue will be beneficial. In efforts to develop intellectually, students deprived of metacognitive training may not be able to recognize or prioritize their most serious errors. This state deprives them of awareness needed to do better on subsequent challenges.

It is important for educators to recognize how particular cultures engage with error. Author and psychologist Gerd Gigerenzer (2014), Director of the Max Planck Institute for Human Development and the Harding Center for Risk Literacy, contrasts positive and negative error cultures. A positive error culture promotes recognition and understanding of error: participants discuss error openly, and sharing experienced error is valued as a way to learn. This culture nurtures a growth mindset in which participants speak metacognitively to themselves in terms of: “Not yet… change this… better next time.” Gigerenzer cites aviation as a positive error culture of learning that has managed to reduce plane crashes to one in ten million flights. Interestingly, the cultures of surfing, snowboarding, dance, tennis, martial arts, and yoga all promote development through positive error cultures. Positive error cultures make development through practice productive and emotionally safe.

Gigerenzer cites the American system of medical practice as one example of a negative error culture, wherein systems for reporting, discussing, and learning from serious errors are nearly nonexistent. Contrast aviation safety with the World Health Organization report that technologically advanced hospitals harm about 10% of their patients. James (2013) estimated that hospital error likely causes over 400,000 deaths annually (http://journals.lww.com/journalpatientsafety/Fulltext/2013/09000/A_New,_Evidence_based_Estimate_of_Patient_Harms.2.aspx). Negative error cultures make it unsafe to discuss or admit to error, and they are therefore ineffective learning organizations. In negative error cultures, error discovery results in punishment. Negative error cultures nurture fear and humiliation and thereby make learning unsafe. Error there delivers the metacognitive declaration, “I failed.”

We should consider in what ways our actions in higher education support positive or negative error cultures, and what kinds of metacognitive conversations we nurture in the participants (colleagues, students) of those cultures. We can often improve intellectual development by understanding how positive error cultures promote psychomotor development.

 

References

Gigerenzer, G. (2014). Risk Savvy: How to Make Good Decisions. New York, NY: Penguin.

Nuhfer, E.B. (2009) “A Fractal Thinker Designs Deep Learning Exercises: Learning through Languaging. Educating in Fractal Patterns XXVIII, Part 2.” The National Teaching & Learning Forum, Vol. 19, No. 1, pp. 8-11.

Nuhfer, E.B. (2010a) “A Fractal Thinker Designs Deep Learning Exercises: Acts of Writing as “Gully Washers”- Educating in Fractal Patterns XXVIII, Part 3.” The National Teaching & Learning Forum, Vol. 19, No. 3, pp. 8-11.

Nuhfer, E.B. (2010b) “A Fractal Thinker Designs Deep Learning Exercises: Metacognitive Reflection with a Rubric Wrap Up – Educating in Fractal Patterns XXVIII, Part 4.” The National Teaching & Learning Forum, Vol. 19, No. 4, pp. 8-11.


The relationship between goals, metacognition, and academic success

In this article, Savia Coutinho investigates the relationship between mastery goals, performance goals, metacognition (using the Metacognitive Awareness Inventory), and academic success.

Coutinho, S. (2007). The relationship between goals, metacognition, and academic success. Educate, 7(1), 39-47.


Metacognition: What Makes Humans Unique

by Arthur L. Costa, Professor Emeritus, California State University, Sacramento, and Bena Kallick, Educational Consultant, Westport, CT

————–

 

“I cannot always control what goes on outside. But I can always control what goes on inside.” – Wayne Dyer

————–

Try to solve this problem in your head:

How much is one half of two plus two?

Did you hear yourself talking to yourself? Did you find yourself having to decide whether you should take one half of the first two (which would give the answer three) or sum the twos first (which would give the answer two)?
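In symbols, the two readings differ only in grouping: (1/2 × 2) + 2 = 1 + 2 = 3, whereas 1/2 × (2 + 2) = 1/2 × 4 = 2. The arithmetic itself is easy; the interesting part is noticing yourself deciding which grouping to use.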

If you caught yourself having an “inner” dialogue inside your brain, and if you had to stop to evaluate your own decision making/problem-solving processes, you were experiencing metacognition.

The human species is known as Homo sapiens sapiens, which basically means “a being that knows their knowing” (or maybe it is “knows they are knowing”). What distinguishes humans from other forms of life is our capacity for metacognition—the ability to be a spectator of our own thoughts while we engage in them.

Occurring in the neocortex and therefore thought by some neurologists to be uniquely human, metacognition is our ability to know what we know and what we don’t know. It is our ability to plan a strategy for producing whatever information is needed, to be conscious of our own steps and strategies during the act of problem solving, and to reflect on and evaluate the productiveness of our own thinking. While “inner language,” thought to be a prerequisite, begins in most children around age five, metacognition is a key attribute of formal thought that flowers at about age eleven.

Interestingly, not all humans achieve the level of formal operations (Chiappetta, 1976). And as the Russian psychologist Alexander Luria found, not all adults metacogitate.

Some adults follow instructions or perform tasks without wondering why they are doing what they are doing. They seldom question themselves about their own learning strategies or evaluate the efficiency of their own performance. They have virtually no idea of what they should do when they confront a problem and are often unable to explain their strategies for decision making. There is much evidence, however, to demonstrate that those who perform well on complex cognitive tasks, who are flexible and persevere in problem solving, and who consciously apply their intellectual skills are those who possess well-developed metacognitive abilities. They are those who “manage” their intellectual resources well: 1) their basic perceptual-motor skills; 2) their language, beliefs, knowledge of content, and memory processes; 3) their purposeful and voluntary strategies intended to achieve a desired outcome; and 4) their self-knowledge about their own learning styles and how to allocate resources accordingly.

When confronted with a problem to solve, we develop a plan of action, we maintain that plan in mind over a period of time, and then we reflect on and evaluate the plan upon its completion. Planning a strategy before embarking on a course of action helps us keep track of the steps in the sequence of planned behavior at the conscious awareness level for the duration of the activity. It facilitates making temporal and comparative judgments; assessing the readiness for more or different activities; and monitoring our interpretations, perceptions, decisions, and behaviors. Rigney (1980) identified the following self-monitoring skills as necessary for successful performance on intellectual tasks:

  • Keeping one’s place in a long sequence of operations;
  • Knowing that a subgoal has been obtained; and
  • Detecting errors and recovering from those errors either by making a quick fix or by retreating to the last known correct operation.

Such monitoring involves both “looking ahead” and “looking back.” Looking ahead includes:

  • Learning the structure of a sequence of operations;
  • Identifying areas where errors are likely;
  • Choosing a strategy that will reduce the possibility of error and will provide easy recovery; and
  • Identifying the kinds of feedback that will be available at various points, and evaluating the usefulness of that feedback.

Looking back includes:

  • Detecting errors previously made;
  • Keeping a history of what has been done to the present and thereby what should come next; and
  • Assessing the reasonableness of the present immediate outcome of task performance.

A simple example of this might be drawn from reading. While reading a passage have you ever had your mind “wander” from the pages? You “see” the words but no meaning is being produced. Suddenly you realize that you are not concentrating and that you’ve lost contact with the meaning of the text. You “recover” by returning to the passage to find your place, matching it with the last thought you can remember, and, once having found it, reading on with connectedness.

Effective thinkers plan for, reflect on, and evaluate the quality of their own thinking skills and strategies. Metacognition means becoming increasingly aware of one’s actions and the effects of those actions on others and on the environment; forming internal questions in the search for information and meaning; developing mental maps or plans of action; mentally rehearsing before a performance; monitoring plans as they are employed (being conscious of the need for midcourse correction if the plan is not meeting expectations); reflecting on the completed plan for self-evaluation; and editing mental pictures for improved performance.

This inner awareness and the strategy of recovery are components of metacognition. Indicators that we are becoming more aware of our own thinking include:

  • Are you able to describe what goes on in your head when you are thinking?
  • When asked, can you list the steps and tell where you are in the sequence of a problem-solving strategy?
  • Can you trace the pathways and dead ends you took on the road to a problem solution?
  • Can you describe what data are lacking and your plans for producing those data?

When students are metacognitive, we should see them persevering more when the solution to a problem is not immediately apparent. This means that they have systematic methods of analyzing a problem, knowing ways to begin, knowing what steps must be performed and when they are accurate or are in error. We should see students taking more pride in their efforts, becoming self-correcting, striving for craftsmanship and accuracy in their products, and becoming more autonomous in their problem-solving abilities.

Metacognition is an attribute of the “educated intellect.” Helping students learn to think about their thinking can be a powerful tool for shaping, improving, internalizing, and habituating that thinking.

REFERENCES

Chiappetta, E. L. (1976). A review of Piagetian studies relevant to science instruction at the secondary and college level. Science Education, 60, 253-261.

Costa, A., & Kallick, B. (2008). Learning and Leading with Habits of Mind: 16 Characteristics for Success. Alexandria, VA: ASCD.

Rigney, J. W. (1980). Cognitive learning strategies and qualities in information processing. In R. Snow, P. Federico & W. Montague (Eds.), Aptitudes, Learning, and Instruction, Volume 1. Hillsdale, NJ: Erlbaum.

 


How Do You Increase Your Student’s Metacognition?

Aaron S. Richmond

Metropolitan State University of Denver

 

How many times has a student come to you and said “I just don’t understand why I did so bad on the test?” or “I knew the correct answer but I thought the question was tricky.” or “I’ve read the chapter 5 times and I still don’t understand what you are talking about in class.”? What did you say or do for these students? Did it prompt you to wonder what you can do to improve your students’ metacognition? I know many of us at Improve with Metacognition (IwM) started pursuing research on metacognition because of these very experiences. As such, I have compiled a summary of some of the awesome resources IwM bloggers have posted (see below). These instructional strategies can generally be categorized as either self-contained lessons, that is, lessons that can teach some aspect of metacognition in one or two class sessions, or course-long strategies that require an entire semester to teach.

Self-Contained Instructional Strategies

In Stephen Chew’s blog post, Metacognition and Scaffolding Student Learning, he suggests that one way to improve metacognitive awareness is through well-designed review sessions (Chew, 2015). First, Chew suggests that students benefit metacognitively when they actively participate in review sessions and that instructors should incentivize that participation. Second, Chew suggests that students should self-test before the review so that it is truly a review. Third, have students predict their exam scores based on their review performance and have them reflect on their predictions after the exam.

Ed Nuhfer (2015) describes a way to increase metacognition through role play. Ed suggests that we can use Edward De Bono’s Six Thinking Hats method to train our students to increase their metacognitive literacy. In essence, using this method we can train our students to think in a factual way (white hat), to be positive and advocate for specific positions (yellow hat), to be cautious (black hat), to recognize all facets of our emotions (red hat), to be provocative (green hat), and to be reflective and introspective (blue hat). We can do this through several exercises in which students take turns wearing the different hats.

In David Westmoreland’s (2014) blog post, he discusses a classroom exercise to improve metacognition. David created a “metacognitive lab that attempts to answer the question How do you know?” In the lab, he presents small groups of students with a handful of “truth” statements (e.g., Eggs are fragile.). The students must then take each statement and justify (on the board) how it is true. Next, the class eliminates the justifications it knows not to be true. Finally, the students discuss with one another the process and why the statements were eliminated.

Course Long Instructional Strategies

Chris Was (2014) investigated whether “variable weight-variable difficulty tests” would improve students’ calibration (i.e., knowing when you know something and knowing when you don’t). Chris has his students take several quizzes. In each quiz, students can weight each question for a varying number of points (e.g., question 1 is easy so I will give it 5 points, whereas question 4 is hard so I will only give it 2 points). Students then indicate whether they believe they got each question correct or not. After each quiz is graded, a teaching assistant goes over the quiz and discusses with the students why they weighted the questions the way they did and why they thought they would or would not get each question correct. Was found that this activity helped his students become better at knowing when they knew or did not know something.
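To make the idea of calibration concrete, here is a minimal sketch (in Python) of how a weighted quiz score and a simple agreement-based calibration index could be computed. The function, field names, and scoring rule are illustrative assumptions on my part, not Chris Was’s actual instrument or analysis.

    # Hypothetical scoring sketch: each quiz question carries a student-assigned weight,
    # whether the answer was actually correct, and whether the student predicted it
    # would be correct. Calibration here is the share of predictions that matched reality.
    def score_quiz(questions):
        earned = sum(q["weight"] for q in questions if q["correct"])
        possible = sum(q["weight"] for q in questions)
        matches = sum(q["correct"] == q["predicted_correct"] for q in questions)
        calibration = matches / len(questions)  # 1.0 = the student always knew when they knew
        return earned / possible, calibration

    quiz = [
        {"weight": 5, "correct": True, "predicted_correct": True},   # easy item, confident, right
        {"weight": 2, "correct": False, "predicted_correct": True},  # hard item, overconfident
    ]
    print(score_quiz(quiz))  # -> (0.7142857142857143, 0.5)

In Was’s activity, it is the post-quiz discussion with the teaching assistant that turns a number like the second one into a prompt for students to reconsider how well they monitor their own knowledge.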

Similarly, Schumacher and Taraban (2015) discussed the use of the testing effect as a method to improve metacognition. They suggest that the results for testing as an instructional method are mixed: in their work, when students were repeatedly tested and were exposed to questions on multiple exams, only low-achieving students benefited metacognitively.

John Draeger (2015) uses just-in-time teaching in an attempt to improve metacognition. John asks students metacognitive prompting questions (e.g., What is the most challenging part of the reading?) and has them submit their answers before coming to class. Although he has not measured the efficacy of this method, students have responded positively to the process.

Parting Questions to Further this Important Conversation

There are many other instructional methods used to increase student metacognition described throughout IwM that are both self-contained and semester long. Please check them out!

But even considering all of what has been presented in this blog and available on IwM, I couldn’t help but leave you with some unanswered questions that I myself have:

  1. What other instructional strategies have you used to increase student metacognition?
  2. If you were to choose between a self-contained or semester long method, which one would you choose and why? Meaning, what factors would help you determine which method to use? Instructional goals? How closely it relates to course content? Time commitment? Level of student metacognitive knowledge? Level of the course?
  3. Once you have chosen a self-contained or semester long method, how should implementation methods differ? That is, what are the best practices used when implementing a self-contained vs. semester long technique?
  4. Finally, instructional strategies for improving metacognition in higher education are often pulled from studies and experiments conducted in K-12 education. Are there any studies you can think of that would be suitable for testing in higher education? If so, how and why?

References

Beziat, T. (2015). Goal monitoring in the classroom. Retrieved from https://www.improvewithmetacognition.com/goal-monitoring-in-the-classroom/

Chew, S. (2015). Metacognition and scaffolding student learning. Retrieved from https://www.improvewithmetacognition.com/metacognition-and-scaffolding-student-learning/

Draeger, J. (2015). Using Just-in-Time assignments to promote metacognition. Retrieved from https://www.improvewithmetacognition.com/using-just-in-time-assignments-to-promote-metacognition/

Nilson, L. B. (2015). Metacognition and specifications grading: The odd couple? Retrieved from https://www.improvewithmetacognition.com/metacognition-and-specifications-grading-the-odd-couple/

Nuhfer, E. (2015). Developing metacognitive literacy through role play: Edward De Bono’s six thinking hats. Retrieved from https://www.improvewithmetacognition.com/developing-metacognitive-literacy-through-role-play-edward-de-bonos-six-thinking-hats/

Schumacher, J., & Taraban, R. (2015). To test or not to test: That is the metacognitive question. Retrieved from https://www.improvewithmetacognition.com/to-test-or-not-to-test-that-is-the-metacognitive-question/

Was, C. (2014). Testing improves knowledge monitoring. Retrieved from https://www.improvewithmetacognition.com/testing-improves-knowledge-monitoring/

Westmoreland, D. (2014). Science and social controversy—A classroom exercise in metacognition. Retrieved from https://www.improvewithmetacognition.com/science-and-social-controversy-a-classroom-exercise-in-metacognition/

 


Metacognition and Scaffolding Student Learning

by Dr. Stephen Chew, Samford University, slchew@samford.edu

Scaffolding learning involves providing instructional support for students so that they can develop a greater understanding of a topic than they could on their own. The concept of scaffolding originated with the work of Vygotsky and was later developed by Bruner. Scaffolding is not simply giving students the answers, but helping students understand the chain of reasoning or evidence that leads to an answer. I argue that metacognition plays a crucial role in effective scaffolding. Without metacognitive awareness, attempts at scaffolding may only create overconfidence in students without any learning. Let’s examine a common scaffolding activity, review sessions for exams.

Early in my career I used to give review sessions until I realized that they weren’t being helpful to the students who needed them most. I gave students old exams to try to answer for their review. Since I change textbooks regularly, there were questions on the old exams on topics that weren’t covered in the current class. I thought the discrepancy would be obvious when students got to those questions, but only the very best students noticed. Most students answered the questions, basically by guessing, completely unaware that we had never covered the topic. In addition, many students would simply read the question and then check the answer to see if they had guessed correctly without trying to reason through the question or using it as an indicator of their degree of understanding. I realized that students hadn’t studied the material before the review session. They were using the session as a substitute for actually studying. Just going through the review session increased their (false) confidence that they had studied without increasing their learning. It was my first encounter with poor metacognition. The issue with a lot of the struggling students wasn’t the content, but their metacognition and study skills, which my review sessions weren’t addressing. So I stopped doing them.

In recent years, though, I’ve thought about bringing them back with changes to address poor metacognition. First, we know that students who most need review sessions are least likely to think they need them, so I would somehow require participation. This is one reason why I believe that brief formative assessments in class, where everyone has to participate, are better than separate, voluntary review sessions. If I were to reinstate separate review sessions, I might make participation worth a small portion of the exam grade. Second, I would somehow require that students had done their best to study for the exam BEFORE coming to the review session so that it is truly a review. Third, the review session would have to induce students to use good study strategies, such as self-testing with feedback and reflection, or interleaving. I might require students to generate and post three good questions they want answered about the material as their entry ticket to the review session. This would require students to review material before the review session, and question generation is itself an effective learning strategy. Finally, I would require students to use the feedback from the review to recognize the level of their understanding and what they need to do to improve. I might have them predict their exam grade based on their review performance. All of these changes should increase student metacognition. I’m sure I’d have to experiment with the format to figure it out, and my solution may not work for other classes or faculty. It’s never a simple matter of whether an activity such as a review session is a good or bad idea; what matters is how it is implemented.

Without metacognitive awareness, scaffolding can backfire. Consider how poor metacognition can undermine other scaffolding activities such as releasing PowerPoint slides of lectures, guided note taking, allowing a formula “cheat sheet” in STEM classes, and allowing students to discard a certain number of exam items they think they got wrong. If students lack metacognition, each of these activities can actually be counterproductive for student learning.


Supports and Barriers to Students’ Metacognitive Development in a Large Intro Chemistry Course

by Ashley Welsh, Postdoctoral Teaching & Learning Fellow, Vantage College

First off, I must admit that this blog posting has been a long time coming. I was fortunate enough to meet both John Draeger and Lauren Scharff at the ISSOTL conference in Quebec City in October of 2014. Their “Improving With Metacognition” (IWM) poster was a beacon for someone such as myself who is engaged with metacognition in both my teaching and research. I was thrilled to know there were individuals creating and contributing to a repository of literature and reflections surrounding metacognition. This past January, John asked me to contribute a blog post to the website, however I thought it best to defer my writing until after the completion of my PhD this past spring. Thus, here I am. Ready to write.

For the past 7 years I have been actively engaged with undergraduate science education and research at the University of British Columbia (UBC). Within my research and teaching, I have become increasingly aware of students’ concerns with developing and adapting the appropriate study habits/strategies for success in their introductory courses. This concern was echoed by several of my colleagues teaching large (300+ students/section) introductory math and science courses.

This growing concern led me to explore students’ metacognitive development in two sections of a large, second-year introductory organic chemistry course for biological science majors (~245 students/section). Across the literature and at UBC, this course has a reputation as a challenging, cumulative course in which students often fail to develop meaningful learning strategies and fall behind (Grove & Bretz, 2012; Lynch & Trujillo, 2011; Zhao et al., 2014). As a result of its reputation, the instructor with whom I was working designed several formative assessments (e.g. bi-weekly in-class quizzes, written reflections), scaffolded in-class activities (e.g. targeted study strategy readings and discussion), and workshops to improve students’ learning strategies, that is, to improve their ability to control, monitor, evaluate, and plan their learning processes (Anderson & Nashon, 2007; Thomas, 2012). Despite students’ high completion of these targeted activities/homework, many still seemed to be struggling with how to study effectively. As such, we were curious to understand the barriers and supports for students’ metacognitive development in this particular course.

My research adopted an interpretive case study approach (Creswell, 2009; Stake, 1995) with data being collected via a pre/post metacognitive instrument, a student feedback survey, classroom observations, and student interviews. At this point I will not get into the nitty-gritty details of my thesis, but instead will draw on a few of the main observations/themes that emerged from my work.

  1. High stakes assessments may overshadow resources designed for metacognitive development: Within this course, students placed considerable emphasis on high stakes assessment as a means for studying, learning, and reflection. Despite students perceiving the formative assessment measures (e.g. in-class quizzes, homework assignments, targeted study strategy activities) as useful to their learning, the majority of them identified the midterm and final examinations as driving their studying and behaviours. The examinations were worth roughly 75% of students’ grades and as such, students expressed being more concerned with their performance on these high stakes assessments than with their own study strategies. Students indicated that because the formative activities and workshops were only worth about 15% of their grade, they rarely reflected back on these resources or implemented the advised learning strategies. While these resources were designed to provide ongoing feedback on students’ learning strategies and performance, students mentioned that their performance on the first midterm exam was the primary crossroad at which they would explicitly reflect upon their learning strategies. As one student mentioned, “The midterm is the first major point at which you realize you didn’t understand things”. Unfortunately, this was often too late in the semester for most students to effectively change their strategies.
  2. The majority of students reported difficulty implementing metacognitive strategies for enhanced learning: While many students were aware of their weaknesses and lack of concentration when studying, they still struggled with effectively monitoring, evaluating and planning their learning. One student mentioned that “while I do study hard, I don’t think I study smart”. Even when students were aware of their issues, implementing change was difficult as they weren’t exactly sure what to do. Despite the instructor modeling effective strategies and providing multiple opportunities for students to reflect on their learning, several students had difficulty with acknowledging, recognizing, or implementing this advice. Students unanimously praised the efforts of the instructor and the multiple resources she created to support their learning, but outside of class, students often struggled with staying on task or changing their behaviours/attitudes. Some students mentioned they were more concerned with getting a question right than with understanding the problem solving process or with implementing the appropriate strategies for learning. The majority of students I spoke to indicated that throughout their education they had rarely received explicit advice about how to study and some even mentioned that despite writing down the advice they received in class, they were “far too lazy to change”. With learning strategies not taking a primary role in their previous and current education, it’s not surprising that most students found it difficult to implement appropriate strategies for learning.
  3. Students emphasized the importance of gaining awareness of oneself as a learner and seeking help from others: While students acknowledged that the demanding course material and high-stakes assessments were barriers to their learning, they also noted the critical influence that their own strategies and abilities as learners had on their experience and performance. Some students viewed their own stubbornness or personal issues as reasons why they were “too lazy to change” or more likely to “stick with what I already know. Like memorizing and cramming”. When asked to provide advice for incoming students, all of the students I interviewed (n=26) mentioned the necessity for students to “know yourself and what suits you best. And change it – experiment with it. Know how you study. Know that.” This comment was echoed by several students who emphasized the need for every student to be aware of their weaknesses as learners and to actively and immediately seek help from others when concerned or confused. Students who exhibited effective learning strategies were more likely to attend office hours, to create study groups, and to implement and evaluate the instructor’s study advice. Furthermore, these students could explicitly articulate the strategies they used for studying and could identify which course resources were most influential to their learning approaches.

The three themes described above are only a snapshot of some of the issues unveiled within my doctoral research. They have led me to consider more research that could explore:

  • How increasing the weight (percentage of the final grade) of the formative assessment/activities relative to the high-stakes examinations might impact students’ learning strategies/behaviours;
  • How to appropriately shift students’ fixations on grades to that of understanding and learning;
  • How we might better support students in seeing value in activities, resources, or low-stakes assessment that have been designed to support them as metacognitive, confident learners; and
  • How we might achieve these assessment and learning goals in large, introductory science courses.

I look forward to any comments/questions you have on this topic!

-Ashley

——————————–

Anderson, D., & Nashon, S. (2007). Predators of knowledge construction: Interpreting students’ metacognition in an amusement park physics program. Science Education, 91(2), 298-320. doi: 10.1002/sce.20176

Creswell, J. W. (2009). Research design, qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage.

Grove, N. P., & Bretz, S. L. (2012). A continuum of learning: from rote memorization to meaningful learning in organic chemistry. Chemistry Education Research and Practice, 13, 201-208.

Lynch, D. J., & Trujillo, H. (2011). Motivational beliefs and learning strategies in organic chemistry. International Journal of Science and Mathematics Education, 9, 1351-1365.

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Thomas, G. (2012). Metacognition in science education: Past, present, and future considerations. In B. J. Fraser, K. Tobin & C. J. McRobbie (Eds.), Second International Handbook of Science Education (pp. 131-144). Springer International Handbooks of Education.

Zhao, N., Wardeska, J. G., McGuire, S. Y., & Cook, E. (2014). Metacognition: An effective tool to promote success in college science learning. Journal of College Science Teaching, 43(4), 48-54.


Habits of Mind

by Arthur L. Costa, Ed. D. (Professor Emeritus, California State University, Sacramento). This paper summarizes 16 attributes of what human beings do when they behave intelligently, referred to as Habits of Mind. Metacognition is the 5th mentioned (see a nice summary of all 16 on the final page). Dr. Costa points out that these “Habits of Mind transcend all subject matters commonly taught in school. They are characteristic of peak performers whether they are in homes, schools, athletic fields, organizations, the military, governments, churches or corporations.”


To Test or Not to Test: That is the Metacognitive Question

by John Schumacher & Roman Taraban at Texas Tech University

In prepping for upcoming classes, we are typically interested in how best to structure the class to promote the most effective learning. Applying best-practices recommendations in the literature, we try to implement active learning strategies that go beyond simple lecturing. One such strategy that research has found to be effective is the use of testing. The inference to draw from the research literature is quite simple: test students frequently, informally, and creatively, over and above standard course tests, like a mid-term and final. Testing is a useful assessment tool, but research has shown that it is also a learning tool that promotes learning above and beyond simply rereading material (Roediger & Karpicke, 2006a). This is called the testing effect. In controlled studies, researchers have shown testing effects with a variety of materials, including expository texts and multimedia presentations (e.g., Carrier & Pashler, 1992; Huff, Davis, & Meade, 2013; Johnson & Mayer, 2009; Roediger & Karpicke, 2006b). Testing has been found to increase learning when implemented in a classroom setting (McDaniel, Anderson, Derbish, & Morrisette, 2007) and is a useful learning tool for people of all ages (Meyer & Logan, 2013). The theoretical explanation for the benefits of testing is that testing strengthens retrieval paths to the stored information in memory more so than simply rereading the material. Therefore, later on a person can more effectively recover the information from memory.

Although implementing testing and other active learning strategies in the classroom is useful in guiding and scaffolding student learning, it is important that we develop an understanding of when and for whom these strategies are most helpful. Specifically, regarding testing, research from our lab and in others is starting to show that testing may not always be as beneficial as past research suggests. Characteristics of the students themselves may nullify or even reverse the benefits of testing. Thus, the first question we address is whether frequent classroom testing will benefit all students. Yet a more pertinent question, which is our second question, is whether frequent testing develops metacognitive practices in students. We will discuss these in turn.

In a formal study of the testing effect, or in an informal test in any classroom, one needs two conditions, a control condition in which participants study the material on their own for a fixed amount of time, and an experimental condition in which participants study and are tested over the material, for instance, in a Study-Test-Study-Test format. Both groups spend an equal amount of time either simply studying or studying and testing. All participants take a final recall test over the material. Through a series of testing-effect studies incorporating expository texts as the learning material, we have produced a consistent grade-point average (GPA) by testing-effect interaction. This means that the benefits of testing (i.e., better later retrieval of information) depend on students’ GPAs! A closer look at this interaction showed us that students with low GPAs benefited most from the implementation of testing whereas mid to high GPA students benefited just as much by simply studying the material.
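For readers who want to see what a “GPA by testing-effect interaction” looks like in analytic terms, here is a minimal regression sketch. The data file, the column names, and the modeling choice are hypothetical illustrations, not the authors’ actual analysis.

    # Hypothetical illustration of testing for a GPA-by-condition interaction with
    # ordinary least squares. Assumed columns: recall = final test score,
    # gpa = grade-point average, condition = 0 (study-only) or 1 (study-test).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("testing_effect_data.csv")  # hypothetical file, one row per participant
    model = smf.ols("recall ~ condition * gpa", data=df).fit()
    print(model.summary())
    # A negative condition:gpa coefficient would mean the benefit of being in the
    # testing condition shrinks as GPA rises, the pattern described in this post.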

While at this preliminary stage it is difficult to ascertain why exactly low GPA students benefit from testing in our experiments while others do not, a few observations can be put forth. First, at the end of the experiments, we asked participants to report any strategies they used on their own to help them learn the materials. Metacognitive reading strategies that the participants reported included focusing on specific aspects of the material, segmenting the material into chunks, elaborating on the material, and testing themselves. Second, looking further into the students’ self-reports of metacognitive strategy use, we found that participants in the medium to high GPA range used these strategies often, while low GPA students used them less often. Simply, the self-regulated use of metacognitive strategies was associated with higher GPAs and better recall of the information in the texts that the participants studied. Lower GPA students benefited when the instructor deliberately imposed self-testing.

These results are interesting because they indicate that the classroom implementation of testing may only be beneficial to low achieving students because they either do not have metacognitive strategies at their disposal or are not applying these strategies. High-achieving students may have metacognitive strategies at their disposal and may not need that extra guidance set in place by the instructor.

Another explanation for the GPA and testing-effect interaction may simply be motivation. Researchers have found that GPA correlates with motivation (Mitchell, 1992). It is possible that implementing a learning strategy may be beneficial to low GPA students because it forces them to work with the material. Motivation may also explain why GPA correlated with metacognitive strategy use. Specifically, if lower GPA students are less motivated to work with the material, it stands to reason that they would be less likely to employ learning strategies that take time and effort.

This leads to our second question: Does frequent testing develop metacognitive skills in students, particularly self-regulated self-testing? This is a puzzle that we cannot answer from the current studies. Higher-GPA students appear to understand the benefits of applying metacognitive strategies and do not appear to need additional coaxing from the experimenter/teacher to apply them. Will imposing self-testing, or any other strategy on lower-GPA students lead them to eventually adopt the use of these strategies on their own? This is an important question and one that deserves future attention.

While testing may be useful for bolstering learning, we suggest that it should not be blindly utilized in the classroom as a learning tool. A consideration of what is being taught and to whom will dictate the effectiveness of testing as a learning tool. As we have suggested, more research also needs to be done to figure out how to bring metacognitive strategies into students’ study behaviors, particularly low-GPA students.

References

Carrier, M., & Pashler, H. (1992). The influence of retrieval on retention. Memory & Cognition, 20(6), 633-642.

Huff, M. J., Davis, S. D., & Meade, M. L. (2013). The effects of initial testing on false recall and false recognition in the social contagion of memory paradigm. Memory & Cognition, 41(6), 820-831.

Johnson, C. I., & Mayer, R. E. (2009). A testing effect with multimedia learning. Journal of Educational Psychology, 101(3), 621-629.

McDaniel, M. A., Anderson, J. L., Derbish, M. H., & Morrisette, N. (2007). Testing the testing effect in the classroom. European Journal of Cognitive Psychology, 19(4-5), 494-513.

Meyer, A. D., & Logan, J. M. (2013). Taking the testing effect beyond the college freshman: Benefits for lifelong learning. Psychology and Aging, 28(1), 142-147.

Mitchell, J. V., Jr. (1992). Interrelationships and predictive efficacy for indices of intrinsic, extrinsic, and self-assessed motivation for learning. Journal of Research and Development in Education, 25(3), 149-155.

Roediger, H., & Karpicke, J. D. (2006a). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1(3), 181-210.

Roediger, H., & Karpicke, J. D. (2006b). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255.


Using Just-in-Time assignments to promote metacognition

by John Draeger (SUNY Buffalo State)

In a previous post entitled “Just-in-time for metacognition,” I argued that Just-in-Time teaching techniques could be used to promote both higher-order thinking and metacognition. Just-in-Time teaching techniques require that students submit short assignments prior to class for review by the instructor before class begins (Novak et al., 1999; Simkins & Maier, 2009; Scharff et al., 2011). In my philosophy courses, students send their answers to me electronically the night before class and I spend the morning of class using their answers to shape my pre-class planning. I’ve had success with higher-order-thinking questions, but I tended to ask students questions about their learning process only when the class had clearly gone off track. Since I’ve become convinced that developing good metacognitive habits requires practice, I’ve made metacognitive questions a regular component of my Just-in-Time assignments. In this post, I thought I would let you know how things are going.

Research shows that students learn more effectively when they are aware of their own learning process (I encourage you to surf around this site for examples). Borrowing from Tanner (2012) and Scharff (2014), I have asked students to think about why and how they engage in various learning strategies (e.g., reading, writing, reflecting). More specifically, I have asked: what was the most challenging part of the reading? Was the current reading more challenging than the last? What was the most useful piece of the reading? What was the most challenging piece of the reading? What was your reading strategy this week? How might you approach the reading differently next time? What was the most challenging part of the last writing assignment? How might you approach your next writing assignment differently? What are your learning goals for the week?

Responses from students at all levels have been remarkably similar. In particular, student responses fall into three broad categories: general student commentary (e.g., about the course, the reading, or a particular assignment), content (e.g., students reframe the metacognition question and answer it with course content), and reflective practice (e.g., students actually reflect on their learning process).

First Type of Response: General Commentary

  • When asked to describe the most challenging part of the reading, students took the opportunity to observe that the reading was too long, too boring, or it was interesting but confusing.
  • When asked to describe the most useful part of the reading, students often said that the question was difficult to answer because the reading was too long, too boring, or it was interesting but confusing.
  • When asked about their reading strategy, students observed that they did their best but the reading was too long, too boring, or interesting but confusing.
  • When asked about their learning goals for the week, students said that the question was strange, off the wall, and they had never been asked such a thing before.

Second Type of Response: Content

  • When asked to describe the most challenging part of the reading, students identified particular examples that were hard to follow and claims that seemed dubious.
  • When asked to describe the most useful part of the reading, students often restated the central question of the week (e.g., is prostitution morally permissible? should hate speech be restricted?) or summarized big issues (e.g., liberty argument for the permissibility of prostitution or hate speech).
  • When asked about their reading strategy, students often said that they wanted to understand a particular argument for that day (e.g., abortion, euthanasia, prostitution).
  • When asked their learning goal for the week, students said that they wanted to explore a big question (e.g., the nature of liberty or equality) and put philosophers into conversation (this is a major goal in all my courses).

Third Type of Response: Reflective practice

  • When asked to describe the most challenging part of the reading, students said that they didn’t give themselves enough time, they stretched it over multiple days, or they didn’t do it at all.
  • When asked about the most useful part of the reading, some students said that the reading forced them to challenge their own assumptions (e.g., “I always figured prostitution was disgusting, but maybe not”).
  • When asked about their reading strategies, some said that they had to read the material several times. Some said they skimmed the reading and hoped they could piece it together in class. Others found writing short summaries to be essential.
  • When asked about their learning goals for the week, some students reported wanting to become more open-minded and more tolerant of people with differing points of view.

Responses to the metacognitive prompts have been remarkably similar from students in my freshman to senior level courses. In contrast, I can say that there’s a marked difference by class year in responses to higher-order thinking prompts, possibly because I regularly use student responses to higher-order thinking prompts to structure class discussion. While I gave students some feedback on their metacognitive prompt responses, in the future I could be more intentional about using their responses to structure discussions of the student learning process.

I also need to refine my metacognition-related pre-class questions. For example, asking students to discuss the most challenging part of a reading assignment encourages students to reflect on roadblocks to understanding. The question is open-ended in a way that allows students to locate the difficulty in a particular bit of content, a lack of motivation, or a deficiency in reading strategy. However, if I want them to focus on their learning strategies, then I need to focus the question in ways that prompt that sort of reflection. For example, I could reword the prompt as follows: Identify one challenging passage in the reading this week. Explain why you believe it was difficult to understand. Discuss what learning strategy you used, how you know whether the strategy worked, and what you might do differently next time. Revising the questions so that they have a more explicitly metacognitive focus is especially important given that students are often unfamiliar with metacognitive reflection. If I can be more intentional about how I promote metacognition in my courses, then perhaps there can be gains in the metacognitive awareness demonstrated by my students. I’ll keep you posted.

References

Novak, G., Patterson, E., Gavrin, A., & Christian, W. (1999). Just-in-time teaching: Blending active learning with web technology. Upper Saddle River, NJ: Prentice Hall.

Scharff, L. (2014). Incorporating metacognitive leadership development in class. Retrieved from https://www.improvewithmetacognition.com/incorporating-metacognitive-leadership-development-in-class/.

Scharff, L., Rolf, J., Novotny, S., & Lee, R. (2011). Factors impacting completion of pre-class assignments (JiTT) in Physics, Math, and Behavioral Sciences. In C. Rust (Ed.), Improving Student Learning: Global Theories and Local Practices: Institutional, Disciplinary and Cultural Variations. Oxford Brookes University, UK.

Simkins, S., & Maier, M. (2009). Just-in-time teaching: Across the disciplines, across the academy. Stylus Publishing, LLC.

Tanner, K. D. (2012). Promoting student metacognition. CBE-Life Sciences Education, 11(2), 113-120.