Breaking the Content Mold: The Challenge of Shaping Student Metacognitive Development

by Dr. Lauren Scharff, U. S. Air Force Academy

We all know that it’s difficult to break long-term patterns of behavior, even when we’re genuinely motivated and well intentioned. It becomes significantly more difficult when we are trying to shift behavioral patterns of groups. This is true across a spectrum of situations and behaviors, but in this post I will focus on teachers and students shifting from a focus on content and basic skills to a focus on higher-level thinking and metacognitive skills.

These musings on “breaking the content mold” have become much more salient as I look forward to a new semester and exchange ideas with colleagues about how we will approach our upcoming classes. I refer to the “content mold” as a way of illustrating how we, both students and teachers, have been shaped, or molded, by many years of prior experiences and expectations. Due to this shaping, the natural default for both groups is to teach or learn in ways that we have been exposed to in the past, especially if those approaches have seemed successful. For many of us, this default is a focus on content and on disciplinary skills closely linked with the content. With conscious effort we can break out of that molded pattern of behavior to encourage interdisciplinary thinking and higher-level thinking skills that transfer beyond our course. However, when things get tough (e.g., time constraints, high cognitive load, or pressure to achieve success as measured by exam scores), we tend to revert to more familiar patterns of behavior, which for many of us means a focus on content and basic skills rather than on higher-level thinking or metacognitive strategies.

Similarly, in an earlier post on this site, Ed Nuhfer points out that, “When students learn in most courses, they engage in a three-component effort toward achieving an education: (1) gaining content knowledge, (2) developing skills (which are usually specific to a discipline), and (3) gaining deeper understanding of the kinds of thinking or reasoning required for mastery of the challenges at hand. The American higher educational system generally does best at helping students achieve the first two. Many students have yet to even realize how these components differ, and few ever receive any instruction on mastering Component 3.”

One of the biggest challenges to breaking this molded pattern is that it will be far more likely to be successful if both the teacher and the student are genuinely engaged in the effort. No matter how much effort is put forth by an instructor, if value is not perceived by the student, then little change will occur. Similarly, even if a student has learned the value of higher-level thinking and metacognitive approaches, if a teacher doesn’t seem to value those efforts, then a student will astutely focus on what does seem to be valued by the teacher. A further challenge is that, over the course of a semester, the effort and motivation from both groups might wax and wane in a non-synchronous manner. As I explore these challenges, I will use myself and my less-than-successful efforts last semester as an example.

I taught an upper-level majors course in vision science, and because I have taught this course many times, I knew going in that the material is often unexpectedly challenging to students and most of them find the chapter readings to be difficult. (They contain a lot of brain biology and neural communication topics, and my students are not biology majors.) Thus, I decided to build in a low-threat (small number of points), intentional, metacognitive reflection assignment for each lesson that had a reading. Students would indicate their level of reading completion (six levels ranging from a thorough reading with annotations, through skimming, to not reading at all) and their level of understanding of the material before class. If they had problems with any of the material, they were to indicate what steps they would take to develop understanding. They would record these responses and turn them in at mid-semester and at the end of the semester. I had hoped that this regular reflection would prompt their awareness of their reading behaviors and their level of learning from the reading, initiate proactive behaviors if their understanding was poor, and build habits through regular completion. I also took time at the start of the semester to explicitly explain why I was incorporating this regular reflection assignment.

Unfortunately, except for a couple of students, I would rate this assignment as a failure. I don’t believe it did any harm, but I also don’t believe that students used it as intended. Rather, I think most of them quickly and superficially answered the questions just so they could turn in their logs at the two required times. This type of explicit reflection is not something they had been asked to do in the majority (all?) of their prior courses, and they already had other strategies that seemed to work for their success in other classes. For example, more than halfway through the semester one student informed me that it was simply easier and faster to come to the teacher’s office and get reading guide answers (or homework problem solutions in other courses) rather than deeply read and try to figure things out on his own. Thus, if he didn’t understand as he skimmed, he didn’t worry about it. This approach wasn’t working well in my course, but up to that point he’d been very successful, so he persisted in using it (although I stopped answering his questions in my office until he could demonstrate that he’d at least tried to figure them out).

In hindsight, I believe that my actions (or lack of them) also fed into the failure. I assumed that the increased awareness produced by the reflection prompts, including the prompt about what they would do to increase their understanding, would lead students to bring their questions to class. Thus, if there were no questions (typically the case), I used the class time to connect the readings with related application examples and demonstrations rather than reiterating what was in the readings. The students seemed engaged in class and showed no indication of specific problems with the readings. Their personal application reflection writing assignments (separate from the reading logs) were fantastic. However, their poor exam performance suggested that they weren’t deeply understanding the content, and I instinctively shifted back to my prior content-focused approaches. I also did not take time in class to directly ask them about their understanding of the readings, what parts they found most challenging, and why.

Thus, although I know I wanted to support the development of student metacognitive skills, and my students also seemed accepting of that goal when I introduced it to them at the beginning of the semester, both groups of us quickly reverted to old content-focused habits that had been “successful” in the past. I am not the first to note the challenges of developing metacognitive skills. For example, Case and Gunstone (2002) state the following, “Many … authors have emphasized that metacognitive development is not easy to foster (e.g., Gunstone & Mitchell, 1998; White, 1998). Projects to enhance metacognition need to be long-term, and require a considerable energy input from both teachers and students.”

So, what will I do in the future? My plans are to more regularly and explicitly engage in discussion of the reading reflection prompts (and other metacognitive prompts) during class. By giving class time to such discussion and bringing the metacognitive processes into the open (rather than keeping them private because they are completed outside of class), I hope to signal the value of the processes and more directly support student exploration of new ways of thinking about learning. Importantly, I hope that this more public sharing will also keep me from falling back to a simple content focus when student performance isn’t what I’d like it to be. Ultimately, metacognitive development should enhance student learning, although it is likely to take longer to play out into changed learning behaviors. I need to avoid the “quick fix” of focusing on content. Thus, I plan to shape a new mold for myself and openly display it to my students. We’ll all be more likely to succeed if we are “all in” together.

——–

Nuhfer, E. (15 July 2014). Metacognition for Guiding Students to Awareness of Higher-level Thinking (Part 1). Improve with Metacognition. https://www.improvewithmetacognition.com/metacognition-for-guiding-students-to-awareness-of-higher-level-thinking-part-1/

Case, J. & Gunstone, R. (2002). Metacognitive Development as a Shift in Approach to Learning: An in-depth study. Studies in Higher Education 27(4), p. 459-470. DOI: 10.1080/0307507022000011561



How Do You Increase Your Student’s Metacognition?

Aaron S. Richmond

Metropolitan State University of Denver


How many times has a student come to you and said “I just don’t understand why I did so bad on the test?” or “I knew the correct answer but I thought the question was tricky.” or “I’ve read the chapter 5 times and I still don’t understand what you are talking about in class.”? What did you say or do for these students? Did it prompt you to wonder what you can do to improve your students’ metacognition? I know many of us at Improve with Metacognition (IwM) started pursuing research on metacognition because of these very experiences. As such, I have compiled a summary of some of the awesome resources IwM bloggers have posted (see below). These instructional strategies can generally be categorized as either self-contained lessons (that is, lessons that can teach some aspect of metacognition in one or two class sessions) or metacognitive instructional strategies that require an entire semester to implement.

Self-Contained Instructional Strategies

In Stephen Chew’s blog post, Metacognition and Scaffolding Student Learning, he suggests that one way to improve metacognitive awareness is through well-designed review sessions (Chew, 2015). First, Chew suggests that instructors incentivize active participation in study review sessions so that students benefit metacognitively. Second, he suggests that students should self-test before the review so that it is truly a review. Third, he recommends having students predict their exam scores based on their review performance and then reflect on their predictions after the exam.

Ed Nuhfer (2015) describes a way to increase metacognition through role-play. Ed suggests that we can use Edward De Bono’s Six Thinking Hats method to train our students to increase their metacognitive literacy. In essence, using this method we can train our students to think in a factual way (white hat), be positive and advocate for specific positions (yellow hat), be cautious (black hat), recognize all facets of our emotions (red hat), be provocative (green hat), and be reflective and introspective (blue hat). We can do this through several exercises in which students take turns wearing the different hats.

In David Westmoreland’s (2014) blog post, he discusses a classroom exercise to improve metacognition. David created a “metacognitive lab that attempts to answer the question How do you know?” In the lab, he presents small groups of students with a handful of “truth” statements (e.g., Eggs are fragile.). Students must then take a statement and justify (on the board) how it is true. The class then eliminates the justifications it knows not to be true. Finally, the students discuss with one another the process and why particular statements were eliminated.

Course Long Instructional Strategies

Chris Was (2014) investigated whether “variable weight-variable difficulty tests” would improve students’ calibration (i.e., knowing when you know something and knowing when you don’t). Chris has his students take several quizzes. In each quiz, students can weight each question for a varied number of points (e.g., question 1 is easy so I will give it 5 points, whereas question 4 is hard so I will only give it 2 points). Students then indicate whether they believe they got each question correct. After each quiz is graded, a teaching assistant goes over the quiz and discusses with the students why they weighted the questions the way they did and why they thought they would or would not get each question correct. Was found that this activity helped his students become better at knowing when they knew or did not know something.
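The bookkeeping behind this activity can be sketched in a few lines of code. This is a hypothetical illustration of the idea, not code from the study; the item format, weights, and function names are my own assumptions:

```python
# Each quiz item: (weight the student assigned, whether the student
# predicted they answered correctly, whether they actually did).

def weighted_score(items):
    """Fraction of available points earned, using student-assigned weights."""
    total = sum(weight for weight, _, _ in items)
    earned = sum(weight for weight, _, correct in items if correct)
    return earned / total

def calibration_accuracy(items):
    """Fraction of items where the student's prediction matched the outcome,
    i.e., how well the student knew what they knew."""
    matches = sum(1 for _, predicted, correct in items if predicted == correct)
    return matches / len(items)

# Hypothetical quiz: (weight, predicted_correct, actually_correct)
quiz = [(5, True, True), (3, True, False), (2, False, False), (4, True, True)]
print(weighted_score(quiz))        # earned 9 of 14 weighted points
print(calibration_accuracy(quiz))  # 3 of 4 predictions matched the outcome
```

Tracking the calibration number across quizzes, as the teaching-assistant discussions do in narrative form, would show whether a student is getting better at knowing when they know something.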

Similarly, Schumacher and Taraban (2015) discussed the use of the testing effect as a method to improve metacognition. They suggest that results for testing as an instructional method are mixed: when students were repeatedly tested and were exposed to questions on multiple exams, only low-achieving students benefited metacognitively.

John Draeger (2015) uses just-in-time teaching in an attempt to improve metacognition. John asks students metacognitive prompting questions (e.g., What is the most challenging part of the reading?) and has them submit their answers before coming to class. Although he has not measured the efficacy of this method, students have responded positively to the process.

Parting Questions to Further this Important Conversation

There are many other instructional methods used to increase student metacognition described throughout IwM that are both self-contained and semester long. Please check them out!

But even considering all of what has been presented in this blog and available on IwM, I couldn’t help but leave you with some unanswered questions that I myself have:

  1. What other instructional strategies have you used to increase student metacognition?
  2. If you were to choose between a self-contained or semester-long method, which one would you choose and why? Meaning, what factors would help you determine which method to use? Instructional goals? How closely it relates to course content? Time commitment? Level of student metacognitive knowledge? Level of course?
  3. Once you have chosen a self-contained or semester long method, how should implementation methods differ? That is, what are the best practices used when implementing a self-contained vs. semester long technique?
  4. Finally, instructional strategies for improving metacognition in higher education are often pulled from studies and experiments conducted in K-12 education. Are there any such studies you can think of that would be suitable for testing in higher education? If so, how and why?

References

Beziat, T. (2015). Goal monitoring in the classroom. Retrieved from https://www.improvewithmetacognition.com/goal-monitoring-in-the-classroom/

Chew, S. (2015). Metacognition and scaffolding student learning. Retrieved from https://www.improvewithmetacognition.com/metacognition-and-scaffolding-student-learning/

Draeger, J. (2015). Using just-in-time assignments to promote metacognition. Retrieved from https://www.improvewithmetacognition.com/using-just-in-time-assignments-to-promote-metacognition/

Nilson, L. B. (2015). Metacognition and specifications grading: The odd couple? Retrieved from https://www.improvewithmetacognition.com/metacognition-and-specifications-grading-the-odd-couple/

Nuhfer, E. (2015). Developing metacognitive literacy through role play: Edward De Bono’s six thinking hats. Retrieved from https://www.improvewithmetacognition.com/developing-metacognitive-literacy-through-role-play-edward-de-bonos-six-thinking-hats/

Schumacher, J., & Taraban, R. (2015). To test or not to test: That is the metacognitive question. Retrieved from https://www.improvewithmetacognition.com/to-test-or-not-to-test-that-is-the-metacognitive-question/

Was, C. (2014). Testing improves knowledge monitoring. Retrieved from https://www.improvewithmetacognition.com/testing-improves-knowledge-monitoring/

Westmoreland, D. (2014). Science and social controversy—A classroom exercise in metacognition. Retrieved from https://www.improvewithmetacognition.com/science-and-social-controversy-a-classroom-exercise-in-metacognition/



Metacognition and Scaffolding Student Learning

by Dr. Stephen Chew, Samford University, slchew@samford.edu

Scaffolding learning involves providing instructional support for students so that they can develop a greater understanding of a topic than they could on their own. The concept of scaffolding originated with the work of Vygotsky and was later developed by Bruner. Scaffolding is not simply giving students the answers, but helping students understand the chain of reasoning or evidence that leads to an answer. I argue that metacognition plays a crucial role in effective scaffolding. Without metacognitive awareness, attempts at scaffolding may only create overconfidence in students without any learning. Let’s examine a common scaffolding activity, review sessions for exams.

Early in my career I used to give review sessions until I realized that they weren’t being helpful to the students who needed them most. I gave students old exams to try to answer for their review. Since I change textbooks regularly, there were questions on the old exams on topics that weren’t covered in the current class. I thought the discrepancy would be obvious when students got to those questions, but only the very best students noticed. Most students answered the questions, basically by guessing, completely unaware that we had never covered the topic. In addition, many students would simply read the question and then check the answer to see if they had guessed correctly without trying to reason through the question or using it as an indicator of their degree of understanding. I realized that students hadn’t studied the material before the review session. They were using the session as a substitute for actually studying. Just going through the review session increased their (false) confidence that they had studied without increasing their learning. It was my first encounter with poor metacognition. The issue with a lot of the struggling students wasn’t the content, but their metacognition and study skills, which my review sessions weren’t addressing. So I stopped doing them.

In recent years, though, I’ve thought about bringing them back with changes to address poor metacognition. First, we know that students who most need review sessions are least likely to think they need them, so I would somehow require participation. This is one reason why I believe that brief formative assessments in class, where everyone has to participate, are better than separate, voluntary review sessions. If I were to reinstate separate review sessions, I might make participation worth a small portion of the exam grade. Second, I would somehow require that students had done their best to study for the exam BEFORE coming to the review session so it is truly a review. Third, the review session would have to induce students to use good study strategies, such as self-testing with feedback and reflection, or interleaving. I might require students to generate and post three good questions they want answered about the material as their entry ticket to the review session. This would require students to review the material beforehand, and question generation is itself an effective learning strategy. Finally, I would require students to use the feedback from the review to recognize their level of understanding and what they need to do to improve. I might have them predict their exam grade based on their review performance. All of these changes should increase student metacognition. I’m sure I’d have to experiment with the format, and my solution may not work for other classes or faculty. It’s never a simple matter of whether an activity such as a review session is a good or bad idea; what matters is how it is implemented.

Without metacognitive awareness, scaffolding can backfire. Consider how poor metacognition can undermine other scaffolding activities such as releasing PowerPoint slides of lectures, guided note taking, allowing a formula “cheat sheet” in STEM classes, and allowing students to discard a certain number of exam items they think they got wrong. If students lack metacognition, each of these activities can actually be counterproductive for student learning.


Supports and Barriers to Students’ Metacognitive Development in a Large Intro Chemistry Course

by Ashley Welsh, Postdoctoral Teaching & Learning Fellow, Vantage College

First off, I must admit that this blog post has been a long time coming. I was fortunate enough to meet both John Draeger and Lauren Scharff at the ISSOTL conference in Quebec City in October of 2014. Their “Improving With Metacognition” (IWM) poster was a beacon for someone such as myself who is engaged with metacognition in both my teaching and research. I was thrilled to know there were individuals creating and contributing to a repository of literature and reflections surrounding metacognition. This past January, John asked me to contribute a blog post to the website; however, I thought it best to defer my writing until after the completion of my PhD this past spring. Thus, here I am. Ready to write.

For the past 7 years I have been actively engaged with undergraduate science education and research at the University of British Columbia (UBC). Within my research and teaching, I have become increasingly aware of students’ concerns with developing and adapting the appropriate study habits/strategies for success in their introductory courses. This concern was echoed by several of my colleagues teaching large (300+ students/section) introductory math and science courses.

This growing concern led me to explore students’ metacognitive development in two sections of a large, second-year introductory organic chemistry course for biological science majors (~245 students/section). Across the literature and at UBC, this course has a reputation as a challenging, cumulative course in which students often fail to develop meaningful learning strategies and fall behind (Grove & Bretz, 2012; Lynch & Trujillo, 2011; Zhao et al., 2014). As a result of its reputation, the instructor with whom I was working designed several formative assessments (e.g., bi-weekly in-class quizzes, written reflections), scaffolded in-class activities (e.g., targeted study strategy readings and discussion), and workshops to improve students’ learning strategies; that is, to improve their ability to control, monitor, evaluate, and plan their learning processes (Anderson & Nashon, 2007; Thomas, 2012). Despite students’ high completion of these targeted activities/homework, many still seemed to be struggling with how to study effectively. As such, we were curious to understand the barriers and supports for students’ metacognitive development in this particular course.

My research adopted an interpretive case study approach (Creswell, 2009; Stake, 1995) with data being collected via a pre/post metacognitive instrument, a student feedback survey, classroom observations, and student interviews. At this point in time I will not get into the nitty gritty details of my thesis, but instead, will draw on a few of the main observations/themes that emerged from my work.

  1. High-stakes assessments may overshadow resources designed for metacognitive development: Within this course, students placed considerable emphasis on high-stakes assessment as a means for studying, learning, and reflection. Despite students perceiving the formative assessment measures (e.g., in-class quizzes, homework assignments, targeted study strategy activities) as useful to their learning, the majority of them identified the midterm and final examinations as the drivers of their studying and behaviours. The examinations were worth roughly 75% of students’ grades, and as such, students expressed being more concerned with their performance on these high-stakes assessments than with their own study strategies. Students indicated that because the formative activities and workshops were only worth about 15% of their grade, they rarely reflected back on these resources or implemented the advised learning strategies. While these resources were designed to provide ongoing feedback on students’ learning strategies and performance, students mentioned that their performance on the first midterm exam was the primary crossroad at which they would explicitly reflect upon their learning strategies. As one student mentioned, “The midterm is the first major point at which you realize you didn’t understand things”. Unfortunately this was often too late in the semester for most students to effectively change their strategies.
  2. The majority of students reported difficulty implementing metacognitive strategies for enhanced learning: While many students were aware of their weaknesses and lack of concentration when studying, they still struggled with effectively monitoring, evaluating and planning their learning. One student mentioned that “while I do study hard, I don’t think I study smart”. Even when students were aware of their issues, implementing change was difficult as they weren’t exactly sure what to do. Despite the instructor modeling effective strategies and providing multiple opportunities for students to reflect on their learning, several students had difficulty with acknowledging, recognizing, or implementing this advice. Students unanimously praised the efforts of the instructor and the multiple resources she created to support their learning, but outside of class, students often struggled with staying on task or changing their behaviours/attitudes. Some students mentioned they were more concerned with getting a question right than with understanding the problem solving process or with implementing the appropriate strategies for learning. The majority of students I spoke to indicated that throughout their education they had rarely received explicit advice about how to study and some even mentioned that despite writing down the advice they received in class, they were “far too lazy to change”. With learning strategies not taking a primary role in their previous and current education, it’s not surprising that most students found it difficult to implement appropriate strategies for learning.
  3. Students emphasized the importance of gaining awareness of oneself as a learner and seeking help from others: While students acknowledged that the demanding course material and high-stakes assessments were barriers to their learning, they also noted the critical influence that their own strategies and abilities as learners had on their experience and performance. Some students viewed their own stubbornness or personal issues as reasons why they were “too lazy to change” or more likely to “stick with what I already know. Like memorizing and cramming”. When asked to provide advice for incoming students, all of the students I interviewed (n=26) mentioned the necessity for students to “know yourself and what suits you best. And change it – experiment with it. Know how you study. Know that.” This comment was echoed by several students who emphasized the need for every student to be aware of their weaknesses as learners and to actively and immediately seek help from others when concerned or confused. Students who exhibited effective learning strategies were more likely to attend office hours, to create study groups, and to implement and evaluate the instructor’s study advice. Furthermore, these students could explicitly articulate the strategies they used for studying and could identify which course resources were most influential to their learning approaches.

The three themes described above are only a snapshot of some of the issues unveiled within my doctoral research. They have led me to consider more research that could explore:

  • How increasing the weight (percentage of the final grade) of the formative assessment/activities relative to the high-stakes examinations might impact students’ learning strategies/behaviours;
  • How to appropriately shift students’ fixations on grades to that of understanding and learning;
  • How we might better support students in seeing value in activities, resources, or low-stakes assessment that have been designed to support them as metacognitive, confident learners; and
  • How we might achieve these assessment and learning goals in large, introductory science courses.

I look forward to any comments/questions you have on this topic!

-Ashley

——————————–

Anderson, D., & Nashon, S. (2007). Predators of knowledge construction: Interpreting students’ metacognition in an amusement park physics program. Science Education, 91(2), 298-320. doi: 10.1002/sce.20176

Creswell, J. W. (2009). Research design, qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage.

Grove, N. P., & Bretz, S. L. (2012). A continuum of learning: from rote memorization to meaningful learning in organic chemistry. Chemistry Education Research and Practice, 13, 201-208.

Lynch, D. J., & Trujillo, H. (2011). Motivational beliefs and learning strategies in organic chemistry. International Journal of Science and Mathematics Education, 9, 1351-1365.

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Thomas, G. (2012). Metacognition in science education: Past, present, and future considerations. In B. J. Fraser, K. Tobin & C. J. McRobbie (Eds.), Second International Handbook of Science Education (pp. 131-144): Springer International Handbooks of Education.

Zhao, N., Wardeska, J. G., McGuire, S. Y., & Cook, E. (2014). Metacognition: An effective tool to promote success in college science learning. Journal of College Science Teaching, 43(4), 48-54.


Habits of Mind

by Arthur L. Costa, Ed. D. (Professor Emeritus, California State University, Sacramento). This paper summarizes 16 attributes of what human beings do when they behave intelligently, referred to as Habits of Mind. Metacognition is the 5th mentioned (see a nice summary of all 16 on the final page). Dr. Costa points out that these “Habits of Mind transcend all subject matters commonly taught in school. They are characteristic of peak performers whether they are in homes, schools, athletic fields, organizations, the military, governments, churches or corporations.”


To Test or Not to Test: That is the Metacognitive Question

by John Schumacher & Roman Taraban at Texas Tech University

In prepping for upcoming classes, we are typically interested in how best to structure the class to promote the most effective learning. Applying best-practice recommendations from the literature, we try to implement active learning strategies that go beyond simple lecturing. One such strategy that research has found to be effective is the use of testing. The inference to draw from the research literature is quite simple: test students frequently, informally, and creatively, over and above standard course tests, like a mid-term and final. Testing is a useful assessment tool, but research has shown that it is also a learning tool that promotes learning above and beyond simply rereading material (Roediger & Karpicke, 2006a). This is called the testing effect. In controlled studies, researchers have shown testing effects with a variety of materials, including expository texts and multimedia presentations (e.g., Carrier & Pashler, 1992; Huff, Davis, & Meade, 2013; Johnson & Mayer, 2009; Roediger & Karpicke, 2006b). Testing has been found to increase learning when implemented in a classroom setting (McDaniel, Anderson, Derbish, & Morrisette, 2007) and is a useful learning tool for people of all ages (Meyer & Logan, 2013). The theoretical explanation for the benefits of testing is that testing strengthens retrieval paths to the stored information in memory more so than simply rereading the material. Therefore, later on a person can more effectively recover the information from memory.

Although implementing testing and other active learning strategies in the classroom is useful in guiding and scaffolding student learning, it is important that we develop an understanding of when and for whom these strategies are most helpful. Specifically, regarding testing, research from our lab and others is starting to show that testing may not always be as beneficial as past research suggests. Characteristics of the students themselves may nullify or even reverse the benefits of testing. Thus, the first question we address is whether frequent classroom testing will benefit all students. A more pertinent question, and our second, is whether frequent testing develops metacognitive practices in students. We will discuss these in turn.

In a formal study of the testing effect, or in an informal test in any classroom, one needs two conditions: a control condition in which participants study the material on their own for a fixed amount of time, and an experimental condition in which participants study and are tested over the material, for instance in a Study-Test-Study-Test format. Both groups spend an equal amount of time either simply studying or studying and testing. All participants take a final recall test over the material. Through a series of testing-effect studies incorporating expository texts as the learning material, we have produced a consistent grade-point average (GPA) by testing-effect interaction. That is, the benefits of testing (i.e., better later retrieval of information) depend on students’ GPAs! A closer look at this interaction showed us that students with low GPAs benefited most from the implementation of testing, whereas mid- to high-GPA students benefited just as much by simply studying the material.
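To make the shape of such a GPA by testing-effect interaction concrete, here is a minimal sketch. All recall scores below are invented for illustration (they are not the authors’ data); they simply mirror the reported pattern of a testing benefit for low-GPA students and little or none for high-GPA students.

```python
# Hypothetical final-recall proportions for a study-only control condition
# vs. a Study-Test-Study-Test condition, split by GPA group.
# All numbers are invented for illustration only.
scores = {
    ("low GPA",  "study only"): [0.40, 0.35, 0.45],
    ("low GPA",  "study-test"): [0.55, 0.60, 0.50],
    ("high GPA", "study only"): [0.70, 0.65, 0.75],
    ("high GPA", "study-test"): [0.70, 0.72, 0.68],
}

def mean(xs):
    return sum(xs) / len(xs)

# Cell means: an attenuation like this is what the interaction looks like.
for (gpa, condition), xs in scores.items():
    print(f"{gpa:9s} | {condition:10s} | mean recall = {mean(xs):.2f}")

# The testing benefit within each GPA group; a nonzero gap between these
# two benefits is the GPA x testing-effect interaction.
low_benefit = (mean(scores[("low GPA", "study-test")])
               - mean(scores[("low GPA", "study only")]))
high_benefit = (mean(scores[("high GPA", "study-test")])
                - mean(scores[("high GPA", "study only")]))
```

With these made-up numbers, the low-GPA group gains about .15 in recall from testing while the high-GPA group gains essentially nothing, which is the pattern the studies above describe.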

While at this preliminary stage it is difficult to ascertain why exactly low GPA students benefit from testing in our experiments while others do not, a few observations can be put forth. First, at the end of the experiments, we asked participants to report any strategies they used on their own to help them learn the materials. Metacognitive reading strategies that the participants reported included focusing on specific aspects of the material, segmenting the material into chunks, elaborating on the material, and testing themselves. Second, looking further into the students’ self-reports of metacognitive strategy use, we found that participants in the medium to high GPA range used these strategies often, while low GPA students used them less often. Simply, the self-regulated use of metacognitive strategies was associated with higher GPAs and better recall of the information in the texts that the participants studied. Lower GPA students benefited when the instructor deliberately imposed self-testing.

These results are interesting because they indicate that the classroom implementation of testing may be beneficial mainly to low-achieving students, who either do not have metacognitive strategies at their disposal or are not applying those strategies. High-achieving students may already have metacognitive strategies at their disposal and may not need the extra guidance set in place by the instructor.

Another explanation for the GPA by testing-effect interaction may simply be motivation. Researchers have found that GPA correlates with motivation (Mitchell, 1992). It is possible that implementing a learning strategy is beneficial to low-GPA students because it forces them to work with the material. Motivation may also explain why GPA correlated with metacognitive strategy use: if lower-GPA students are less motivated to work with the material, it stands to reason that they would be less likely to employ learning strategies that take time and effort.

This leads to our second question: Does frequent testing develop metacognitive skills in students, particularly self-regulated self-testing? This is a puzzle that we cannot answer from the current studies. Higher-GPA students appear to understand the benefits of applying metacognitive strategies and do not appear to need additional coaxing from the experimenter/teacher to apply them. Will imposing self-testing, or any other strategy, on lower-GPA students lead them eventually to adopt these strategies on their own? This is an important question and one that deserves future attention.

While testing may be useful for bolstering learning, we suggest that it should not be blindly utilized in the classroom as a learning tool. A consideration of what is being taught and to whom will dictate the effectiveness of testing as a learning tool. As we have suggested, more research also needs to be done to figure out how to bring metacognitive strategies into students’ study behaviors, particularly low-GPA students.

References

Carrier, M., & Pashler, H. (1992). The influence of retrieval on retention. Memory & Cognition, 20(6), 633-642.

Huff, M. J., Davis, S. D., & Meade, M. L. (2013). The effects of initial testing on false recall and false recognition in the social contagion of memory paradigm. Memory & Cognition, 41(6), 820-831.

Johnson, C. I., & Mayer, R. E. (2009). A testing effect with multimedia learning. Journal of Educational Psychology, 101(3), 621-629.

McDaniel, M. A., Anderson, J. L., Derbish, M. H., & Morrisette, N. (2007). Testing the testing effect in the classroom. European Journal of Cognitive Psychology, 19(4-5), 494-513.

Meyer, A. D., & Logan, J. M. (2013). Taking the testing effect beyond the college freshman: Benefits for lifelong learning. Psychology and Aging, 28(1), 142-147.

Mitchell Jr., J. V. (1992). Interrelationships and predictive efficacy for indices of intrinsic, extrinsic, and self-assessed motivation for learning. Journal of Research and Development in Education, 25(3), 149-155.

Roediger, H., & Karpicke, J. D. (2006a). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1(3), 181-210.

Roediger, H., & Karpicke, J. D. (2006b). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255.


Using Just-in-Time assignments to promote metacognition

by John Draeger (SUNY Buffalo State)

In a previous post entitled “Just-in-time for metacognition,” I argued that Just-in-Time teaching techniques could be used to promote both higher-order thinking and metacognition. Just-in-Time teaching techniques require that students submit short assignments prior to class for review by the instructor before class begins (Novak et al., 1999; Simkins & Maier, 2009; Scharff et al., 2011). In my philosophy courses, students send their answers to me electronically the night before class and I spend the morning of class using their answers to shape my pre-class planning. I’ve had success with higher-order-thinking questions, but I tended to ask students questions about their learning process only when the class had clearly gone off track. Since I’ve become convinced that developing good metacognitive habits requires practice, I’ve made metacognitive questions a regular component of my Just-in-Time assignments. In this post, I thought I would let you know how things are going.

Research shows that students learn more effectively when they are aware of their own learning process (I encourage you to surf around this site for examples). Borrowing from Tanner (2012) and Scharff (2014), I have asked students to think about why and how they engage in various learning strategies (e.g., reading, writing, reflecting). More specifically, I have asked: What was the most challenging part of the reading? Was the current reading more challenging than the last? What was the most useful piece of the reading? What was your reading strategy this week? How might you approach the reading differently next time? What was the most challenging part of the last writing assignment? How might you approach your next writing assignment differently? What are your learning goals for the week?

Responses from students at all levels have been remarkably similar. In particular, student responses fall into three broad categories: general commentary (e.g., about the course, the reading, or a particular assignment), content (e.g., students reframe the metacognitive question and answer it using course content), and reflective practice (e.g., students actually reflect on their learning process).

First Type of Response: General Commentary

  • When asked to describe the most challenging part of the reading, students took the opportunity to observe that the reading was too long, too boring, or interesting but confusing.
  • When asked to describe the most useful part of the reading, students often said that the question was difficult to answer because the reading was too long, too boring, or it was interesting but confusing.
  • When asked about their reading strategy, students observed that they did their best but the reading was too long, too boring, or interesting but confusing.
  • When asked about their learning goals for the week, students said that the question was strange, even off the wall, and that they had never been asked such a thing before.

Second Type of Response: Content

  • When asked to describe the most challenging part of the reading, students identified particular examples that were hard to follow and claims that seemed dubious.
  • When asked to describe the most useful part of the reading, students often restated the central question of the week (e.g., is prostitution morally permissible? should hate speech be restricted?) or summarized big issues (e.g., the liberty argument for the permissibility of prostitution or hate speech).
  • When asked about their reading strategy, students often said that they wanted to understand a particular argument for that day (e.g., abortion, euthanasia, prostitution).
  • When asked their learning goal for the week, students said that they wanted to explore a big question (e.g., the nature of liberty or equality) and put philosophers into conversation (this is a major goal in all my courses).

Third Type of Response: Reflective practice

  • When asked to describe the most challenging part of the reading, students said that they didn’t give themselves enough time, they stretched it over multiple days, or they didn’t do it at all.
  • When asked about the most useful part of the reading, some students said that the reading forced them to challenge their own assumptions (e.g., “I always figured prostitution was disgusting, but maybe not”).
  • When asked about their reading strategies, some said that they had to read the material several times. Some said they skimmed the reading and hoped they could piece it together in class. Others found writing short summaries to be essential.
  • When asked about their learning goals for the week, some students reported wanting to become more open-minded and more tolerant of people with differing points of view.

Responses to the metacognitive prompts have been remarkably similar from students in my freshman- to senior-level courses. In contrast, there is a marked difference by class year in responses to higher-order thinking prompts, possibly because I regularly use student responses to higher-order thinking prompts to structure class discussion. While I gave students some feedback on their metacognitive prompt responses, in the future I could be more intentional about using their responses to structure discussions of the student learning process.

I also need to refine my metacognition-related pre-class questions. For example, asking students to discuss the most challenging part of a reading assignment encourages students to reflect on roadblocks to understanding. The question is open-ended in a way that allows students to locate the difficulty in a particular bit of content, a lack of motivation, or a deficiency in reading strategy. However, if I want them to focus on their learning strategies, then I need to focus the question in ways that prompt that sort of reflection. For example, I could reword the prompt as follows: Identify one challenging passage in the reading this week. Explain why you believe it was difficult to understand. Discuss what learning strategy you used, how you know whether the strategy worked, and what you might do differently next time. Revising the questions so that they have a more explicitly metacognitive focus is especially important given that students are often unfamiliar with metacognitive reflection. If I can be more intentional about how I promote metacognition in my courses, then perhaps there can be gains in the metacognitive awareness demonstrated by my students. I’ll keep you posted.

References

Novak, G., Patterson, E., Gavrin, A., & Christian, W. (1999). Just-in-time teaching: Blending active learning with web technology. Upper Saddle River, NJ: Prentice Hall.

Scharff, L. “Incorporating Metacognitive Leadership Development in Class.” (2014). Retrieved from https://www.improvewithmetacognition.com/incorporating-metacognitive-leadership-development-in-class/.

Scharff, L., Rolf, J., Novotny, S., & Lee, R. (2011). Factors impacting completion of pre-class assignments (JiTT) in Physics, Math, and Behavioral Sciences. In C. Rust (Ed.), Improving Student Learning: Improving Student Learning Global Theories and Local Practices: Institutional, Disciplinary and Cultural Variations. Oxford Brookes University, UK.

Simkins, S., & Maier, M. (2009). Just-in-time teaching: Across the disciplines, across the academy. Stylus Publishing, LLC.

Tanner, K. D. (2012). Promoting student metacognition. CBE-Life Sciences Education, 11(2), 113-120.


Executive Function: Can Metacognitive Awareness Training Improve Performance?

by Antonio Gutierrez, Georgia Southern University

In a recent meta-analysis of 67 research studies that utilized an intervention targeted at enhancing metacognitive awareness, Jacob and Parkinson (in press) argue that metacognitive interventions aimed at improving executive function processes are not as effective at improving student achievement as scholars and practitioners alike once believed. In essence, the evidence in support of robust effects of these types of interventions on achievement is inconclusive. While descriptive research studies continue to report high associations between metacognitive awareness and performance or achievement measures, Jacob and Parkinson argue that the experimental evidence supporting a strong role of metacognitive training in improving student performance is scant. I have recently pondered a similar dilemma with research on the effect of metacognitive monitoring training on students’ performance and confidence judgments, and especially on calibration. The literature on these topics converges on the finding that metacognitive monitoring training improves performance and confidence in performance judgments, but not necessarily calibration (see, e.g., Bol et al., 2005; Gutierrez & Schraw, 2015; Hacker et al., 2008).

While Jacob and Parkinson’s meta-analysis is illuminating, I wonder whether, like the calibration literature, the conclusion that executive function interventions are not as effective at improving achievement may be due to very different conceptualizations of the constructs under investigation. In the case of calibration, the mixed findings may be due to the fact that the metacognitive monitoring interventions were not likely targeting the same thing. For instance, some interventions may have been targeting a reduction in calibration errors (overconfidence and underconfidence), others may have been targeting improvement in calibration accuracy, whereas yet others may have been targeting both, whether intentionally or unintentionally. Because these interventions were targeting different aspects of calibration, it could be that the inconclusive findings were due to a confounding of these various dimensions of calibration … comparing apples to oranges, if you will. Could the lack of robust effects of executive function interventions on achievement be due to a similar phenomenon? What if these studies were not targeting the same executive function processes, in which case they would not be as directly comparable as they appear at first glance? Jacob and Parkinson’s (in press) study may lead some to believe that there is little to be gained in investing time and effort in executive function interventions. However, before we abandon these interventions, perhaps we should consider developing executive function interventions that are more specific and finer grained, for instance by targeting very specific aspects of executive function rather than taking a more general approach.
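The distinction above between reducing calibration error (over/underconfidence) and improving calibration accuracy can be made concrete with two standard measures. The sketch below is illustrative only: the function names and the per-item confidence and performance values are mine, not taken from any of the cited studies.

```python
# Two calibration measures that interventions may conflate:
# signed bias (over/underconfidence) vs. absolute accuracy (unsigned error).
# Confidence and performance are per-item proportions in [0, 1];
# the numbers are invented for illustration.
confidence  = [0.9, 0.8, 0.7, 0.6]   # how sure the student was on each item
performance = [0.6, 0.7, 0.7, 0.8]   # how well the student actually did

def calibration_bias(conf, perf):
    """Mean signed difference: > 0 means overconfident, < 0 underconfident."""
    return sum(c - p for c, p in zip(conf, perf)) / len(conf)

def absolute_accuracy(conf, perf):
    """Mean unsigned difference: 0 means perfectly calibrated."""
    return sum(abs(c - p) for c, p in zip(conf, perf)) / len(conf)

bias = calibration_bias(confidence, performance)    # slight overconfidence
error = absolute_accuracy(confidence, performance)  # average miss per item
```

Note that the two measures can dissociate: a student whose confidences are [0.9, 0.5] against performance of [0.7, 0.7] has a bias of zero (the over- and underconfidence cancel) but an absolute error of 0.2, which is one way an intervention could shrink one measure without moving the other.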

References
Bol, L., Hacker, D. J., O’Shea, P., & Allen, D. (2005). The influence of overt practice, achievement level, and explanatory style on calibration accuracy, and performance. The Journal of Experimental Education, 73, 269-290.

Gutierrez, A. P., & Schraw, G. (2015). Effects of strategy training and incentives on students’ performance, confidence, and calibration. The Journal of Experimental Education: Learning, Instruction, and Cognition. Advance online publication. doi: 10.1080/00220973.2014.907230

Hacker, D. J., Bol, L., & Bahbahani, K. (2008). Explaining calibration accuracy in classroom contexts: The effects of incentives, reflection, and explanatory style. Metacognition Learning, 3, 101-121.

Jacob, R., & Parkinson, J. (in press). The potential for school-based interventions that target executive function to improve academic achievement: A review. Review of Educational Research. Advance online publication. doi: 10.3102/0034654314561338


Who says Metacognition isn’t Sexy?

By Michael J. Serra at Texas Tech University

This past Sunday, you might have watched “The 87th Academy Awards” (i.e., “The Oscars”) on television. Amongst the nominees for the major awards were several films based on true events and real-life people, including two films depicting key events in the lives of scientists Stephen Hawking (The Theory of Everything) and Alan Turing (The Imitation Game).

There are few things in life that I am sure of, but one thing I can personally guarantee is this: No film studio will ever make a motion picture about the life of your favorite metacognition researcher. Believe it or not, the newest issue of Entertainment Weekly does not feature leaked script details about an upcoming film chronicling how J. T. Hart came up with the idea to study people’s feelings of knowing (Hart, 1967), and British actors are not lining up to depict John Flavell laying down the foundational components for future theory and research on metacognition (Flavell, 1979). Much to my personal dismay, David Fincher hasn’t returned my calls regarding the screenplay I wrote about that time Thomas Nelson examined people’s judgments of learning at extreme altitudes on Mt. Everest (Nelson et al., 1990).

Just as film studios seem to lack interest in portraying metacognition research on the big screen, our own students sometimes seem uninterested in anything we might tell them about metacognition. Even the promise of improving their grades sometimes doesn’t seem to interest them! Why not?

One possibility, as I found out from a recent blog post by organic-chemistry professor and tutor “O-Chem Prof,” is that the term “metacognition” might simply not be sexy to our students (O-Chem Prof, 2015). He suggests that we instead refer to the concept as “sexing up your noodle.”

Although the idea of changing the name of my graduate course on the topic to “PSY 6969: Graduate Seminar in Sexing-up your Noodle” is highly tempting, I do not think that the problem is completely one of branding or advertising. Rather, regardless of what we call metacognition (or whether or not we even put a specific label on it for our students), there are other factors that we know play a crucial role in whether or not students will actually engage in self-regulated learning behaviors such as the metacognitive monitoring and control of their learning. Specifically, Pintrich and De Groot (1990; see Miltiadou & Savenye, 2003 for a review) identified three major factors that determine students’ motivation to learn that I suggest will also predict their willingness to engage in metacognition: value, expectancy, and affect.

The value component predicts that students will be more interested and motivated to learn about topics that they see value in learning. If they are struggling to learn a valued topic, they should be motivated to engage in metacognition to help improve their learning about it. A wealth of research demonstrates that students’ values and interest predict their motivation, learning, and self-regulation behaviors (e.g., Pintrich & De Groot, 1990; Pintrich et al., 1994; Wolters & Pintrich, 1998; for a review, see Schiefele, 1991). Therefore, when students do not seem to care about engaging in metacognition to improve their learning, it might not be that metacognition is not “sexy” to them; it might be that the topic itself (e.g., organic chemistry) is not sexy to them (sorry, O-Chem Prof!).

The expectancy component predicts that students will be more motivated to engage in self-regulated learning behaviors (e.g., metacognitive control) if they believe that their efforts will have positive outcomes (and won’t be motivated to do so if they believe their efforts will not have an effect). Some students (entity theorists) believe that they cannot change their intelligence through studying or practice, whereas other students (incremental theorists) believe that they can improve their intelligence (Dweck et al., 1995; see also Wolters & Pintrich, 1998). Further, entity theorists tend to rely on extrinsic motivation and to set performance-based goals, whereas incremental theorists tend to rely on intrinsic motivation and to set mastery-based goals. Compared to entity theorists, students who are incremental theorists earn higher grades and are more likely to persevere in the face of failure or underperformance (Duckworth & Eskreis-Winkler, 2013; Dweck & Leggett, 1988; Romero et al., 2014; see also Pintrich, 1999; Sungur, 2007). Fortunately, interventions have been successful at changing students to an incremental mindset, which in turn improves their learning outcomes (Aronson et al., 2002; Blackwell et al., 2007; Good et al., 2003; Hong et al., 1999).

The affective component predicts that students will be hampered by negative thoughts about learning or anxiety about exams (e.g., stereotype threat; test anxiety). Unfortunately, past research indicates that students who experience test anxiety will struggle to regulate their learning and ultimately end up performing poorly despite their efforts to study or to improve their learning (e.g., Bandura, 1986; Pintrich & De Groot, 1990; Pintrich & Schunk, 1996; Wolters & Pintrich, 1998). These students in particular might benefit from instruction on self-regulation or metacognition, as they seem to be motivated and interested to learn the topic at hand, but are too focused on their eventual test performance to study efficiently. At least some of this issue might be improved if students adopt a mastery mindset over a performance mindset, as increased learning (rather than high grades) becomes the ultimate goal. Further, adopting an incremental mindset over an entity mindset should reduce the influence of beliefs about lack of raw ability to learn a given topic.

In summary, although I acknowledge that metacognition might not be particularly “sexy” to our students, I do not think that is the reason our students often seem uninterested in engaging in metacognition to help them understand the topics in our courses or to perform better on our exams. If we want our students to care about their learning in our courses, we need to make sure that they feel the topic is important (i.e., that the topic itself is sexy), we need to provide them with effective self-regulation strategies or opportunities (e.g., elaborative interrogation, self-explanation, or interleaved practice questions; see Dunlosky et al., 2013) and help them feel confident enough to employ them, we need to work to reduce test anxiety at the individual and group/situation level, and we need to convince our students to adopt a mastery (incremental) mindset about learning. Then, perhaps, our students will find metacognition to be just as sexy as we think it is.


References

Aronson, J., Fried, C. B., & Good, C. (2002). Reducing the effects of stereotype threat on African American college students by shaping theories of intelligence. Journal of Experimental Social Psychology, 38, 113-125. doi:10.1006/jesp.2001.1491

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Blackwell, L. S., Trzesniewski, K. H., & Dweck, C. S. (2007). Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention. Child Development, 78, 246-263. doi: 10.1111/j.1467-8624.2007.00995.x

Duckworth, A., & Eskreis-Winkler, L. (2013). True Grit. Observer, 26. http://www.psychologicalscience.org/index.php/publications/observer/2013/april-13/true-grit.html

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14, 4-58. doi: 10.1177/1529100612453266

Dweck, C. S., Chiu, C. Y., & Hong, Y. Y. (1995). Implicit theories and their role in judgments and reactions: A world from two perspectives. Psychological Inquiry, 6, 267-285. doi: 10.1207/s15327965pli0604_1

Dweck, C. S., & Leggett, E. L. (1988). A social-cognitive approach to motivation and personality. Psychological Review, 95, 256-273. doi: 10.1037/0033-295X.95.2.256

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911. doi: 10.1037/0003-066X.34.10.906

Good, C., Aronson, J., & Inzlicht, M. (2003). Improving adolescents’ standardized test performance: An intervention to reduce the effect of stereotype threat. Applied Developmental Psychology, 24, 645-662. doi: 10.1016/j.appdev.2003.09.002

Hart, J. T. (1967). Memory and the memory-monitoring process. Journal of Verbal Learning and Verbal Behavior, 6, 685-691. doi: 10.1016/S0022-5371(67)80072-0

Hong, Y., Chiu, C., Dweck, C. S., Lin, D., & Wan, W. (1999). Implicit theories, attributions, and coping: A meaning system approach. Journal of Personality and Social Psychology, 77, 588-599. doi: 10.1037/0022-3514.77.3.588

Miltiadou, M., & Savenye, W. C. (2003). Applying social cognitive constructs of motivation to enhance student success in online distance education. AACE Journal, 11, 78-95. http://www.editlib.org/p/17795/

Nelson, T. O., Dunlosky, J., White, D. M., Steinberg, J., Townes, B. D., & Anderson, D. (1990). Cognition and metacognition at extreme altitudes on Mount Everest. Journal of Experimental Psychology: General, 119, 367-374.

O-Chem Prof. (2015, Jan 7). Our Problem with Metacognition is Not Enough Sex. [Web log]. Retrieved from http://phd-organic-chemistry-tutor.com/our-problem-with-metacognition-not-enough-sex/

Pintrich, P. R. (1999). The role of motivation in promoting and sustaining self-regulated learning. International Journal of Educational Research, 31, 459-470. doi: 10.1016/S0883-0355(99)00015-4

Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33-40. doi: 10.1037/0022-0663.82.1.33

Pintrich, P. R., Roeser, R., & De Groot, E. V. (1994). Classroom and individual differences in early adolescents’ motivation and self-regulated learning. Journal of Early Adolescence, 14, 139-161. doi: 10.1177/027243169401400204

Pintrich, P. R., & Schunk D. H. (1996). Motivation in education: Theory, research, and applications. Englewood Cliffs, NJ: Merrill/Prentice Hall.

Romero, C., Master, A., Paunesku, D., Dweck, C. S., & Gross, J. J. (2014). Academic and emotional functioning in middle school: The role of implicit theories. Emotion, 14, 227-234. doi: 10.1037/a0035490

Schiefele, U. (1991). Interest, learning, and motivation. Educational Psychologist, 26, 299-323. doi: 10.1080/00461520.1991.9653136

Sungur, S. (2007). Modeling the relationships among students’ motivational beliefs, metacognitive strategy use, and effort regulation. Scandinavian Journal of Educational Research, 51, 315-326. doi: 10.1080/00313830701356166

Wolters, C. A., & Pintrich, P. R. (1998). Contextual differences in student motivation and self-regulated learning in mathematics, English, and social studies classrooms. Instructional Science, 26, 27-47. doi: 10.1023/A:1003035929216


Self-Assessment: It’s a Good Thing to Do

by Stephen Fleisher, CSU Channel Islands

McMillan and Hearn (2008) stated persuasively that:

In the current era of standards-based education, student self-assessment stands alone in its promise of improved student motivation and engagement, and learning. Correctly implemented, student self-assessment can promote intrinsic motivation, internally controlled effort, a mastery goal orientation, and more meaningful learning (p. 40).

In her study of three meta-analyses of medical students’ self-assessment, Blanch-Hartigan (2011) reported that self-assessments did prove to be fairly accurate, as well as improving in later years of study. She noted that if we want to increase our understanding of self-assessment and facilitate its improvement, we need to attend to a few matters. To understand the causes of over- and underestimation, we need to address direction in our analyses (using paired comparisons) along with our correlational studies. We also need to examine key moderators affecting self-assessment accuracy, for instance “how students are being inaccurate and who is inaccurate” (p. 8). Further, how our self-assessment questions are worded and aligned with the criteria and nature of our performance questions is essential to accurately understanding these relationships.

When we establish strong and clear relationships between our self-assessment and performance questions for our students, we facilitate their use of metacognitive monitoring (self-assessment, and attunement to progress and achievement), metacognitive knowledge (understanding how their learning works and how to improve it), and metacognitive control (changing efforts, strategies or actions when required). As instructors, we can then also provide guidance when performance problems occur, reflecting on students’ applications and abilities with their metacognitive monitoring, knowledge, and control.

Self-Assessment and Self-Regulated Learning

For Pintrich (2000), self-regulating learners set goals, and activate prior cognitive and metacognitive knowledge. These goals then serve to establish criteria against which students can self-assess, self-monitor, and self-adjust their learning and learning efforts. In monitoring their learning process, skillful learners make judgments about how well they are learning the material, and eventually they become better able to predict future performance. These students can attune to discrepancies between their goals and their progress, and can make adjustments in learning strategies for memory, problem solving, and reasoning. Additionally, skillful learners tend to attribute low performance to low effort or ineffective use of learning strategies, whereas less skillful learners tend to attribute low performance to an over-generalized lack of ability or to extrinsic things like teacher ability or unfair exams. The importance of the more adaptive attributions of the aforementioned skillful learners is that these points of view are associated with deeper learning rather than surface learning, positive affective experiences, improved self-efficacy, and greater persistence.

Regarding motivational and affective experiences, self-regulating learners adjust their motivational beliefs in relation to their values and interests. Engagement improves when students are interested in and value the course material. Importantly, student motivational beliefs are set in motion early in the learning process, and it is here that instructional skills are most valuable. Regarding self-regulation of behavior, skillful learners see themselves as in charge of their time, tasks, and attention. They know their choices, they self-initiate their actions and efforts, and they know how and when to delay gratification. As well, these learners are inclined to choose challenging tasks rather than avoid them, and they know how to persist (Pintrich, 2000).

McMillan and Hearn (2008) summarize the role and importance of self-assessment:

When students set goals that aid their improved understanding, and then identify criteria, self-evaluate their progress toward learning, reflect on their learning, and generate strategies for more learning, they will show improved performance with meaningful motivation. Surely, those steps will accomplish two important goals—improved student self-efficacy and confidence to learn—as well as high scores on accountability tests (p. 48). 

As a teacher, I see one of my objectives as discovering ways to encourage the development of these intellectual tools and methods of thinking in my own students. For example, in one of my most successful courses, a colleague and I worked at great length to create a full set of specific course learning outcomes (several per chapter, plus competencies we cared about personally, for instance, life-long learning). These course outcomes were all aligned with the published student learning outcomes for the course. Lastly, homework, lectures, class activities, individual and group assignments, plus formative and summative assessments were created and aligned. By the end of this course, students not only have gained knowledge about psychology but also tend to be pleasantly surprised to have learned about their own learning.

 

References

Blanch-Hartigan, D. (2011). Medical students’ self-assessment of performance: Results from three meta-analyses. Patient Education and Counseling, 84, 3-9.

McMillan, J. H., & Hearn, J. (2008). Student self-assessment: The key to stronger student motivation and higher achievement. Educational Horizons, 87(1), 40-49. http://files.eric.ed.gov/fulltext/EJ815370.pdf

Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.) Handbook of self-regulation. San Diego, CA: Academic.


Metacognition, Self-Regulation, and Trust

by Dr. Steven Fleisher, CSU Channel Islands, Department of Psychology

Early Foundations

I’ve been thinking lately about my journey through doctoral work, which began with studies in Educational Psychology. I was fortunate to be selected by my Dean, Robert Calfee, of the Graduate School of Education at the University of California, Riverside, to administer his national and state grants in standards, assessment, and science and technology education. It was there that I began researching self-regulated learning.

Self-Regulated Learning

Just before starting that work, I had completed a master’s degree in Marriage and Family Counseling, so I was thrilled to discover the relevance of the self-regulation literature. For example, I found it interesting that self-regulation studies began back in the 1960s with examinations of the development of self-control in children. The framework that evolved for self-regulation involved the interaction of personal, behavioral, and environmental factors. Later research in self-regulation focused on motivation, health, mental health, physical skills, career development, decision-making, and, most notably for our purposes, academic performance and success (Zimmerman, 1990), where it became known as self-regulated learning.

Since the mid-1980s, self-regulated learning researchers have studied the question: How do students progress toward mastery of their own learning? Pintrich (2000) noted that self-regulated learning involved “an active, constructive process whereby learners set goals for their learning and then attempt to monitor, regulate, and control their cognition, motivation, and behavior, guided and constrained by their goals and the contextual features in the environment” (p. 453). Zimmerman (2001) then established that, “Students are self-regulated to the degree that they are metacognitively, motivationally, and behaviorally active participants in their own learning process” (p. 5). Thus, self-regulated learning theorists believe that learning requires students to become proactive and self-engaged in their learning, and that learning does not happen to them, but by them (see also Leamnson, 1999).

Next Steps

And then everything changed for me. My Dean invited Dr. Bruce Alberts, then President of the National Academy of Sciences, to come to our campus and lecture on science and technology education. Naturally, as Calfee’s Graduate Student Researcher, I asked “Bruce” what he recommended for bringing my research in self-regulated learning to the forefront. His recommendation was to study the then-understudied role and importance of the teacher-student relationship. Though it required changing doctoral programs, I did it, adding a Doctorate in Clinical Psychology to several years of coursework in Educational Psychology.

Teacher-Student Relationships 

Well, enough about me. It turns out that effective teacher-student relationships provide the foundation from which trust and autonomy develop (I am skipping a lengthy discussion of the psychological principles involved). Suffice it to say, where clear structures are in place (i.e., standards) as well as support, social connections, and the space for trust to develop, students have increased opportunities for exploring how their studies are personally meaningful and supportive of their autonomy, thereby taking charge of their learning.

Additionally, when we examine a continuum of extrinsic to intrinsic motivation, we find the same principles involved as with a scale showing minimum to maximum autonomy, bringing us back to self-regulated learning. Pintrich (2000) included the role of motivation in his foundations for self-regulated learning. Specifically, he reported that a goal orientation toward performance arises when students are motivated extrinsically (i.e., focused on ability as compared to others); however, a goal orientation toward mastery occurs when students are motivated more intrinsically (i.e., focused on effort and learning that is meaningful to them).

The above concepts can help us define our roles as teachers. For instance, we are doing our jobs well when we choose and enact instructional strategies that not only communicate clearly our structures and standards but also provide needed instructional support. I know that when I use knowledge surveys, for example, in building a course and for disclosing to my students the direction and depth of our academic journey together, and support them in taking meaningful ownership of the material, I’m helping their development of metacognitive skill and autonomous self-regulated learning. We teachers can help improve our students’ experience of learning. For them, learning in order to get the grades pales in comparison to learning a subject that engages their curiosity, along with investigative and social skills that will last a lifetime.

References

Leamnson, R. (1999). Thinking about teaching and learning: Developing habits of learning with first year college and university students. Sterling, VA: Stylus.

Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.) Handbook of self-regulation. San Diego, CA: Academic.

Zimmerman, B. J. (1990). Self-regulating academic learning and achievement: The emergence of a social cognitive perspective. Educational Psychology Review, 2(2), 173-201.

Zimmerman, B. J. (2001). Theories of self-regulated learning and academic achievement: An overview and analysis. In B. J. Zimmerman & D. H. Schunk (Eds.), Self-regulated learning and academic achievement: Theoretical perspectives (2nd ed.). New York: Lawrence Erlbaum.


Self-assessment and the Affective Quality of Metacognition: Part 1 of 2

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

In The Feeling of What Happens: Body and Emotion in the Making of Consciousness (1999, New York, Harcourt), Antonio Damasio distinguished two manifestations of the affective domain: emotions (the external experience of others’ affect) and feelings (the internal private experience of one’s own affect). Enacting self-assessment constitutes an internal, private, and introspective metacognitive practice.

Benjamin Bloom recognized the importance of the affective domain’s involvement in successful cognitive learning, but for a time psychologists dismissed the influence of both affect and metacognition on learning (see Damasio, 1999; Dunlosky and Metcalfe, 2009, Metacognition, Los Angeles, Sage). To avoid repeating these mistakes, we should recognize that attempts to develop students’ metacognitive proficiency without attending to metacognition’s affective qualities are likely to be minimally effective.

In academic self-assessment, an individual must look at a cognitive challenge and accurately decide her/his capability to meet that challenge with present knowledge and resources. Such decisions do not spring only from thinking cognitively about one’s own mental processes. Affirming that “I can” or “I cannot” meet “X” (the cognitive challenge) with current knowledge and resources draws from affective feelings contributed by conscious and unconscious awareness of what is likely to be an accurate decision.

“Blind insight” (http://pss.sagepub.com/content/early/2014/11/11/0956797614553944) is a new term in the literature of metacognition. It describes an unconscious awareness that manifests as a feeling supporting one’s sense of the correctness of a decision. “Blind insight” and “metacognitive self-assessment” seem to overlap with one another and with Damasio’s “feelings.”

Research in medical schools confirmed that students’ self-assessment skills remained consistent throughout medical education (http://files.eric.ed.gov/fulltext/ED410296.pdf). Two hypotheses compete to explain this finding. One is that self-assessment skills are established early in life and cannot be improved in college. The other is that self-assessment skill remains fixed in post-secondary education only because it is so rarely taught or developed. The first hypothesis seems contradicted by the evidence supporting brain plasticity, constructivist theories of learning and motivation, metacognition theory, self-efficacy theory (http://files.eric.ed.gov/fulltext/EJ815370.pdf), and by experiments that confirm self-assessment as a learnable skill that improves with training (http://psych.colorado.edu/~vanboven/teaching/p7536_heurbias/p7536_readings/kruger_dunning.pdf).

Nursing is perhaps the discipline that has most recognized the value of developing intuitive feelings informed by knowledge and experience as part of educating for professional practice.

“At the expert level, the performer no longer relies on an analytical principle (rule, guideline, maxim) to connect her/his understanding of the situation to an appropriate action. The expert nurse, with her/his enormous background of experience, has an intuitive grasp of the situation and zeros in on the accurate region of the problem without wasteful consideration of a large range of unfruitful possible problem situations. It is very frustrating to try to capture verbal descriptions of expert performance because the expert operates from a deep understanding of the situation, much like the chess master who, when asked why he made a particularly masterful move, will just say, “Because it felt right. It looked good.” (Patricia Benner, 1982, “From novice to expert.” American Journal of Nursing, v82 n3 pp 402-407)

Teaching metacognitive self-assessment should aim, in part, at improving students’ ability to recognize clearly the quality of “feels right”: that is, to sense accurately whether they can meet a challenge with their present abilities and resources. Developing such capacity requires practice in committing errors and learning from them through metacognitive reflection. In such practice, the value of Knowledge Surveys (see http://profcamp.tripod.com/KS.pdf and http://profcamp.tripod.com/Knipp_Knowledge_Survey.pdf) becomes apparent.

Knowledge Surveys consist of roughly one hundred to two hundred questions/items relevant to course learning objectives (tutorials for constructing knowledge surveys, with downloadable examples, are available at http://elixr.merlot.org/assessment-evaluation/knowledge-surveys/knowledge-surveys2). These ask individuals to self-assess by rating their present ability to meet a challenge on a three-point multiple-choice scale:

A. I can fully address this item now for graded test purposes.
B. I have partial knowledge that permits me to address at least 50% of this item.
C. I am not yet able to address this item adequately for graded test purposes.

and thereafter to monitor their mastery as the course unfolds.

In Part 2, we will examine why knowledge surveys are such powerful instruments for supporting students’ learning and metacognitive development, describe ways to employ knowledge surveys properly to induce measurable gains, and share some surprising results obtained from pairing knowledge surveys with a standardized assessment measure.


A Mindfulness Perspective on Metacognition

by Chris Was, Kent State University

If you have any interest in metacognition, you have likely come across the description of metacognition as thinking about one’s thinking. A number of posts to this blog (including my own) provide evidence to support the conclusion that metacognition can be “learned” and improved. Further, improved metacognition leads to improved self-regulation and positive academic outcomes. There is also a good deal of evidence that training in mindfulness improves cognitive function and attention (e.g., Chambers, Lo, & Allen, 2008). Flook et al. (2010) found that a mindfulness-training program improved executive functions in young elementary school students. Zeidan et al. (2010) found that mindfulness training improved executive function and metacognitive insight. This post will focus on the relationship between metacognition and mindfulness.

Let me preface by stating that mindfulness need not refer to esoteric religious beliefs; it is often defined as a mental state achieved by focusing one’s awareness on the present moment and acknowledging one’s feelings, thoughts, and bodily sensations. Kabat-Zinn (1990) describes mindfulness as bringing attention to moment-to-moment experience. In my own work on metacognitive knowledge monitoring, I have required students to make moment-to-moment (more accurately, item-by-item) judgments of their knowledge. Hypotheses, such as cue-familiarity, provide reasonable explanations for how students and research participants rate feelings of knowing (FOKs), judgments of learning (JOLs), judgments of knowing (JOKs), etc. However, the simple fact is that one must attend to these feelings and thoughts to provide a judgment. In psychological and educational literature, we say that one uses metacognition to make these judgments. Clinical psychology programs such as mindfulness-based stress reduction (MBSR) and cognitive behavior therapy (CBT) refer to the patient/participant as being mindful of their emotions, thoughts, and actions. Although the majority of research and application of mindfulness has occurred in clinical settings, there is a great deal of potential in examining the relationship between mindfulness and metacognition.

It isn’t clear how metacognition and mindfulness are related. Some argue that metacognition is not mindful because a true expert in mindfulness does not need to reflect upon his or her thinking, but only to attend to what they are presently doing. I am not convinced. Much of the work on metacognitive improvement has focused on semester-long training to improve students’ knowledge monitoring. The mindfulness research has focused on training students to focus on their moment-to-moment experiences and thoughts. Clearly, there is a relationship between metacognition and executive function, but I have yet to see evidence that training in one improves the other.

One argument made to dissociate mindfulness from metacognition is that metacognitive processes are by necessity reflective or retrospective and that truly being mindful does not require reflection. For example, for a student to practice metacognition during study, she must ask herself, “Do I understand this concept?” Then, depending on the answer, the student may or may not adjust the cognitive actions in which she is engaged to learn. This cycle is simply explained by the Nelson and Narens (1990) model. Now let’s think about a practitioner of mindfulness meditation. While meditating he may choose to focus his attention on the breath, noticing when he is breathing in and noticing when he is breathing out. During this practice his mind may wander (this is true of even the most practiced at meditation). When this happens, he will gently bring his attention back to the breath. This process, just like that of the studying student, requires one to observe one’s cognitive processes and exert control over those processes when necessary. This too fits nicely into the metacognitive model offered by Nelson and Narens.

Imagine you are reading a novel on summer vacation. The book is enjoyable, but not a challenging read. You are enjoying the sun and the sounds of the beach as you read, but suddenly notice you have not really attended to the last couple of pages and are not sure what has transpired in the plot. You choose to reread the last couple of pages and pay more attention. Imagine now you are a student. You are reading a very dull textbook chapter with the TV on and your smart phone nearby. A student with few metacognitive resources (whether due to working memory capacity, attentional control, executive function, etc.) is likely to mind wander (Hollis & Was, 2014). Students in my classes have often told me the hardest part of studying is staying focused, even when the topic is of interest. Ben Hollis and I found that students watching a video as part of an online course were often distracted by thoughts of checking social media outlets: not by actually checking them, but simply by the thought of checking them. What if students were practiced at focusing attention, noticing when their minds wander, and bringing their attention back to the task at hand?

It seems to me that if metacognition is knowledge and control of one’s cognitive processes, and training in mindfulness increases one’s ability to focus and control awareness in a moment-by-moment manner, then perhaps we should reconsider, and investigate, the relationship between mindfulness and metacognition in education and learning.

 

References

Bishop, S. R., Lau, M., Shapiro, S., Carlson, L., Anderson, N. D., Carmody, J., Segal, Z. V., Abbey, S., Speca, M., Velting, D., & Devins, G. (2004). Mindfulness: A proposed operational definition. Clinical Psychology: Science and Practice, 11, 230-241. doi: 10.1093/clipsy.bph077

Chambers, R., Lo, B. C. Y., & Allen, N. B. (2008). The impact of intensive mindfulness training on attentional control, cognitive style, and affect. Cognitive Therapy and Research, 32(3), 303-322.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906.

Flook, L., Smalley, S. L., Kitil, M. J., Galla, B. M., Kaiser-Greenland, S., Locke, J., … & Kasari, C. (2010). Effects of mindful awareness practices on executive functions in elementary school children. Journal of Applied School Psychology, 26(1), 70-95.

Kabat-Zinn, J. (1990). Full catastrophe living: Using the wisdom of your mind to face stress, pain and illness. New York: Dell.

Zeidan, F., Johnson, S. K., Diamond, B. J., David, Z., & Goolkasian, P. (2010). Mindfulness meditation improves cognition: Evidence of brief mental training. Consciousness and Cognition, 19(2), 597-605.


Just-in-Time for Metacognition

By John Draeger, SUNY Buffalo State

This post brings metacognition to an already valuable teaching tool. Just-in-time techniques require that students submit short assignments prior to class. Instructors review those answers before class and use them to shape class time. In my philosophy classes, for example, I assign two short questions via a course management system (e.g., Blackboard). At least one of the questions is directly related to the reading. Students are required to submit their answers electronically by 11:00 p.m. the night before class. When I wake up in the morning, I read through their responses and use them to make decisions about how class time will be used. If students seem to grasp the reading, then I spend less time reviewing the basic arguments and more time exploring deeper content and connections. If student responses display a misunderstanding of the reading, then we spend class time carefully examining passages in the text and digging out the relevant arguments.

Just-in-time techniques have been used in a variety of disciplines, and they have been shown to increase the likelihood that students will complete their reading assignments, read more carefully, and take ownership over their learning (Novak et al., 1999; Simkins & Maier, 2009; Scharff et al., 2011). However, just-in-time assignments are typically used to prompt students to complete their assigned reading and to gauge their basic comprehension. While both are valuable, I argue that the technique can also be used to promote other important skills.

For example, pre-class questions can be used to develop higher-order thinking skills. Students can be asked to examine an author’s point of view, underlying assumptions, or the implications of her view. Such questions prompt students to move beyond their knowledge of what is contained in the text towards active engagement with that text. Students can be asked to apply concepts in the reading (e.g., stereotype bias) to something in the news. And students can be asked to analyze the connections between related course ideas. In a previous post, “Using metacognition to uncover the substructure of moral issues,” I argued that students begin to “think like a philosopher” when they can move beyond the surface content (e.g., hate speech and national security) and towards the underlying philosophical substructure (e.g., rights, well-being, dangers of governmental intrusion). Like other skills, developing higher-order thinking skills requires practice. Because just-in-time assignments are a regular part of a student’s week, incorporating higher-order thinking questions into just-in-time assignments can give students regular opportunities to practice and hone those skills.

Likewise, pre-class assignments can give students a regular outlet to practice and develop metacognition. Students can be asked to reflect on how they prepared for class and whether it was effective (Tanner 2012). Pre-class questions might include: How long did you spend with the reading? Did you finish? Did you annotate the text? Did you write a summary of the central argument? Did you formulate questions based on the reading for class discussion? Was this reading more difficult than the previous one? If so, why? Did you find yourself having an emotional reaction to the reading? If so, did this help or hinder your ability to understand the central argument? Are your reading techniques adequately preparing you for class? Or are you finding yourself lost in class discussion despite having spent time doing the reading? If pre-class questions related to higher-order thinking ask students to do more than simply “turn the pages,” then pre-class questions related to metacognition ask students to engage not only with the material but also with their own learning processes.

When just-in-time questions are a regular part of the ebb and flow of a course, students must regularly demonstrate how much they know and instructors can regularly use that information to guide course instruction. These techniques work because there is a consistent accountability measure built-in. I suggest that just-in-time assignments can also be used to give students regular practice developing both higher-order thinking and metacognition skills. I have been incorporating higher-order thinking into just-in-time assignments for years, but I confess that I have only given metacognition prompts when things have “gone wrong” (e.g., poor performance on exams, consistent misunderstanding of the reading). Responses to these questions have led to helpful conversation about the efficacy of various learning methods. Writing this blog post has prompted me to see the potential benefits of asking such questions more often. I pledge to do just that and to let you know how my students respond.

 

References

Novak, G., Patterson, E., Gavrin, A., & Christian, W. (1999). Just-in-time teaching: Blending active learning with web technology. Upper Saddle River, NJ: Prentice Hall.

Scharff, L., Rolf, J., Novotny, S., & Lee, R. (2011). Factors impacting completion of pre-class assignments (JiTT) in physics, math, and behavioral sciences. In C. Rust (Ed.), Improving Student Learning: Global Theories and Local Practices: Institutional, Disciplinary and Cultural Variations. Oxford Brookes University, UK.

Simkins, S., & Maier, M. (2009). Just-in-time teaching: Across the disciplines, across the academy. Stylus Publishing, LLC.

Tanner, K. D. (2012). Promoting student metacognition. CBE-Life Sciences Education, 11(2), 113-120.


The Teaching Learning Group at CSUN

Two years ago, eight faculty at California State University, Northridge, began studying how people learn as a grassroots effort to increase student success by focusing on what instructors do in the classroom. Our website shares our efforts, Five Gears for Activating Learning, as well as supporting resources and projects developed to date (e.g., documents, videos, and a yearlong Faculty Learning Community in progress). Although all five gears interact when people learn and develop expertise, our fifth gear, the Developing Mastery gear, focuses on assisting students in developing their metacognitive skills.

http://www.csun.edu/cielo/teaching-learning-group.html


The Six Hour D… And How to Avoid It

This great essay by Russ Dewey (1997) evolved from a handout he used to give his students. He shares some common examples of poor study strategies and explains why they are unlikely to lead to deep learning (even if they are used for 6 hours…). He then shares a simple metacognitive self-testing strategy that could be tailored for courses across the disciplines.

http://www.psywww.com/discuss/chap00/6hourd.htm


Despite Good Intentions, More is Not Always Better

by Lauren Scharff, U.S. Air Force Academy*

A recent post to the PSYCHTEACH listserv got me thinking about my own evolution as a teacher trying my best to help the almost inevitable small cluster of students who struggled in my courses, often despite claiming to “have studied for hours.” The post asked “Have any of you developed a handout on study tips/skills that you give to your students after the first exam?” A wide variety of responses were submitted, all of which reflected genuinely good intentions by the teachers.

However, based on my ongoing exploration of metacognition and human learning, I believe that, despite the good intentions, some of the recommendations will not consistently lead to the desired results. Importantly, these recommendations actually seem quite intuitive and reasonable on the surface, which leads to their appeal and continued use. Most of those that fall into this less ideal category do so because they imply that “More is Better.”

For example, one respondent shared, “I did correlations of their test scores with their attendance so far, the number of online quizzes they have taken so far, and the combined number of these two things. [All correlations were positive ranging from 0.35 to 0.57.] So I get to show them how their behaviors really are related to their scores…”

This approach suggests several things that all seem intuitively positive: online quizzes are a good way to study and attending class will help them learn. I love the empowerment of students by pointing out how their choice of behaviors can impact their learning! However, the message that more quizzes and simple attendance will lead to better grades does not capture the true complexity of learning.

Another respondent shared a pre-post quiz reflection assignment in which some of the questions asked about how much of the required reading was completed and how many hours were put into studying. Other questions asked about the use of chapter outcomes when reading and studying, the student’s expected grade on the quiz, and an open-ended question requesting a summary of study approaches.

This pre-post quiz approach seems positive for many reasons. Students are forced to think about and acknowledge levels and types of effort that they put into studying for the quizzes. There is a clear suggestion that using the learning outcomes to direct their studying would be a positive strategy. They are asked to predict their grades, which might help them link their studying efforts with predicted grades. These types of activities are actually good first steps at helping students become more metacognitive (aware and thoughtful) about their studying. Yea!

However, a theme running through the questions seems to be, again, “more is better.” More hours. More reading. The hidden danger is that students may not know how to effectively use the learning outcomes, how to read, how to effectively engage during class, how to best take advantage of practice quizzes to promote self-monitoring of learning, or what to do during those many hours of studying.

Thus, the recommended study strategies may work well for some students but not all, due to differences in how students implement the strategies. Even a moderately high correlation between taking practice quizzes and exam performance might mask the fact that there are subgroups for which the results are less positive.

For example, Kontur and Terry (2013) found the following in a core Physics course, “On average, completing many homework problems correlated to better exam scores only for students with high physics aptitude. Low aptitude physics students had a negative correlation between exam performance and completing homework; the more homework problems they did, the worse their performance was on exams.”
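This masking effect can be sketched with a few lines of Python using invented numbers (these are not the Kontur and Terry data; the group sizes, homework counts, and scores below are purely hypothetical). The point is that a pooled correlation can look solidly positive even when one subgroup shows the opposite trend:

```python
# Hypothetical illustration: a pooled correlation between homework
# completed and exam score can be clearly positive even though the
# low-aptitude subgroup shows a negative relationship.

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# (homework problems completed, exam score); invented numbers in which
# high-aptitude students also happen to complete more homework
high_aptitude = [(20, 85), (24, 90), (28, 95), (32, 98)]
low_aptitude = [(4, 70), (8, 66), (12, 62), (16, 58)]

hw_hi, score_hi = zip(*high_aptitude)
hw_lo, score_lo = zip(*low_aptitude)
hw_all, score_all = hw_hi + hw_lo, score_hi + score_lo

print(round(pearson_r(hw_hi, score_hi), 2))    # strongly positive within the high group
print(round(pearson_r(hw_lo, score_lo), 2))    # negative within the low group
print(round(pearson_r(hw_all, score_all), 2))  # pooled correlation still clearly positive
```

A teacher looking only at the pooled number would conclude that more homework helps everyone, even though for the low-aptitude subgroup the relationship runs the other way.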

I’m sure you’re all familiar with students who seem to go through “all the right motions” but who still struggle, become frustrated, and sometimes give up or develop self-doubt about their abilities. Telling such students to do more of what they’re already doing, if it’s not effective, may actually do more harm.

This is where many teachers feel uncomfortable because they are clearly working outside their disciplines. Teaching students how to read or how to effectively take notes in class, or how to self-monitor their own learning and adjust study strategies to different types of learning expectations is not their area of expertise. Most teachers somehow figured out how to do these things well on their own, or they wouldn’t be teachers now. However, they may never have thought about the underlying processes of what they do when they read or study that allowed them to be successful. They also feel pressures to cover the disciplinary content and focus on the actual course material rather than learning skills. Unfortunately, covering material does little good if the students forget most of the content anyway. Teaching them skills (e.g., metacognitive study habits) offers the prospect of retaining more of the disciplinary content that is covered.

The good news is that there are more and more resources available for both teachers and students (check out the resources on this website). A couple great resources specifically mentioned by the listserv respondents are the How to Get the Most out of Studying videos by Stephen Chew at Samford University and the short reading (great to share with both faculty and students) called The Six Hour D… and How to Avoid it by Dewey (1997). Both of these highlighted resources focus on metacognitive learning strategies.

This reflection on the different recommendations is not meant to belittle the well-intentioned teachers. However, by openly discussing these common suggestions, and linking to what we know of metacognition, I believe we can increase their positive impact. Share your thoughts, favorite study suggestions and metacognitive activities by using the comments link below, or submitting them under the Teaching Strategies tab on this website.

References

Dewey, R. (1997, February 12). The “6 hour D” and how to avoid it. Retrieved from http://www.psywww.com/discuss/chap00/6hourd.htm

Kontur, F., & Terry, N. (2013). The benefits of completing homework for students with different aptitudes in an introductory physics course. arXiv preprint arXiv:1305.2213.

 

* Disclaimer: The views expressed in this document are those of the authors and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.


Negotiating Chaos: Metacognition in the First-Year Writing Classroom

by Amy Ratto Parks, Composition Coordinator/Interim Director of Composition, University of Montana

“Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it.” John Hughes, Ferris Bueller’s Day Off

Although the movie Ferris Bueller’s Day Off (Hughes, 1986) debuted long before our current first-year college students were born, the sentiment of the film remains relevant to them. If we combined Ferris’ sense of exuberant freedom with Cameron’s grave awareness of personal responsibility, and added Sloane’s blasé ennui, we might see an accurate portrait of a typical first-year student’s internal landscape. Many of our students are thrilled to have broken out of the confines of high school but are worried about not being able to succeed in college, so they arrive in our classrooms slumped over their phones or behind computer screens, trying to seem coolly disengaged.

The life of the traditional first-year student is rife with negotiations against chaos. Even if we remove the non-academic adjustments of living away from home, their lives are full of confusion. All students, even the most successful, will likely find their learning identities challenged: what if all of their previous academic problem-solving strategies are inadequate for the new set of college-level tasks?

In the first-year writing classroom, we see vivid examples of this adjustment period play out every year. Metacognitive activities like critical reflective writing help students orient themselves because they require students to pause, assess the task at hand, and assess their strategies for meeting the demands of the task. Writing studies researchers know that reflection benefits writers (Yancey, 1998), and portfolio assessment, common in first-year programs across the country, emphasizes reflection as a major component of the course (Reynolds & Rice, 2006). In addition, outcomes written by influential educational bodies such as the National Council of Teachers of English (ncte.org), the Common Core State Standards Initiative (corestandards.org), and the Council of Writing Program Administrators (wpacouncil.org) emphasize metacognitive skills and demonstrate a shared belief in their importance.

But students aren’t necessarily on board. It is the rare student who has engaged in critical reflection in the academic setting. Instead, many aren’t sure how to handle it. Is it busy work from the teacher? Are they supposed to reveal their deep, inner feelings or is it a cursory overview? Is it going to be graded? What if they give a “wrong” reflection? And, according to one group of students I had, “isn’t this, like, for junior high kids?” In this last question we again see the developing learner identity. The students were essentially wondering, “does this reflective work make us little kids or grown ups?”

If we want new college students to engage in the kind of reflective work that will help them develop transferable metacognitive skills, we need to be thoughtful about how we integrate it into the coursework. Intentionality is important because there are a number of ways teachers might accidentally perpetuate these student mindsets. In order to get the most from reflective activities in class, keep the following ideas in mind:

  1. Talk openly with students about metacognition. If we want students to become aware of their learning, then the first thing to do is draw their attention to it. We should explain to students why they might care about metacognitive skills, as well as the benefits of investing themselves in the work. If we explain that reflection is one kind of metacognitive activity that helps us retrieve, sort, and choose problem-solving strategies, then reflection ceases to be “junior high” work and instead becomes a scholarly, collegiate behavior.
  2. Design very specific reflective prompts. When in doubt, err on the side of more structure. Questions like “what did you think about the writing assignment” seem to open the door to many responses, but in practice they allow students to answer without critically examining their writing or research decisions. Instead, design prompts that require students to critically consider their work. For example, “Describe one writing choice you made in this essay. What was the impact of your decision?”
  3. Integrate reflection throughout the semester. Ask students to reflect mid-way through the processes of drafting, research, and writing. If we wait until they finish an essay they learn that reflection is simply a concluding activity. If they reflect mid-process they become aware of their ability to assess and revise their strategies more than once. Also, reflection is a metacognitive habit of mind (Tarricone, 2011; Yancey, 1998) and habits only come to us through repeated activity.

These three strategies are a very basic beginning to integrating metacognitive activities into a curriculum. Not only do they help students evaluate the effectiveness of their attempts at problem solving, but they can also direct the students’ attention toward the strategies they’ve already brought to the class, thereby creating a sense of control over their learning. In the first-year writing classroom, where students are distracted and worried about life circumstance and learner identity, the sense of control gained from metacognitive work is especially important.

 

References

Chinich, M. (Producer), & Hughes, J. (Director). (1986). Ferris Bueller’s day off [Motion picture]. USA: Paramount Pictures.

Reynolds, N., & Rice, R. (2006). Portfolio teaching: A guide to instructors. Boston, MA: Bedford/St. Martin’s.

Tarricone, P. (2011). The taxonomy of metacognition. New York: Psychology Press.

Yancey, K.B. (1998). Reflection in the writing classroom. Logan: Utah State University Press.

National Council of Teachers of English. (2013). First-year writing: What good does it do? Retrieved from http://www.ncte.org/library/nctefiles/resources/journals/cc/0232-nov2013/cc0232policy.pdf

Council of Writing Program Administrators. (2014). Framework for success in postsecondary writing. Retrieved from http://wpacouncil.org/framework

Common Core State Standards Initiative. (2014). English language arts standards. Retrieved from http://www.corestandards.org/ELA-Literacy/introduction/key-design-consideration/


Faculty Metacognition of Verbal Questioning

by Charity Peak, U.S. Air Force Academy*

Few faculty would argue that teaching requires asking questions of students, but rarely do instructors consider the what, how, or why of their verbal questioning behavior.  Without metacognition of questioning strategies, this foundational instructional technique can be wasted on habit rather than design.

Faculty question students for a variety of reasons.  Surprisingly, most faculty use verbal questioning as a classroom management technique.  This might look something like a machine gun approach, firing question after question in multiple directions in an effort to keep the class engaged.  See a student dozing? Fire!  Someone checking Facebook? Fire!  Some researchers estimate that teachers ask as many as 120 questions per hour—a question every 30 seconds (Vogler, 2005)!  While this strategy may keep students on their toes, it does not necessarily aid student learning.  Often these questions are low-level cognitive questions, requiring mainly recall of factual knowledge.  If teachers wish to develop deeper levels of thinking, they must stimulate their students’ own evaluation of the content rather than merely requesting regurgitation of the basics.

At the other end of the spectrum is a master teacher’s approach to instruction that utilizes a specific questioning taxonomy proven to be effective for a variety of disciplines.  Rather than using the run-and-gun approach, this faculty member masterfully leads students from one point to another through a series of thoughtfully derived questions.  He or she might start with the big picture and lead to a specific point or, in contrast, begin with minutia but guide students to one main relevant theme by the end of class.  Watching these instructors in action is often humbling.  However, even these most masterful teachers are often not cognitively aware of the strategies they are using.  They have figured out what works over time, but they likely can’t point to a specific methodology they were using to support their instruction.  Rather than shooting in the dark over many years, faculty would be wise to understand the metacognition behind verbal questioning if they wish to be effective in creating higher order thinking in their students.

Moving beyond simple recall in questioning is certainly good advice for creating more opportunities for thinking, but it’s easier said than done.  Faculty often report feeling uncomfortable trying new questioning strategies.  Asking higher order thinking questions for application, analysis, and synthesis often creates extensive dead air time in the classroom.  More difficult questions require more time to think, often in silence.  Also, students are reluctant to change the very well-established classroom culture of “getting the answer right.”  Based on years of classroom experience, students will often fire answers back, playing the game of “Guess what’s in the teacher’s head.”

Despite these cultural norms, it is possible through metacognition to improve verbal questioning.  Some scholars argue that faculty should understand some of the basic questioning taxonomies that exist and how they influence learning.  For example, asking open-ended versus closed-ended questions will alter the cognitive level of thinking and response (Rothstein & Santana, 2011).  Open-ended questions tend to elicit thinking that is higher on Bloom’s Taxonomy.  Students are required to generate thoughtful answers to questions as opposed to firing back one- to three-word facts.  For example, instead of asking, “What is an adverb?” faculty might ask students to apply their learning by identifying an adverb in a sentence or even creating their own sentences using adverbs.  Better yet, The Right Question Institute (Rothstein & Santana, 2011) encourages faculty to get students to ask their own questions rather than teachers doing all the work.  After all, the person generating the questions is arguably the person who is learning the most.

Other scholars suggest that faculty should consider the sequencing and patterns that are possible when asking questions (Vogler, 2005).  For example, cognitive psychologists often suggest a funneling or convergent questioning technique, which leads students from big picture to details because it mirrors the cognitive functioning of the brain.  However, depending on the subject area, faculty may find success in guiding students from narrow to broad thinking (divergent) by first asking low-level, general questions followed by higher-level, specific questions.  Some disciplines lend themselves to using a circular path to force critical thinking in students.  This pattern asks a series of questions which eventually lead back to the initial position or question (e.g., “What is justice?”).  While students often find these patterns frustrating, they emphasize to students the value of thinking rather than of correctly identifying the right answer.

Ultimately, though, faculty would be wise to spend less energy on the exact strategy they plan to use and instead focus on the main goals of their questioning.  In Making Thinking Visible (Ritchhart, Church, & Morrison, 2011), the authors propose that the purpose of questioning is really to make our students’ thinking visible by understanding our own expert-level thinking—aka metacognition.   To do this, the authors suggest that instead of complex taxonomies and patterns, we should focus our efforts on three main purposes for questioning in our classes:

  1. Modeling our interest in the ideas being explored
  2. Helping students to construct understanding
  3. Facilitating the illumination of students’ own thinking to themselves (i.e., metacognition)

By asking authentic questions – that is, questions to which the teacher does not already know the answer or to which there are not predetermined answers – instructors create a classroom culture that feels intellectually engaging, fosters a community of inquiry, and allows students to see teachers as learners (p. 31).  Faculty must frame learning as a complex communal activity rather than the process of merely accumulating information.  Thoughtful questioning creates this classroom climate of inquiry, but only if faculty are metacognitive about their purpose and approach to using this critical pedagogical strategy.  Without metacognition, faculty risk relying on the machine gun approach to questioning, wasting valuable class time on recall of factual information rather than elevating and revealing students’ thinking.

References

Ritchhart, R., Church, M., & Morrison, K. (2011). Making thinking visible: How to promote engagement, understanding, and independence for all learners. San Francisco: Jossey-Bass.

Rothstein, D., & Santana, L. (2011). Make just one change: Teach students to ask their own questions. Boston: Harvard Education Press.

Vogler, K. E. (2005). Improve your verbal questioning. The Clearing House, 79(2): 98-103.

 

* Disclaimer: The views expressed in this document are those of the authors and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.