Metacognitive links connecting the Arts and STEM

by Jessica Santangelo and Ilona Pierce, Hofstra University

We may be an unlikely pair at first glance – an actor and a biologist. We met after Jess gave a talk about the role of metacognition in supporting student learning in biology. Ilona realized during the talk that, though she was unfamiliar with the term metacognition, what she does with theatre students is inherently metacognitive. This realization has led to rich conversations about metacognition, its role in teaching, and the overlap between the arts and STEM (Science, Technology, Engineering, and Mathematics).

Here we offer a condensed version of one of our conversations in which we explored the overlap between metacognition in the arts and STEM (STEAM).

Ilona: In actor training (or voice/speech training, which is my specialty), self-reflection is the core of an actor’s growth. After a technique is introduced and application begins, we start to identify each student’s obstacles. In voice work, we examine the different ways we tighten our voices and bodies, then explore pathways to address the tension. As tension is released, I’ll typically ask, “What do you notice? How are things different than they were when we began?” This is what hooked me at your lecture: you worked with the students, uncovering their shortcomings (their version of TENSION), and you watched their test scores go up. It was a great thing to see, but I sat there thinking, “Doesn’t every teacher do that?”

Jess: In my experience, most STEM courses do not intentionally or explicitly support students in reflecting on themselves, their performance, or their learning strategies. I’m not entirely sure why that is. It may be a function of how we (college-level STEM educators) were “brought up,” the fact that many of us never had formal training in pedagogy, and/or the fact that many STEM educators don’t feel they have time within the course to support students in this way.

When you contacted me after the lecture, I had an “aha!” moment in which I thought “Woah! She does this every day as an inherent part of what she does with her students. It’s not something special, it’s just what everyone does because it’s essential to the students’ learning and to their ability to grow as actors.” Though you hadn’t been aware of the term “metacognition” before the talk, what you are having your students do IS metacognitive.

Ilona: Of course, the students have to be taught to notice, and prodded into verbalizing their observations. In the beginning, when I ask, “What do you notice?” I’m typically met with silence. They don’t know what they notice. I have to guide them: “How has your breathing changed? Are you standing differently? What emotions arose?” As the course goes on, I’ll ask for deeper observations like, “How does your thinking/behavior during class help you/hinder you? What patterns are arising?” It’s not unusual to hear things like, “I realized I talk fast so that people don’t have the chance to interrupt me,” or “If I speak loudly, I’m afraid people will think I’m rude.”

Jess: I think that highlights a difference in the approach that educators within our respective fields take to our interactions with students. Your class is entirely focused on the student, the student’s experience, and having the student reflect on their experience so they can adjust/adapt as necessary.

In contrast, for many years, the design of STEM courses and our interactions with students focused on the conveyance of content and concepts to students. Some STEM classes are becoming more focused on having the students DO something with content/concepts in the classroom (i.e., active learning and flipped classrooms), but that hasn’t always been the case. Nor does having an active learning or flipped classroom mean that the course intentionally or explicitly supports student metacognitive development.

Ilona: Principles and content are an important part of my coursework as well, but most of it is folded into the application of the skills they’re learning. The environment helps to support this kind of teaching. My students are hungry young artists, and the class size is 16–18 max. This allows me to begin by “teaching” to the group at large, and then transition to one-on-one coaching.

When you work with your students, do you work individually or in small groups?

Jess: I am pretty constrained in terms of what I can do in the classroom as I generally have 44-66 students/section (and faculty at other institutions are even more constrained with 100+ students/section!). However, even with my class size, I generally try to minimize whole-group lecture by having students work in small groups in the classroom, prompting them to discuss how they came to a conclusion and to make their learning visible to each other. One-on-one “coaching” generally occurs during office hours.

I’m really drawn to the word “coaching” here. I feel like you literally coach students – that you work with them, meeting them wherever they are in that moment, and help them gain awareness and skills to get to some endpoint. Does that accurately capture how you view yourself and your role? How does that play out in terms of your approach to your classes and to your interactions with students?

Ilona: I think it’s “teacher” first and then I transition to “coach.” But I also use one-on-one coaching to teach the entire class. For example, one student gets up to share a monologue or a poem. Afterwards, I ask a question, maybe a couple: “What did you notice about your breathing? Your body? Your emotions?” If the student has difficulty answering, I’ll guide them to what I noticed: “Did you notice, for example, that your hands were in fists the whole time?” I might turn to the class and say, “Did you guys notice his hands?” The class typically will notice things the performer doesn’t. I’ll ask the class, “As an audience member, how did his clenched hands make you feel (emotionally, physically)? Did you want him to let them go, or did it help the piece?” So the coaching bounces from the individual to the group, asking for self-reflection from everyone.

Jess: It sounds like we do something similar in that, as I prompt one student in a small group to explain how they arrived at a conclusion, I’m using that as an opportunity to model a thought process for the rest of the group. Modeling the thought process alone isn’t necessarily metacognitive, but I take it a step further by asking students to articulate how the thought process influenced their ability to come to an accurate conclusion and then asking them to apply a similar process in other contexts. I’m essentially coaching them toward using a thought process that is inquisitive, logical, and evidence-based – I’m coaching them to think like a scientist.

When I reflect on my title: professor/teacher/instructor/educator versus coach, I’m struck that the title brings up very different ideas for me about my role in the classroom – it shifts my perspective. When I think of professor/teacher/instructor/educator, I think of someone who is delivering content. When I think of a coach, I think of someone standing on the sidelines, observing an athlete perform, asking the athlete to do various exercises/activities/drills to improve various aspects of their performance. You seem to fit squarely in the “coach” category to me – you are watching the students perform, asking students to reflect on that performance, and then doing exercises to improve performance via the release of tension.

Ilona: I definitely do both. Coaching to me implies individualized teaching that is structured in a way to foster independence. Eventually, a coach may just ask questions or offer reminders. It’s the last stop before students leave to handle things on their own. Like parenting, right? We start with “hands on,” and over time we teach our children to become more and more independent, until they don’t need us anymore.

Jess: I wonder how often STEM educators think of themselves as coaches? How does viewing oneself as a coach alter what one does in the classroom? Is there a balance to be struck between “teaching” and “coaching”? How much overlap exists between those approaches?

In thinking about myself, I can wear both hats depending on the circumstance. I can “teach” content and “coach” to help students become aware of their level of content mastery. When I think of myself as a teacher, I feel responsible for getting students to the right answer. When I think of myself as a coach, I feel more responsible for helping them be aware of what they know and don’t know and supporting their use of strategies to help them be successful. Isn’t that the point of an athletic coach? To help athletes become aware of their bodies and abilities and then to push them to do and achieve more within their sport? The academic analogy, then, is to push students to be aware of what they know or don’t know and to effectively utilize strategies to increase their knowledge and understanding. The goal is to get students doing this on their own, without guidance from an instructor.

The other piece to this is how the students respond and use the metacognitive skills we are trying to help them develop. I wonder: Are your students, who are being encouraged to develop strong metacognitive skills in their theatre classes, naturally transferring those skills and using them in other disciplines (like in their bio class!)? If not, and if they were prompted to do so, would they be more likely to do so (and do so successfully) than non-theatre students who haven’t been getting that strong metacognitive practice?

Ilona: One would hope so. My guess is that when they get into non-acting classes, they revert to the student skills they depended on in high school. That said, I often get “metacognitive success stories” after summer break. Students will report that during their lifeguard or food-service gig, they realized their growing skills of self-awareness helped them do everything from using their voices differently to gaining greater insight into their own behavior. If they can make connections like this during a summer job, perhaps they can apply these skills in their bio class.

 


Metacognition and Teacher-Student-Curriculum Relationships

by Steven Fleisher, Ph.D., California State University Channel Islands

I have heard many people say that teacher-student relationships have nothing in common with families. But while teacher-student relationships are best described as collegial, at least within higher education, this author believes that much can be learned from family theories and research. In particular, family research provides insights into how to support the development of trust in this context rather than relationships based principally on compliance. In other words, a classroom “is” a family, whether it’s a good one or a bad one. In this post, we will explore the metacognitive processes involved in building and maintaining stable relationships between students and the curriculum, between teachers and the curriculum, and between teachers and students.

Family systems theory (Kerr & Bowen, 1988), though originally developed for clinical practice, offers crucial insights into not only teacher-student relationships but teaching and learning as well (Harrison, 2011). While there are many interlocking principles within family systems theory, we will concentrate on emotional stability, differentiation of self, and triangles.

Picture a triangle whose three legs represent the following relationships: students-curriculum, teacher-curriculum, and teacher-students. Although any effective pedagogy would work for this discussion, we will focus specifically on the usefulness of knowledge surveys in this context (http://elixr.merlot.org/assessment-evaluation/knowledge-surveys/knowledge-surveys2) and their role in building metacognitive self-assessment skills.[1] What, then, are some of the metacognitive processes involved in the relationships on each leg of our triangle? And what are some of the metacognitive processes that would support those relationships in becoming increasingly stable?

Student-Curriculum Relationships

Along one leg of the triangle, students would increase the stability of their relationship with the curriculum as a function of becoming ever more aware of their learning processes. Regarding the use of knowledge surveys, students would self-assess their confidence to respond to given challenges, compare those responses with their developed competencies, and follow with reflective exercises to discover and understand any gaps between the two. As their self-assessment accuracy improves, their self-regulation skills would improve as well, i.e., adjusting, modifying, or deepening learning strategies or efforts as needed. In short, the more aware students are of the competencies in the curriculum and of their progress toward those competencies, the better off they will be.
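To make that confidence-competence comparison concrete, here is a minimal sketch in Python; the items, scores, and the 15-point calibration threshold are invented for illustration, not drawn from any actual knowledge survey.

```python
# Illustrative only: hypothetical knowledge-survey items pairing a student's
# self-rated confidence (0-100) with the score later earned on a matching
# assessment item (0-100). Items, numbers, and the 15-point threshold are
# invented, not taken from a real survey.
items = [
    {"outcome": "I can describe at least three functions of the pituitary gland",
     "confidence": 90, "score": 60},
    {"outcome": "I can diagram a negative feedback loop",
     "confidence": 40, "score": 75},
]

for item in items:
    gap = item["confidence"] - item["score"]
    if gap > 15:
        note = "overconfident: revisit this topic and the study strategy used"
    elif gap < -15:
        note = "underconfident: competence is stronger than the student believes"
    else:
        note = "well calibrated"
    print(f"{item['outcome']}: gap {gap:+d} ({note})")
```

A per-item gap report like this is one way students could run the reflective exercises described above, locating exactly where confidence and competence diverge.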

As part of a course, instructors can also guide students in exploring how the material is useful to them personally. Activities can be designed to support exploration and discovery of ways in which course material relates, for example, to career interests, personal growth, interdisciplinary objectives, fostering of purpose, etc. In so doing, the relationships students have with the material can gain greater stability. Ertmer and Newby (1996) noted that expertise in learning involves becoming “strategic, self-regulated, and reflective”, and by bringing these types of exercises into the course, students are supported in the development of all these competencies.

Teacher-Curriculum Relationships

These relationships involve teachers becoming more aware of their practices, their students’ learning, and the connection between the two. In other words, the teacher is trying to ensure fit between student understanding and the curriculum. Regarding knowledge surveys, teachers would know they are providing a pedagogical tool that supports learning and offers needed visibility for students.

In addition, once teachers have laid out course content in their knowledge surveys, they can look ahead and anticipate which learning strategies would be the best match for upcoming material. Realizing ahead of time the benefits of, let’s say, using structured group work for a particular learning module, teachers could prepare themselves and their students for that type of activity.

Teacher-Student Relationships

These relationships involve the potential for the development of trust. When trust develops in a classroom, students not only know what the expectations involve but are also set more at ease to creatively explore the material and their ways of understanding it. For instance, students may well become aware of the genuine and honest help being provided by chosen learning strategies. Knowledge surveys are particularly useful in this regard, as students have a roadmap for the course and a tool structured to facilitate the improvement of their learning skills.

Teachers also have an interpersonal role in supporting the development of student trust. Family systems theory (Kerr & Bowen, 1988) holds that we all vary in our levels of self-differentiation, which involves how much we, literally, realize that we are separate from others, especially during emotional conflict. In other words, people vary in their ability to balance emotional reactivity (founded in anxiety) against the use of their intellect to compose chosen and valued responses. Harrison (2011), in applying these principles in a classroom, noted that when teachers are aware of becoming emotionally reactive (i.e., defensive), but are also aware of using their intellect, as best as possible, to manage the situation (i.e., remaining thoughtful and unbiased in their interactions with students), they are supporting emotional stability and trust.

Kerr and Bowen (1988) also reported that self-differentiation involves distinguishing between thoughts and feelings. This principle gives us another metacognitive tool. When we are aware, for example, that others do not “make” us feel a certain way (e.g., frustrated), but that our thinking is also involved (e.g., “students are just being lazy”), this affects our ability to manage reactivity. If we are aware of becoming reactive, and aware of distinguishing thoughts from feelings, we can notice and reframe our thoughts (e.g., “students are just doing what they need to do”) and validate and own our emotions (e.g., “okay, I’m frustrated”); we are then better positioned to respond in ways that attune to our needs as well as those of our students. In this way, we would increase our level of self-differentiation by moving toward less blaming and more autonomy.

Final Note

Kerr and Bowen (1988) also noted that supporting stability along all the relationships represented by our triangle not only increases the emotional stability of the system but also provides a cushion for the naturally arising instabilities along individual legs of the triangle. The presence of this stability also serves to further enhance the impact of effective pedagogies. So, when teachers are aware of maintaining the efficacy of their learning strategies and are aware of applying the above principles of self-differentiation, i.e., engaging in metacognitive instruction, they are better positioned to be responsive and attuned to the needs of their students, thus supporting stability, trust, and improved learning.

References

Ertmer, P. A. & Newby, T. J. (1996). The expert learner: Strategic, self-regulated, and reflective. Instructional Science, 24(1), 1-24. Retrieved from https://link.springer.com/journal/11251

Harrison, V. A. (2011). Live learning: Differentiation of self as the basis for learning. In O.C. Bregman & C. M. White (Eds.), Bringing systems thinking to life: Expanding the horizons for Bowen family systems theory (pp. 75-87). New York, NY: Routledge.

Kerr, M. E. & Bowen, M. (1988). Family evaluation: An approach based on Bowen theory. New York, NY: W.W. Norton & Company.


[1] Knowledge surveys consist of a detailed listing of all learning outcomes for a course (perhaps 150-250 items). Each item begins with an affective root (“I can…”) followed by a cognitive or ability challenge expressed in measurable terms (“…describe at least three functions of the pituitary gland.”). These surveys provide students with a roadmap for the course and a tool structured for building their confidence and accuracy in learning skills.


Hate-Inspired Webforums, PTSD, and Metacognition

by Roman Taraban, Texas Tech University

In linguistics, a register is a variety of speech used for distinct purposes in particular social settings. In a manner consistent with that terminology, I am here using the term discourse register to refer to sets of specific terms and meanings, and to specific vocabularies, used by groups in order to achieve distinct purposes. Unlike a dictionary, a register is not so much concerned with the meanings of words as with their association with cognitions, affects, and behaviors. A discourse register can link together such disparate phenomena as hate speech, PTSD, and metacognition by virtue of the fact that each has a distinct register, that is, each applies a specific vocabulary and manner of speech. The first purpose of this blog post is to suggest that these disparate phenomena are similar by virtue of the way that they operate. The second purpose is to suggest a way of increasing our understanding of metacognitive processing by beginning to implement some of the technology that has already been extensively applied to hate-inspired webforums and trauma-related therapies.

Regarding hate speech, the internet has provided radical right groups the means to organize networks that often promote bias, bigotry, and violence. An example is Stormfront (https://www.stormfront.org/forum/), which was established by white supremacist and ex-felon Don Black in 1996 (Figea, 2016). Right-wing extremists use the internet to build identity and unity with “like-minded” individuals. This has prompted researchers and government analysts to analyze extremist communications in order to gain an understanding of these groups. Importantly, analysts seek out key indicators in these communications that could signal future events (Figea, 2016; Figea et al., 2016).

What are the key indicators in extremist communications? The answer lies in part in the concept of a discourse register. It consists of the specific vocabulary and ways of communicating that characterize the shared conversations and practices of a group. For example, Figea (2016) applied machine learning to analyze Stormfront forum exchanges in an attempt to assess the level of three affects: aggression towards an outgroup, racism, and worries about the present or future. A sample of forum posts was classified by humans for the affects; then a machine was trained on the human classifications and tested on a new sample of forum posts. Key indicators for the racism affect were black, race, Jew, protest, and Zionist, corresponding to topics in the forums associated with Black inferiority, Jewish conspiracy, and government corruption (Figea, 2016).

The idea of a shared discourse among a group of individuals provides the theoretical glue that binds together the activities, speech, and shared identity of groups of individuals. In some cases, the analysis of discourse has provided insights into the motivations and behaviors of extremist and terrorist groups, as described by Figea (2016) and Figea et al. (2016). In other cases, researchers have applied the idea of discourse and discourse analysis to prosocial activities involving counseling and therapy. Pennebaker and King (1999) proposed that “the way people talk about things reveals important information about them” (p. 1297). To assist them in their analyses, Pennebaker and colleagues developed and tested the LIWC (Linguistic Inquiry and Word Count) software. This software has been successfully applied to the analysis of texts in a variety of contexts and across a wide range of dimensions, including emotionality, social status, group processes, close relationships, deception and honesty, thinking styles, and individual differences (Tausczik & Pennebaker, 2010).
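As a rough illustration of the word-count approach behind tools like LIWC, here is a toy sketch in Python; the category word list is invented for this example, not LIWC’s actual dictionary, and the code is not the LIWC software itself.

```python
import re

# Toy category dictionary in the spirit of Pennebaker and King's sadness
# register; this word list is invented for illustration and is not LIWC's
# actual dictionary.
sadness_words = {"sad", "loss", "cry", "alone", "grief"}

def category_rate(text: str, category: set) -> float:
    """Return the percentage of words in `text` that fall in `category`."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return 100 * sum(w in category for w in words) / len(words)

sample = "I cry when I am alone; the loss still feels sad."
print(f"Sadness rate: {category_rate(sample, sadness_words):.1f}%")
```

The same counting logic applies to any construct: swap in a different word set and the function reports how heavily a text draws on that register.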

Jaeger et al. (2014) examined the associations between trauma-related experiences (e.g., PTSD, depression, anxiety) and the content of the narratives written by trauma patients. The researchers found significant differences between daily and trauma-related narratives in the use of cognitive-mechanism words (e.g., cause, know, ought) and negative emotion words (e.g., hate, worthless, enemy). There were also strong associations between the words that patients used and the severity of their trauma. The approach and outcomes in Jaeger et al. (2014) were similar to those employed by Figea and colleagues.

A perk of the LIWC software is that it allows individuals to develop their own specialized dictionaries and to import those dictionaries into LIWC to analyze language use for evidence of the target constructs. When individuals express sadness, they use words like sad, loss, cry, alone (Pennebaker & King, 1999). Sadness is part of a person’s emotion register. Can we apply this analytic approach to metacognition and ask, What is the discourse of metacognition? As instructors, how do the ways we talk about teaching reflect a metacognitive register – i.e., words that reflect an understanding of cognitive functioning, learning, limitations, self-regulation, monitoring, scaffolding, and so on? How do the ways we talk about students, classrooms, homework, and student collaboration mirror metacognitive understanding and processing? Current technology allows us to begin exploring these questions. Following the model provided by Figea (2016) and Figea et al. (2016), one place to start might be this Improve With Metacognition (IWM) forum. Published scholarship on metacognition would be another source of texts with which to train a machine to detect key metacognitive indicators. Human coders would code sentences in a sample of the texts as involving or not involving metacognition. These classifications would be used to train a machine. After training, the machine would be tested on a new sample of texts.
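As a sketch of how that train-then-test procedure might look in code (all sentences and labels below are hypothetical, and the scikit-learn model choice is mine, not Figea’s), consider:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical human-coded sentences: 1 = metacognitive, 0 = not.
# A real study would use thousands of coded sentences, not four.
sentences = [
    "I noticed my usual strategy was failing, so I switched approaches.",
    "I planned how to study and checked my understanding as I went.",
    "The mitochondrion is the powerhouse of the cell.",
    "The exam is scheduled for Friday at noon.",
]
labels = [1, 1, 0, 0]

# Hold out half of the coded sample for testing, mirroring the
# train-then-test procedure described above.
train_x, test_x, train_y, test_y = train_test_split(
    sentences, labels, test_size=0.5, random_state=0, stratify=labels)

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_x, train_y)

# Accuracy on coded sentences the model has not seen.
print(model.score(test_x, test_y))
```

The point of the sketch is the workflow, not the model: human codes supply the training signal, and accuracy on held-out text tells us whether the machine has learned anything generalizable about the metacognitive register.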

Development of a metacognitive register is subject to the same constraints as any good scholarship. The developers need to be experts in the area of metacognition, and they need to have a clear grasp of how metacognition works. The linguistic analysis dictionary that they develop needs to be accurate and comprehensive. It needs to be a team effort – one individual cannot do it alone. The dictionary needs to be tested for construct validity, internal consistency, and reliability across a variety of participants and contexts. In spite of the challenges inherent in the task, a ready analytic tool for metacognition could help advance the application of this powerful suite of cognitive processes in classrooms.

 

References

Figea, L. (2016). Machine learning for affect analysis on white supremacy forum. Retrieved from https://uu.diva-portal.org/smash/get/diva2:955841/FULLTEXT01.pdf

Figea, L., Kaati, L., & Scrivens, R. (2016). Measuring online affects in a white supremacy forum. In 2016 IEEE Conference on Intelligence and Security Informatics (ISI). doi:10.1109/ISI.2016.7745448

Pennebaker, J. W., & King, L. A. (1999). Linguistic styles: Language use as an individual difference. Journal of Personality and Social Psychology, 77(6), 1296-1312.

Tausczik, Y. R., & Pennebaker, J. W. (2010). The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29(1), 24-54.


The Mutual Benefits of Metacognition for Faculty and Students

by Dr. Marc Napolitano, U. S. Air Force Academy

I recently hosted a faculty discussion circle meant to serve as a capstone to the 2016-17 school year. As such, I thought that structuring this discussion around the theme of “reflection” would be most appropriate; after all, what better time than the end of term to reflect upon one’s teaching? Still, even as I announced and planned this event, I wrestled with the question of whether I was doing a disservice to the all-important process of reflection by framing it as an “end of term” activity (as opposed to a perpetual activity that one undertakes continuously over a long period of time). Reflection invariably involves turning inward, but it does not necessarily involve looking backward (despite the fact that the literal meaning of “reflect” is “to bend or turn (something) back”).

Upon reviewing several readings to share with the faculty in the discussion circle, I noted an excellent blog post by Maryellen Weimer on Faculty Focus. In this piece, Weimer stresses how carving out time for reflection benefits college faculty in four overarching ways:

  • Integration: reflection can help us to “connect the dots” between different experiences that define our teaching.
  • Taking stock: reflection can help us to put things in perspective (especially in the case of challenging experiences).
  • Lifelong learning: reflection is intimately connected with lifelong learning and allows us to continue growing.
  • Private space: reflection allows us to turn inward and carve out a place/space for ourselves.

None of these four examples is restricted to contemplating the past (indeed, 2 and 4 deal with the present, and 3 looks toward the future).

In light of my concerns, I was very pleased when the faculty who constituted our discussion circle steered the conversation toward sustained metacognition, as opposed to restricting the discussion to retrospective reflections. Not only did this strategy allow for a more dynamic discussion about past, present, and future, but it likewise reinforced the importance of taking a process approach to both teaching and learning. As a group, we agreed that by taking the time to plan out what we are doing in the classroom, monitoring our own progress, and assessing the results of our endeavors, we invariably grow as teachers; students who take a similarly deliberate approach to learning often cultivate a parallel sense of progress and development.

Indeed, as our conversation progressed, I was fascinated to note how what was being said about metacognition seemed to apply equally to both faculty and students – teachers and learners. For example, on the most basic level, one faculty member pointed out how metacognition prompts him to strive toward better teaching because it helps him cultivate analytical insight into what he does well and what he needs to improve upon. Such insight is likewise vital to students regarding their learning processes, though as a group, we agreed that it is oftentimes necessary for faculty to carve out time for (and to model) metacognition for our pupils. Students rarely gravitate toward it instinctively. One professor noted that she frequently asks her students to explain to her “How are you trying to learn?” and that most of her pupils are struck dumb the first time they consider the matter, for they have never before taken the time to consider learning as a process. The professor’s insightful “how?” question again made us think about the overlap between teaching and learning, and we discussed the value of asking ourselves, “How do you explain this subject to someone with no familiarity with or understanding of it?” Again, if we do not turn inward and think about processes, we run the risk of skipping vital steps that our students will need to take – steps that we, as experts, may take for granted – as they begin their journey in the discipline (and their journey toward lifelong learning).

One final parallel that we noted during our discussion was that metacognition promoted a vitalizing adaptability in both us and our students. By utilizing metacognition and considering how different contexts require us to employ different processes, we can develop a wide repertoire of pedagogical skills and methods for imparting the knowledge/aptitudes that constitute our disciplines. Similarly, students must be given opportunities to consider how different contexts shape and reshape the methods and processes that define their learning; they too should be encouraged to develop a broad set of learning strategies that they can utilize in a variety of contexts.

References

Weimer, M. (2011). Making time for reflection. Faculty Focus. Retrieved from https://www.facultyfocus.com/articles/teaching-professor-blog/making-time-for-reflection/


The First Instinct Fallacy: Metacognition Helps You Decide to Stick With It or Revise Your Answer

By Aaron S. Richmond, Ph. D., Metropolitan State University of Denver

When giving guidance to students on how to take tests in your class, do you tell your students to always go with their first answer (go with their gut), to always revise their answers, or that it depends on the question? Because many of you are fans of metacognition, you are likely wise and choose the latter – it depends – and you would be correct. However, most students and many teachers would choose “go with your gut instinct,” otherwise known as the First Instinct Fallacy (Kruger, Wirtz, & Miller, 2005). In this well-known article, Kruger and colleagues found (in four separate experiments) that when students change their answers, they typically change from incorrect to correct answers; that students underestimate the number of changes from incorrect to correct; and that they overestimate the number of changes from correct to incorrect. Ironically, but not surprisingly, because students like to “go with their gut,” they also tend to be very hesitant to switch their answers and to regret doing so, even when the switch produces the correct answer. However, what Kruger and colleagues did not investigate was the role that metacognition may play in the First Instinct Fallacy.

The [First] Instinct Fallacy: The Metacognition of Answering and Revising During College Exams

In two recent studies, Couchman et al. (2016) investigated the mediating effects that metacognition may have on the First Instinct Fallacy. The procedure in both studies required students to complete a normal multiple-choice exam, indicate their confidence in each answer (whether they knew it or guessed), and indicate whether or not they changed their initial answer. Consistent with Kruger et al.’s (2005) results, Couchman and colleagues found that students more often change their initial response from incorrect to correct than the reverse. What was interesting is that when students thought they knew the answer and didn’t change it, they were significantly more likely to get the answer correct (indicating higher metacognition). When students guessed and didn’t change their answer, they were significantly more likely to get the answer incorrect (indicating low metacognition). Moreover, compared to questions students thought they knew, when students revised questions on which they had guessed, they chose the correct answer significantly more often. In other words, students did better when they guessed and changed their answer than when they thought they knew the answer and changed it. These results suggest that students were using the metacognitive construct of cognitive monitoring to deliberately choose, on a question-by-question basis, when to revise their answers and when to stick with their gut.

Moral of the Story: Real-Time Metacognitive Monitoring is Key to Avoiding the First Instinct Fallacy

As demonstrated in Couchman and colleagues’ results, when students metacognitively monitor their knowledge and performance on a question-by-question basis, they perform better. Metcalfe (2002) called this adaptive control – focusing on processes that you can control in order to improve performance. Koriat et al. (2004) suggest that instead of general reflection on performance, in-the-moment, item-by-item assessment of performance may be more productive and effective.

So, you were correct in telling your students “it depends,” but as a practitioner, what do you do to facilitate students’ ability to increase the metacognitive skills of adaptive control and monitoring? Couchman and colleagues suggested that teachers instruct their students to simply indicate a judgment of confidence for each question on the test (either a categorical judgment such as low vs. medium vs. high confidence or a 0-100 confidence scale). Then, if students are low in their confidence, instructors should encourage them to change or revise their answer. However, if student confidence is high, they should consider not changing or revising their answer. Interestingly enough, this must be done in real time, because if students make this confidence judgment post-assessment (i.e., at a later time), they tend to be overconfident and inaccurate in their confidence ratings. Thus, the answer to the First Instinct Fallacy is – like most things – complicated. However, don’t just respond with a simple “it depends” – even though you are correct in this advice. Go a step further and explain and demonstrate how to improve adaptive control and cognitive monitoring.
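A minimal sketch of that advice in code follows; the 0-100 scale matches the suggestion above, but the 50-point cutoff is my illustrative assumption, not a value from Couchman and colleagues.

```python
# Toy per-question confidence judgments on the 0-100 scale suggested above;
# the 50-point cutoff is an illustrative assumption, not a value derived
# from Couchman and colleagues' data.
REVISE_BELOW = 50

def revision_advice(confidence: int) -> str:
    """Real-time rule: low confidence -> consider revising the answer,
    high confidence -> consider keeping the first answer."""
    if confidence < REVISE_BELOW:
        return "low confidence: consider revising this answer"
    return "high confidence: consider keeping your first answer"

for question, confidence in [("Q1", 30), ("Q2", 85)]:
    print(question, "->", revision_advice(confidence))
```

The value of the rule lies less in the cutoff itself than in forcing the confidence judgment at the moment of answering, before hindsight inflates it.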

References

Couchman, J. J., Miller, N. E., Zmuda, S. J., Feather, K., & Schwartzmeyer, T. (2016). The instinct fallacy: The metacognition of answering and revising during college exams. Metacognition and Learning, 11(2), 171-185. doi:10.1007/s11409-015-9140-8

Kruger, J., Wirtz, D., & Miller, D. T. (2005). Counterfactual thinking and the first instinct fallacy. Journal of Personality and Social Psychology, 88(5), 725-735.

Koriat, A., Bjork, R. A., Sheffer, L., & Bar, S. K. (2004). Predicting one’s own forgetting: The role of experience-based and theory-based processes. Journal of Experimental Psychology: General, 133, 643-656.

Metcalfe, J. (2002). Is study time allocated selectively to a region of proximal learning? Journal of Experimental Psychology: General, 131, 349–363.


Keep Calm and Improve with Metacognition: reflecting on three years of reflecting

John Draeger, SUNY Buffalo State

As Lauren and Aaron have recently noted, Improve with Metacognition (IwM) is now three years old. The site has become a space for collaboration and conversation around a range of issues loosely coming under the heading of ‘metacognition.’ My thinking about the nature of metacognition has shifted since we launched the site. I began thinking about thinking and reflecting on reflecting, but because of conversations on the site, I have come to use the term ‘metacognition’ to refer to awareness of a process (self-monitoring) and the use of that awareness to make changes in behavior (self-regulation). I’d like to take a moment to reflect on how IwM has helped me improve in three areas of my life with greater self-monitoring and self-regulation.

First, I like to think that I’ve always been the sort of teacher who encourages his students to think about their thinking. I confess, however, that my involvement with IwM has made me aware of my shortcomings with respect to developing my students’ metacognition. While I had been pretty good at nudging students to think carefully about content, I had consistently missed opportunities to invite students to explicitly reflect on the efficacy of their learning strategies. For example, I took time in class to help students learn to annotate their reading, but I did not often teach them how to monitor whether these strategies were working and find alternatives when they did not. My efforts to adapt my Just-in-Time teaching strategies to be more metacognitive (Draeger, 2014, 2015, 2016) represent one of my attempts to make meaningful adjustments based on a growing awareness of my teaching practice.

Second, I am an everyday writer. I am up early most mornings working on one project or another. From that point of view, writing a blog post of 500-1000 words should have been a piece of cake. As I started blogging, however, I quickly became aware of the need to think about audience, style, and accessibility in ways that I had not before. I have learned some lessons in the last three years, and I am still making adjustments as I work to find “blog-sized” topics and refine my “blog voice.” I have grown as a writer because blogging for IwM has forced me to think more carefully about my craft. Further, I have found joy in writing in this short format. Much like taking a day trip to recharge your batteries, my excursions into the blogging space take me off my normal path in ways that rejuvenate my other scholarly endeavors and bring fresh perspective.

Third, I had not thought through the role of blog space editor prior to IwM, but I’ve been delighted by regular interactions with metacognitive bloggers from around the United States (and indeed the world). Lauren, Aaron, and I regularly offer feedback to site contributors. I enjoy the opportunity to kick around ideas each week. This is, in part, because I am a nerd and relish indulging in new ideas. It is, in part, because I enjoy the writing process, and this role gives me a front row seat as I watch scholars mold their ideas. It is, in part, because I enjoy the back and forth of intellectual banter. And it is, in part, because I like knowing that I am part of a growing community of metacognitive scholars. I find that my work with the IwM community crops up in all sorts of places and informs my interactions with others, both professionally and personally.

As I reflect on the last three years, I believe there will always be room for me to grow as a teacher, writer, and scholar. But I want to thank the IwM community for prompting me to think more carefully about these areas of my life. Improved awareness has led me to make subtle changes and these changes have led to improved performance. As we move into our fourth year together as an IwM community, I am coming to trust that I can keep calm, carry on, and improve with metacognition.

 

References

Draeger, J. (2014). “Just-in-Time for Metacognition.” Retrieved from https://www.improvewithmetacognition.com/just-in-time-for-metacognition.

Draeger, J. (2015). “Using Just-in-Time assignments to promote metacognition.” Retrieved from https://www.improvewithmetacognition.com/using-just-in-time-assignments-to-promote-metacognition.

Draeger, J. (2016). “Fine-tuning Just-in-Time assignments to encourage metacognition.” Retrieved from https://www.improvewithmetacognition.com/fine-tuning-just-time-assignments-encourage-metacognition/

 


Glimmer to Glow: Creating and Growing the Improve with Metacognition Site

by Lauren Scharff, Ph.D., U. S. Air Force Academy *

It’s been three years since Improve with Metacognition (IwM) went live, but the glimmer of the idea started more than a year prior to that, and we still consider it a work in progress. The adventure started with a presentation on metacognition that Aaron Richmond and I gave at the Southwestern Psychological Association (SWPA) convention in 2013. We both had independently been working on projects related to metacognition, and decided to co-present in the teaching track of the conference. We had good attendance at the session and an enthusiastic response from the audience. I made the suggestion of forming some sort of online community in order to continue the exchange of ideas, and passed around a sign-up sheet at the end of the session.

I have to say that my initial idea of an online community was very limited in scope: some sort of online discussion space with the capability to share documents. I thought it would be super quick to set up. Well, the reality was not quite so easy (lol) and our ambitions for the site grew as we discussed it further, but with help from some friends we got it going just in time to unveil it at the SWPA 2014 convention. Along the way I pulled in our third co-creator, John Draeger, who helped shape the site and presented with us at the 2014 convention.

As Aaron mentioned in his reflection last week, during the past three years we have shared information about the site at a variety of conferences, both within the United States and beyond. The response has always been positive, even if fewer people than we’d like take the next step and sign up for updates or write guest contributions. One common line of questioning has been, “This is fantastic! I am interested in doing something similar on the topic of X. How did you get it going?”

We do hope that IwM can serve as a model for other collaboration sites, so here are a few things that stand out for me as I reflect on our ongoing efforts and the small glow we have going so far.

  • Partnerships are essential! John, Aaron, and I have some different skill sets and areas of expertise relevant to running the site, and our professional networks reach different groups. Further, with three of us running it, when life gets nuts for one of us, the others can pick up the slack. I can’t imagine trying to set up and maintain a site like IwM all on my own.
  • Practice metacognition! The three of us periodically join together in a Skype session to reflect on what seems to be working (or not) and share ideas for new features, collaboration projects, etc. We use that reflection to self-regulate our plans for the site (awareness plus self-regulation → metacognition). Sometimes we’ve had to back off on our initiatives and try new strategies because the initial effort wasn’t working as we’d hoped. A long-time saying I’m fond of is, “the only way to coast is downhill.” Any endeavor, even if wildly successful at first, will require some sort of ongoing effort to keep it from coasting downhill.
  • Be open and provide an environment that supports professional development! (And realize this requires time and effort.) We want to encourage broad involvement in the site and provide opportunities for a wide variety of people interested in metacognition to share their ideas and efforts. We also hope to have a site that is viewed as legitimate and professional. This balancing act has been most apparent with respect to the blog posts, because not everyone has strong writing skills. And we believe that even those with strong writing skills can benefit from feedback. Thus, we provide feedback on every submitted post, sometimes suggesting only minor tweaks and sometimes suggesting more substantial revisions. The co-creators even review each other’s drafts before they are posted. As anyone who provides feedback on writing assignments or reviews journal articles knows, this process is a labor of love. We learn a lot from our bloggers – they share new ideas and perspectives that stimulate our own thinking. But providing the appropriate level of feedback so as to clearly guide revisions without squashing enthusiasm is sometimes a challenge. Almost always, at least two of the co-creators review each blog submission, and we explicitly communicate with each other prior to sending the feedback, sometimes combined and sometimes separate. That way we can provide a check on the tone and amount of feedback we send. Happily, we have received lots of thanks from our contributors, and we have had no cases in which a submission was withdrawn after receiving our feedback.

Upon further reflection, my overall point is that maintaining a quality blog, resource, and collaboration site requires more than just getting people to submit pieces and posting articles and other resources. We hadn’t fully realized the level of effort required when we started, and we have many new ideas that we still hope to implement. But, on so many levels all the efforts have been worthwhile. We believe we have a fantastic (and growing) collection of blogs and resources, and we have had several successful collaboration projects (with more in the works).

We welcome your suggestions, and if you have the passion and time to help us glow even brighter, consider joining us as either a collaboration-consultant or as a guest blogger.

Lauren

* Disclaimer: The views expressed in this document are those of the author and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.


The Great, The Good, The Not-So-Good of Improve with Metacognition: An Exercise in Self-Reflection

By Aaron S. Richmond, Ph. D., Metropolitan State University of Denver

Recently, Lauren, John, and I reflected on and discussed our experiences with Improve with Metacognition (IwM). We laughed (no crying) and found (at least I did) that our experiences were rich and rewarding. As such, we decided that each of us would write a blog post on our experience and self-reflection with IwM. Therefore, I’m up. When thinking about IwM, the theme that kept surfacing in my mind is that we are Great, Good, and on a few things – Not-So-Good.

The Great

Oh, how can I count the ways in which IwM is Great? Well, by counting. In reflecting on what we have accomplished, it became apparent that here at IwM, we have been highly productive in our short existence. Specifically, we have published over 200 blog posts along with resources on metacognition measures, videos, instruction, curated research articles, and teaching metacognition (see our new call for Teaching with Metacognition). We have created a space for collaborators to gather and connect. We have engaged in our own research projects. We have had over 35 contributors from all over North America and a few from beyond, ranging from preeminent scholars in the fields of metacognition and SoTL to graduate students writing their first blog post. Speaking for Lauren and John, I can only hope that this explosion of productivity and high-quality research and writing continues at IwM.

The Good

Ok, it is not just Good – this is just another thing that is great. IwM has produced some amazing blog posts. I can’t review them all because, this time, I will keep to my word count, but I would like to highlight a few insightful posts that resonated with me. First, Ed Nuhfer recently wrote a blog titled Collateral Metacognitive Damage (2017, February). The title is amazing in itself, but Ed also extols the use of self-assessments, explains why the approach to and perspective on self-assessment matter most (be the little engine that could vs. the little engine that couldn’t), and provides a marvelous self-assessment tool (http://tinyurl.com/metacogselfassess). I have already shared this with my students and colleagues. Second, one of the topics I would never have thought of was Stephen Chew’s blog on Metacognition and Scaffolding Student Learning (2015, July). I have used scaffolding (and still do) throughout all of my courses; however, I never considered that by over-scaffolding I could reduce my students’ ability to calibrate (to know when they know or don’t know something). That is, by providing too much scaffolding, I may cause students to become highly overconfident and to overestimate their knowledge and skill. Third, Chris Was wrote about A Mindfulness Perspective on Metacognition (2015, October). I have been begrudgingly and maybe curmudgeonly resistant to mindfulness. As such, I was skeptical, even though I know how great Chris’ research is. Well, Chris convinced me of the value of mindfulness and its connection to metacognition. Chris said it best: “It seems to me that if metacognition is knowledge and control of one’s cognitive processes and training in mindfulness increases one’s ability to focus and control awareness in a moment-by-moment manner, then perhaps we should reconsider, and investigate the relationship between mindfulness and metacognition in education and learning.” There are literally dozens and dozens of other blog posts that I have incorporated into both my teaching and research. The work done at IwM is not merely good, it is great!

The Not-So-Good

IwM has been a labor of love. Speaking for myself, the work that has been done is amazing, exhausting, invigorating, productive, and fulfilling. However, what I believe we have been Not-So-Good at is getting the word out. That is, considering that there are over 200 blog posts, resources, curated research articles, collaborations, and more, I believe that one of the things we are struggling with is spreading the gospel of metacognition. Despite the fact that Lauren, John, and I have travelled across the globe (literally) promoting IwM at various conferences, too few people know about the good work being done. Moreover, notwithstanding our 258 email subscribers, I feel (passionately) that we can do better. I want other researchers and practitioners not only to benefit from the work we’ve done but also to contribute to new IwM blog posts, resources, research, and collaborations.

As I do with all my blogs, I will leave you with an open-ended question: What can we do to spread the word of the Great and Good work here at IwM?

Please give me/us some strategies or go out and help spread the word for us.

References

Chew, S. (2015, July). Metacognition and scaffolding student learning. Retrieved from https://www.improvewithmetacognition.com/metacognition-and-scaffolding-student-learning/

Nuhfer, E. (2017, February). Collateral metacognitive damage. Retrieved from https://www.improvewithmetacognition.com/collateral-metacognitive-damage/

Was, C. (2015, October). A mindfulness perspective on metacognition. Retrieved from https://www.improvewithmetacognition.com/a-mindfulness-perspective-on-metacognition/


What I Learned About Metacognition from Cooking Farro

by Stephen Chew, Ph.D., Samford University,  slchew@samford.edu

I like to take my wife out for dinner, but sometimes she insists on going to a place that doesn’t feature a drive-through lane. That’s fine with me because it gives us a chance to see what is trendy in the food world. A few years ago, my wife ordered a salad made with quinoa. We’d vaguely heard about quinoa before but had never tried it. We really liked it for its nutty taste. If you don’t know, quinoa (typically pronounced KEEN-wah in English and kee-NO-ah in Spanish) is a grain that was first cultivated in the Andes several thousand years ago and has become quite popular for its nutritional value. After we tried it, I decided to buy some and cook it myself. I found it in the store, and next to it was another grain I had only vaguely heard of, farro. Farro (pronounced either FAY-roh or FAR-oh) is also an ancient grain, but it originated in the Mediterranean region around 10,000 years ago. I figured if I was going to try one ancient grain I might as well try another, so I bought them both. Little did I know that cooking them would be an adventure in good and bad metacognition.

First, I cooked the quinoa, and it turned out fine. Next, I tried the farro, and that is where I ran into problems. I followed the directions on the package, but then I realized I had no idea how to tell if the farro was properly cooked. Unlike with quinoa, I’d never eaten it before, so I had no concept of what the desired end result was supposed to look or taste like. Was it supposed to have a mushy, al dente, or crunchy texture? I had no idea. Looking at photos and videos of cooked farro didn’t help much. There was nothing in the instructions about how to tell if it was done. For quinoa, I had already eaten some that was, presumably, expertly prepared. Furthermore, the cooking instructions had the helpful note that cooked quinoa becomes translucent and the germ bursts out in a spiral pattern. I had been able to check for that when I cooked it. No such luck with farro. As a result, my wife and I had to decide if we liked farro based on whatever version of it I had cooked.

Now how does this story relate to metacognition? For effective metacognition, students must accurately judge how close or far their understanding is from the desired end goal. How can they do that if they have no concept (or an inaccurate concept) of the desired end goal? Consider self-regulated learning, which incorporates metacognition (Zimmerman, 1990). Pintrich (2004) makes explicit the necessity of students understanding the desired outcome for successful learning when he states that all models of self-regulated learning “assume that there is some type of goal, criterion, or standard against which comparisons are made in order to assess whether the learning process should continue as is or if some type of change is necessary” (p. 387). I’ve certainly made the mistake of believing students understood what the desired outcome of an assignment or activity was only to find out later (usually on the exam or final paper) that they did not understand the goal at all. I know what I mean when I tell them to use critical thinking or employ sound research methods or develop sound arguments, but I can’t assume that they know it unless I teach them what I mean and how to recognize when they have achieved it.

Failure to teach students the desired level of understanding is a consequence of the curse of expertise. Because of our expertise, we tend to overestimate our ability to explain concepts thoroughly (Fisher & Keil, 2016) and underestimate how difficult the concepts are for students to learn (Hinds, 1999). Fortunately, demonstrating to students what the desired understanding or end goal is for a concept is something we can accomplish through formative assessments such as think-pair-shares, “clicker” questions, and worked examples. We can assess their understanding of a concept using a low-stakes activity before the high-stakes assessment and demonstrate both the end result we are looking for and the strategies we use to achieve it. Not only are such formative assessments useful for students to monitor their understanding, they are also useful for helping us calibrate our teaching according to their understanding.

Recently I read the autobiography of Eric Ripert, a renowned chef. He makes the same point about the importance of understanding the desired outcome in recounting his development as a master chef.

Through repetition and determination to be great (or at least better than good), I began to understand the sauces I was preparing. I started to allow myself to feel my way through them, not just assemble them by rote. I knew when a sauce I had made was delicious—perfectly balanced and deeply flavored. (Ripert & Chambers, 2016, p. 215)

We must make sure students know what the desired goal is and how to recognize when they have achieved it in order to enable effective metacognition. It is a lesson I learned from cooking farro that I now apply to my teaching.

References

Fisher, M., & Keil, F. C. (2016). The curse of expertise: When more knowledge leads to miscalibrated explanatory insight. Cognitive Science, 40, 1251-1269. doi: 10.1111/cogs.12280

Hinds, P. J. (1999). The curse of expertise: The effects of expertise and debiasing methods on predictions of novice performance. Journal of Experimental Psychology: Applied, 5, 205-221. doi: 10.1037/1076-898X.5.2.205

Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16, 385-407. doi: 10.1007/s10648-004-0006-x

Ripert, E., & Chambers, V. (2016). 32 yolks: From my mother’s table to working the line. New York: Random House.

Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational Psychologist, 25, 3-17. doi: 10.1207/s15326985ep2501_2


Joining Forces: The Potential Effects of Team-Based Learning and Immediate Feedback Assessment Technique on Metacognition

by Aaron S. Richmond, Ph.D., Metropolitan State University of Denver

As a standalone assessment tool, the Immediate Feedback Assessment Technique (IF-AT) has been demonstrated to affect student learning and students’ perceptions of the teacher (e.g., Brosvic et al., 2006; Slepkov & Shiell, 2014) and possibly to improve metacognition (see Richmond, 2017). However, can IF-AT be combined with a cooperative learning activity such as Team-Based Learning (TBL) to enhance metacognition as well?

To partially answer this question, several researchers suggest that the IF-AT may be used effectively with TBL (Carmichael, 2009; Hefley & Tyre, 2012; Ives, 2011). For example, you could first form teams, give each team an exam, have the team discuss and debate the answers, and then have the team decide on its answer together. If students within a team cannot come to a consensus on a question, you may allow them to write an “appeal” to turn in a separate answer. Click on Figure 1 for a video on how to use IF-AT combined with TBL. IF-AT may also be used in dyads to allow students to discuss correct and incorrect answers: students read a question, discuss the candidate answers, and then cooperatively make a decision, with the IF-AT providing immediate feedback. A third way, suggested by Ives (2011), is a two-stage group quiz: individual students write weekly quiz questions (first stage), and teams then take quizzes composed of the students’ questions (second stage). However, the question then becomes, can the combination of TBL and IF-AT instructional strategies improve metacognition?

Figure 1. Team-Based Learning Using IF-AT. 

The Interplay Among IF-AT, TBL, and Metacognition
As I argued previously (Richmond, 2017), IF-AT may improve students’ metacognition; however, by adding TBL, what metacognitive processes and skills might improve? I see several metacognitive benefits that may occur when combining these two instructional strategies.

First, the combination of IF-AT and TBL may increase students’ metacognitive awareness. For instance, test anxiety may be reduced in a group setting when using IF-AT (Ives, 2011) because students have the opportunity to debate the answers, hear from others, build consensus, and share responsibility. As the awareness of, and conscious effort to reduce, test anxiety is part of metacognitive awareness, the combination of TBL and IF-AT may make this process more salient.

Second, using TBL with IF-AT may also increase students’ calibration (i.e., the accuracy of knowing when you do or do not know something). In a cooperative learning activity such as TBL, students are either reinforced in their correct knowledge through the process of debating and discussing answers OR confronted with their incorrect knowledge by interacting with team members. Consequently, their assessment (calibration) of their knowledge should become more accurate through this process. For example, if one team member accurately identifies a correct answer and another team member (who started with the incorrect answer) observes this, the latter may reflect on their answer, determine why and how they arrived at the incorrect answer, and adjust both their future study strategies and their subsequent estimations of their knowledge. Or an individual team member could consistently arrive at the correct answer yet always underestimate his or her knowledge; this type of student may gain confidence and become more accurately calibrated.
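Calibration in this sense has a simple quantitative reading. Below is a minimal sketch of one common index, the mean gap between confidence judgments and actual outcomes; the quiz data are invented purely for illustration and are not drawn from any study cited here.

```python
# A 10-question quiz: judged probability of being correct vs. actual outcome.
# These numbers are invented purely to illustrate the arithmetic.
confidence = [0.9, 0.8, 0.6, 0.9, 0.5, 0.7, 0.8, 0.9, 0.6, 0.7]
correct    = [1,   1,   0,   0,   1,   1,   1,   0,   1,   1]

# Mean signed bias: positive = overconfident, negative = underconfident.
bias = sum(c - o for c, o in zip(confidence, correct)) / len(correct)
print(f"Calibration bias: {bias:+.2f}")  # +0.04 here, i.e., nearly calibrated
```

A student whose bias shrinks across the semester is, in the sense used above, becoming better calibrated.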

Third, by combining TBL and IF-AT, there may also be an increase in metacognitive, cognitive, and learning strategy skills. That is, as team members share how, where, what, and why they studied, other team members may incorporate these strategies into their quiver of learning strategies (especially if the team member who suggested a strategy was correct). For example, one team member may explain the elaborative strategy they used effectively to study, and other team members listen and incorporate elaboration into their repertoire of strategies. Or a team member may consistently get questions wrong and share the strategy he or she uses (e.g., cramming and rehearsal); other team members observe this, realize that the strategy does not appear to work very well, and subsequently rarely use it themselves (we could only wish 🙂).

Based on the above examples, it does seem likely that the combined use of TBL and IF-AT may improve various metacognitive skills.

Concluding Thoughts and The Hallmark of Good Assessments—Evidence
As a SoTL scholar, I would be remiss not to investigate the evidence supporting or refuting the efficacy of IF-AT and TBL. A handful of studies demonstrate the advantage of using TBL and IF-AT to increase academic performance and enjoyment of class (e.g., Carmichael, 2009; Haberyan, 2007). The combination of IF-AT and TBL has also been demonstrated to stimulate small-group discussion and to identify and correct content misconceptions (Cotner, Baepler, & Kellerman, 2008). However, there appears to be a gap in the research. Specifically, several research questions arise:

  1. Does the combination of IF-AT and TBL increase metacognitive awareness?
  2. Does the combination of IF-AT and TBL increase the accuracy of a student’s calibration?
  3. Does the combination of IF-AT and TBL increase a student’s repertoire of cognitive and learning strategies?
  4. What other metacognitive processes may be enhanced by using IF-AT in a TBL setting?

As I mentioned in my first blog on IF-AT (Richmond, 2017) and here, I think there are enormous SoTL research opportunities for investigating the effects of IF-AT and TBL to improve metacognition. This, invariably, leads to the proverbial phrase: A little knowledge is a dangerous thing—so get to work!

Please follow me on Twitter: @AaronSRichmond

References
Carmichael, J. (2009). Team-based learning enhances performance in introductory biology. Journal of College Science Teaching, 38(4), 54–61.

Clark, M. C., Nguyen, H. T., Bray, C., & Levine, R. E. (2008). Team-based learning in an undergraduate nursing course. Journal of Nursing Education, 47, 111–117.

Cotner, S., Baepler, P., & Kellerman, A. (2008). Scratch this! The IF-AT as a technique for stimulating group discussion and exposing misconceptions. Journal of College Science Teaching, 37(4), 48.

Haberyan, A. (2007). Team-based learning in an industrial/organizational psychology course. North American Journal of Psychology, 9, 143–152.

Hefley, T., & Tyre, A. J. (2012). Favorable team scores under the team-based learning paradigm: A statistical artifact? RURALS: Review of Undergraduate Research in Agricultural and Life Sciences, 6(1), 1. Retrieved from http://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1043&context=rurals

Ives, J. (2011). Two-stage group quizzes part 1: What, how and why. Science Learnification: Teaching and learning in the sciences with a focus on physics education research (PER) from the trenches.  Retrieved from https://learnification.wordpress.com/2011/03/23/two-stage-group-quizzes-part-1-what-how-and-why/

Richmond, A. S. (2017, February 24). Scratch and win or scratch and lose? Immediate Feedback Assessment Technique. Retrieved from https://www.improvewithmetacognition.com/scratch-win-scratch-lose-immediate-feedback-assessment-technique/

Slepkov, A. D., & Shiell, R. C. (2014). Comparison of integrated testlet and constructed-response question formats. Physical Review Special Topics - Physics Education Research, 10(2), 020120.


Does a Machine Have Metacognition?

by Roman Taraban, Ph.D.,  Texas Tech University

In the movie Arrival, the character Louise Banks is portrayed as a linguist who can decipher an alien language. For much of this task, Louise and colleagues are doing pattern matching, trying to establish a correspondence between English and the alien language. A critical piece of the plot is in the interpretation given to the translation of the alien message “offer weapon.” Louise’s competitors interpret this as “use weapon” and propose to attack and destroy the aliens. Alternatively, Louise considers whether there might be an alternative interpretation for weapon, like “tool” or “technology.” From a metacognitive perspective, we might describe the competitors as thinking at a cognitive level, interpreting the phrase literally and acting accordingly. Louise, we could say, acted metacognitively, questioning whether the literal cognitive process was sufficient. Throughout the movie, Louise questions the sufficiency of her cognitive resources for the task at hand, and figures out how to overcome her limitations. In the end, metacognition saves the day.

Normally, we think of metacognition as a valuable add-on to everyday thinking. The metacognitive individual in a sense transcends his or her own limitations. The person recognizes the limitations of memory storage and speed of processing, and the value of external memory, spaced practice, planning, and so on. With this recognition of the limits of memory and processing comes a search for and discovery of strategies for managing cognition. This “higher-order” processing is metacognitive, and in the movie Arrival, Louise Banks is our metacognitive hero.

Although we are inclined to attribute metacognition to bright individuals, like Louise Banks, can we dismiss the possibility that metacognition can exist in “dumb” machines – dumb in the sense that they do not have human-like understanding? Intelligent machines, like computers, process patterns mechanically. Does a computer need to experience metacognition like a human in order for the process to be regarded as metacognitive? Is a jolt of adrenalin a necessary part of the recognition process signaling to us that we should monitor and check our calculations? The present blog is not about some distant aliens, but about a smart machine that is widely used in many different applications today. The machine is IBM’s Watson.

There are clearly some areas in which today’s computers do not need to be metacognitive. Humans can hold only 7 ± 2 chunks of information in short-term memory. An intelligent system like IBM’s Watson https://www.ibm.com/watson/developercloud/nl-classifier.html has 15 terabytes of cache memory and processes at 80 teraflops, so neither short-term memory capacity nor processing speed is an issue. Metacognitive processes for recognizing and preserving short-term memory would seem to be pointless, as would many of the metacognitive resource-management strategies that humans depend on. Would IBM Watson need to grab a pencil and jot a phone number onto a scrap of paper? Not likely.

There may be other ways, though, that machines could exhibit metacognitive behaviors. For instance, a machine like IBM Watson might know that humans are limited in how much information they can process in a unit of time. As a metacognitive strategy, Watson might control and monitor the rate at which it verbalizes in conversation. Watson might change its linguistic register when conversing with a young child https://www.youtube.com/watch?v=vqjndtS8jQU . Watson could attempt to choose an appropriate theme with specific speakers, like Bob Dylan. In a commercial with Dylan, Watson wisely chose to talk about Dylan https://www.youtube.com/watch?v=oMBUk-57FGU . Watson apparently can monitor and modulate its own behavior depending on the context, goal, and particular interaction.

What about monitoring its own resources? If we gave Watson a set of test questions, it is not likely that Watson would reason about them metacognitively like a human. For example, Watson would not acknowledge that word problems are more difficult than straight calculations and therefore tackle the calculations first, as a human test-taker might. However, it is not difficult to imagine situations in which Watson could reason metacognitively about its own processing and plan, control, and monitor those processes. For instance, recognizing that in the context of a crisis certain information is more critical, Watson could modify the order in which information is compiled and provided to, say, paramedics at the scene of a disaster. This would involve prioritizing specific information, queueing it up in a specific order, delivering it, and monitoring its effective transfer to the paramedics.
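In programming terms, that last scenario is essentially priority scheduling with delivery monitoring. Here is a minimal sketch; the message types, priority ranks, and the send stand-in are all hypothetical, invented for illustration rather than drawn from any actual Watson API.

```python
import heapq

# Hypothetical criticality ranks for a disaster scene (lower = more urgent).
PRIORITY = {"victim_vitals": 0, "hazard_alert": 1, "route_to_hospital": 2,
            "supply_inventory": 3}

def send(kind, body):
    """Stand-in for a real delivery channel; returns an acknowledgement."""
    print(f"[{kind}] {body}")
    return True

def dispatch(messages, max_retries=3):
    """Deliver the most critical messages first and monitor their transfer."""
    queue = [(PRIORITY[kind], kind, body) for kind, body in messages]
    heapq.heapify(queue)
    while queue:
        rank, kind, body = heapq.heappop(queue)
        for _ in range(max_retries):
            if send(kind, body):  # monitor: only move on once acknowledged
                break

dispatch([("supply_inventory", "20 units saline remaining"),
          ("victim_vitals", "pulse 130, BP falling")])
# Prints victim_vitals first, supply_inventory second.
```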

The irony, perhaps, is that Watson is not exhibiting “transcendent” behavior, which is how we might view metacognition in humans. Instead, Watson is simply carrying out mechanical computations, which, in a sense, are like any other computations that Watson carries out. The existence of machines like Watson should prompt us to ask whether our own metacognitive ruminations may also simply be computational processes. Perhaps the sense of metacognition involving “higher-order” thinking, the self-pride we take in thinking about thinking, is an add-on, an epiphenomenon on which actual metacognitive processing in no way depends. In any case, the answer to the question of whether computers can be designed to use metacognitive strategies, to plan and modulate behaviors depending on circumstances, and to monitor the success of their efforts, may well be “yes.”


Scratch and Win or Scratch and Lose? Immediate Feedback Assessment Technique

by Aaron S. Richmond, Ph.D., Metropolitan State University of Denver

When prepping my courses for this spring semester, I was thinking about how I often struggle with providing quick and easy feedback on quiz and exam performance to my students. I expressed this to my colleague, Dr. Anna Ropp (@AnnaRopp), and she quickly suggested that I check out the Immediate Feedback Assessment Technique (IF-AT) by Epstein Educational Enterprises. When she showed me the IF-ATs, I was intrigued and thought I might as well give it a try—so I ordered some. IF-AT instantaneously provides performance feedback to learners by having students scratch off what they believe to be the correct answer on a multiple-choice quiz or exam. See Figures 1a and 1b for examples of completed IF-ATs. Students find out whether their chosen answer is correct simply by scratching it (see question 1 in Figure 1a), and they can scratch additional answers until they uncover the correct one (see question 2 in Figure 1a). You may also use the IF-AT to provide partial credit for sequenced attempts (e.g., scratch one choice for full credit if correct, then scratch a second choice, and maybe a third, for decreasing amounts of partial credit); see question 6 in Figure 1b for an example. Epstein and colleagues suggest that IF-AT not only assesses student learning but also teaches at the same time. However, it occurred to me that IF-AT is not only an assessment and teaching tool; it is also a great opportunity to increase metacognition.

Figure 1. (a) Completed and unscored 10-question IF-AT. (b) Completed 10-question IF-AT, scored by student and teacher.
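The sequenced partial-credit scheme is easy to formalize. The sketch below is mine, not Epstein Educational Enterprises’ official scoring; the credit fractions are illustrative assumptions that instructors would set themselves.

```python
def ifat_score(num_scratches, credit_by_attempt=(1.0, 0.5, 0.25)):
    """Credit earned on one IF-AT question.

    num_scratches: scratches made up to and including the correct answer
    (1 = correct on the first try). credit_by_attempt holds illustrative
    credit fractions for the first, second, and third attempts.
    """
    if num_scratches < 1:
        raise ValueError("a question must be scratched at least once")
    if num_scratches <= len(credit_by_attempt):
        return credit_by_attempt[num_scratches - 1]
    return 0.0  # too many scratches: no credit

# A 10-question quiz, one point per question.
scratches = [1, 2, 1, 1, 3, 4, 1, 2, 1, 1]
total = sum(ifat_score(s) for s in scratches)
print(f"Quiz score: {total} / {len(scratches)}")  # Quiz score: 7.25 / 10
```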

How to Use IF-AT
Epstein and colleagues suggest that IF-AT is fair, fast, active, fun, and respectful, and that it builds knowledge. The IF-AT scratch assessments come in 10-, 25-, or 50-item test options with five different orders of correct answers. The Epsteins suggest that IF-AT can be used in many ways: for chapter tests; individual study (at home or in class); quizzes; pyramidal-sequential-process quizzing; exams; team-based and cooperative learning; study-buddy learning; and, most importantly, as a feedback mechanism (see http://www.epsteineducation.com/home/about/uses.aspx for further explanation).

There are several metacognitive functions of the IF-AT (although the Epsteins do not couch their claims in this term). First, the Epsteins argue that you can arrange your IF-AT so that the first question (and the immediate feedback of the correct answer) can be used in a pyramidal sequential process. That is, the correct answer to the first question is needed to answer subsequent questions because it is foundational knowledge for the remaining questions. This sequential process allows the instructor and student to pinpoint where the student’s knowledge of the integrated content broke down. This implicit modeling of a student’s metacognitive knowledge should be made explicit: by explaining to your students how the exam is set up, you enable them to use cues and knowledge from previous questions and answers to assist their understanding of subsequent questions and answers. This is a key step in the learning process.

Second, the IF-AT may be used in a team-based way (i.e., distributed cognition) by forming groups, problem solving, and having the team discover the correct answer. IF-AT may also be used in dyads to allow students to discuss correct and incorrect answers: students read a question, discuss the candidate answers, then cooperatively make a decision and receive immediate feedback.

Third, IF-AT may be used to increase cognitive and metacognitive strategies. That is, by providing feedback immediately, students (if you explicitly instruct them to do so) may adjust their cognitive and metacognitive strategies for future study. For example, if a student used flashcards to study and did poorly, they may want to adjust how they construct and use flashcards (e.g., with distributed practice).

Finally, and most importantly, IF-AT may improve students’ metacognitive regulation via calibration (i.e., the accuracy of knowing when you do and don’t know the answer to a question). That is, by providing immediate feedback, students may become more accurate in their judgments of knowing, or even their feelings of knowing, based on the feedback.

Is it Scratch to Win or is Scratch to Lose?
As described, by using the IF-AT, students get immediate feedback on whether they answered a question correctly and what the correct answer is. From a metacognitive perspective, this is outstanding. Students can calibrate (i.e., adjust their estimations of and confidence in knowing an answer) in real time, engage in distributed cognition, get feedback on their choice of cognitive and metacognitive strategies, and increase their cognitive monitoring and regulatory control. These are all WIN, WIN, WIN byproducts. HOWEVER, is there a downside to instantaneously knowing you are wrong? That is, does IF-AT provoke emotional reactivity that students must regulate? As I have been implementing the IF-AT, I have noticed (anecdotally) that about 1 in 10 students react negatively, and it seems to increase their test anxiety. Presumably, the other 90% of the students love it and appreciate the feedback. Yet, what about the 10%? Does IF-AT stunt or hinder their performance? Again, my esteemed colleague Dr. Anna Ropp and I engaged in some scholarly discourse on this question, and Anna suggested that I make the first 3-5 questions on each IF-AT “soft-ball” questions, that is, questions that 75% of students will answer correctly, so that students’ fears and anxiety are reduced to some degree. Another alternative is to provide students with a copy of the test or exam and let them rank order or weight their answers (see Chris Was’s IwM blog, 2014, on how to do this). Despite these two sound suggestions, there still may be an affective reaction that could be detrimental to student learning. To date, there has been no research investigating this issue, and there are only a handful of well-designed studies of IF-AT itself (e.g., Brosvic et al., 2006; Dihoff et al., 2005; Epstein et al., 2002, 2003; Slepkov & Shiell, 2014). As such, more well-constructed and well-executed empirical research is needed (Hint: all you scholars looking for a SoTL project…here’s your sign).

Concluding Thoughts and Questions for You
After investigating, reflecting on, and using IF-AT in my classroom, I think that it is a valuable tool in your quiver of assessments to increase metacognition—but of course not an educational panacea. Furthermore, in my investigation of this assessment technique, more questions (as usual) popped up about the use of IF-AT. So, I will leave you with a charge and a call to help me answer the questions below:

  1. Are there similar assessments that provide immediate feedback that you use? If so, are they less expensive or free?
  2. If you are using IF-AT, what is your favorite way to use it?
  3. Do you think IF-AT could cause substantial test anxiety? If so, to whom and to what level within your classes?
  4. How could IF-AT be used as a tool for calibration more efficiently? Or, in what other ways do you think IF-AT can be used to increase metacognition?
  5. I think there are enormous opportunities for SoTL on IF-AT (e.g., its effects on calibration, distributed cognition, cognitive monitoring, conditional knowledge of strategy use, etc.), which means we all have some more work to do. 🙂

References
Brosvic, G. M., Epstein, M. L., Dihoff, R. E., & Cook, M. J. (2006). Acquisition and retention of Esperanto: The case for error correction and immediate feedback. The Psychological Record, 56(2), 205.

Dihoff, R. E., Brosvic, G. M., Epstein, M. L., & Cook, M. J. (2005). Adjunctive role for immediate feedback in the acquisition and retention of mathematical fact series by elementary school students classified with mild mental retardation. The Psychological Record, 55(1), 39.

Epstein, M. L., Brosvic, G. M., Costner, K. L., Dihoff, R. E., & Lazarus, A. D. (2003). Effectiveness of feedback during the testing of preschool children, elementary school children, and adolescents with developmental delays. The Psychological Record, 53(2), 177.

Epstein, M. L., Lazarus, A. D., Calvano, T. B., & Matthews, K. A. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. The Psychological Record, 52(2), 187.

Slepkov, A. D., & Shiell, R. C. (2014). Comparison of integrated testlet and constructed-response question formats. Physical Review Special Topics - Physics Education Research, 10(2), 020120.

Was, C. (2014, August). Testing improves knowledge monitoring. Improve with Metacognition. Retrieved from https://www.improvewithmetacognition.com/testing-improves-knowledge-monitoring/


Collateral Metacognitive Damage

Why Seeing Others as “The Little Engines that Could” beats Seeing Them as “The Little Engines Who Were Unskilled and Unaware of It”

by Ed Nuhfer, Ph.D., Professor of Geology, Director of Faculty Development and Director of Educational Assessment, California State Universities (retired)

What is Self-Assessment?

At its root, self-assessment registers as an affective feeling of confidence in one’s ability to perform in the present. We can become consciously mindful of that feeling and begin to distinguish the feeling of being informed by expertise from the feeling of being uninformed. The feeling of ability to rise in the present to a challenge is generally captured by the phrase “I think I can….” Studies indicate that we can improve our metacognitive self-assessment skill with practice.

Quantifying Self-Assessment Skill

Measuring self-assessment accuracy lies in quantifying the difference between felt competence to perform and a measure of the actual competence demonstrated. However, what at first glance appears to be simple subtraction has proven to be a nightmarish challenge for researchers trying to present data clearly and interpret it accurately. I speak of this “nightmare” with personal familiarity. Some colleagues and I recently summarized different kinds of self-assessments, self-assessment’s relationship to self-efficacy, the importance of self-assessment to achievement, and the complexity of interpreting self-assessment measurements (Nuhfer and others, 2016; 2017).

Can we or can’t we do it?

The children’s story The Little Engine That Could is a well-known tale of the power of positive self-assessment. The throbbing “I think I can, I think I can…” and the success that follows offer an uplifting view of humanity’s ability to succeed. That view is close to the traits of the “Growth Mindset” of Stanford psychologist Carol Dweck (2016). It is certainly more uplifting than an alternative title, The Little Engine that Was Unskilled and Unaware of It, which predicts a disappointing ending to “I think I can, I think I can….” The dismal idea that our possible competence is capped by what nature conferred at birth is a close analog to the traits of Dweck’s “Fixed Mindset,” which her research revealed as toxic to intellectual development.

As writers of several Improve with Metacognition blog entries have noted, “Unskilled and Unaware of It” are key words from the title of a seminal research paper (Kruger & Dunning, 1999) that offered one of the earliest credible attempts to quantify the accuracy of metacognitive self-assessment. That paper noted that some people were extremely unskilled and unaware of it. Less than a decade later, psychologists were claiming: “People are typically overly optimistic when evaluating the quality of their performance on social and intellectual tasks” (Ehrlinger and others, 2008). Today, laypersons cite the “Dunning-Kruger Effect” and often use it to label any individual or group that they dislike as “unskilled and unaware of it.” We saw the label being applied wholesale in the 2016 presidential election, not just to the candidates but also to the candidates’ supporters.

Self-assessment and vulnerability

Because self-assessment is about taking stock of ourselves rather than judging others, using the Dunning-Kruger Effect to label others is already on shaky ground. But what are the odds that those we are tempted to label as “unskilled and unaware of it” actually merit the label? While the consensus in the psychology literature seems to indicate that the odds are good, our investigation of the numeracy underlying that consensus indicates otherwise (Nuhfer and others, 2017).

We think that nearly two decades of studies concluding that people are “…typically overly optimistic…” replicated one another because they all relied on variants of a unique graphic introduced in the seminal 1999 paper. These graphs generate artifact patterns, from both actual data and random numbers, that match the patterns expected from a Dunning-Kruger Effect, and the artifacts are easily mistaken for expressions of actual human self-assessment traits.

After we gained some understanding of the hazards presented by the devilish nature of self-assessment measures, our quantitative results showed that people, in general, have a surprisingly good awareness of their capabilities (Nuhfer and others, 2016, 2017). About half of our studied populace of over a thousand students and faculty accurately self-assessed their performance within ±10 percentage points (ppts), and about two-thirds proved accurate within ±15 ppts. About 25% might qualify as having inadequate self-assessment skills (errors greater than ±20 ppts), but only about 5% of our academic populace might merit the label “unskilled and unaware of it” (overestimating their abilities by 30 ppts or more). The odds seem high against a randomly selected person being seriously “unskilled and unaware of it,” and very high against the label being validly applicable to a group.
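The arithmetic behind these categories is simple subtraction plus binning. Here is a minimal sketch using the cutoffs reported above; the function names and category labels are mine, not those of Nuhfer and others (2016, 2017).

```python
def self_assessment_error(felt_pct, demonstrated_pct):
    """Signed error in percentage points (ppts); positive = overestimate."""
    return felt_pct - demonstrated_pct

def categorize(error_ppts):
    """Bin a signed error using the cutoffs reported in this post."""
    if error_ppts >= 30:
        return "unskilled and unaware of it (overestimate by 30+ ppts)"
    if abs(error_ppts) > 20:
        return "inadequate self-assessment (beyond ±20 ppts)"
    if abs(error_ppts) <= 10:
        return "good self-assessment (within ±10 ppts)"
    if abs(error_ppts) <= 15:
        return "adequate self-assessment (within ±15 ppts)"
    return "marginal self-assessment (±15 to ±20 ppts)"

# Example: a student feels 85% competent but demonstrates 62%.
err = self_assessment_error(85, 62)   # +23 ppts
print(err, "->", categorize(err))     # inadequate self-assessment
```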

Others often rise to the expectations we have of them.

Consider the collective effects of people’s accepting beliefs about themselves and others as “unskilled and unaware of it.” This negative perspective can predispose an organization to accept, as a given, that people are less capable than they really are. Further, for those of us with power, such as instructors over students or tenured peers over untenured instructors, we should become aware of a term called “gaslighting.” In gaslighting, our negatively biased actions or comments may result in taking away the self-confidence of others who accept us as credible, trustworthy, and important to their lives. This type of influence can lead to lower performance, thus seeming to substantiate the initial negative perspective. When gaslighting is deliberate, it constitutes a form of emotional abuse.

Aren’t you CURIOUS yet?

Wondering about your self-assessment skills and how they compare with those of novices and experts? Give yourself about 45 minutes and try the self-assessment instrument used in our research at <http://tinyurl.com/metacogselfassess>. You will receive a confidential report if you furnish your email at the end of completing that self-assessment.

Several of us, including our blog founder Lauren Scharff, will be presenting the findings and implications of our recent numeracy studies in August, at the Annual Meeting of the American Psychological Association in Washington DC. We hope some of our fellow bloggers will be able to join us there.

References

Dweck, C. (2016). Mindset: The New Psychology of Success. New York: Ballantine.

Ehrlinger, J., Johnson, K., Banner, M., Dunning, D., and Kruger, J. (2008). Why the unskilled are unaware: Further explorations of absent self-insight among the incompetent. Organizational Behavior and Human Decision Processes 105: 98–121. http://dx.doi.org/10.1016/j.obhdp.2007.05.002

Kruger, J. and Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology 77: 1121–1134. http://dx.doi.org/10.1037/0022-3514.77.6.1121

Nuhfer, E. B., Cogan, C., Fleisher, S., Gaze, E., and Wirth, K. (2016). Random number simulations reveal how random noise affects the measurements and graphical portrayals of self-assessed competency. Numeracy 9(1): Article 4. http://dx.doi.org/10.5038/1936-4660.9.1.4

Nuhfer, E. B., Cogan, C., Fleisher, S., Wirth, K., and Gaze, E. (2017). How random noise and a graphical convention subverted behavioral scientists’ explanations of self-assessment data: Numeracy underlies better alternatives. Numeracy 10(1): Article 4. http://dx.doi.org/10.5038/1936-4660.10.1.4


Promoting academic rigor with metacognition

By John Draeger (SUNY Buffalo State)

A few weeks ago, I visited Lenoir-Rhyne University to talk about promoting academic rigor, and I was reminded of the importance of metacognition. College faculty often worry that students are arriving in their courses increasingly underprepared, and they often find it difficult to maintain the appropriate level of academic rigor. Faced with this challenge, some colleagues and I developed a model for promoting academic rigor. According to this model, promoting academic rigor requires actively engaging students in meaningful content with higher-order thinking at the appropriate level of expectation for a given context (Draeger, del Prado Hill, Hunter, and Mahler, 2013). The model (see Figure 1) can be useful insofar as it prompts reflection and frames conversation. In particular, faculty members can explore how to improve student engagement, how to uncover a course’s most meaningful elements, how to determine the forms of higher-order thinking most appropriate for a course, and how to modulate expectations for different student groups (e.g., majors, non-majors, general education, honors). There is nothing particularly magical about this model. It is one of many ways that college instructors might become more intentional about various aspects of course design, instruction, and assessment. However, I argue that promoting academic rigor in this way requires metacognition.

In a previous post, Lauren Scharff and I argued that metacognition can be used to select the appropriate teaching and learning strategy for a given context (Draeger & Scharff, 2016). More specifically, metacognition can help instructors “check in” with students and make meaningful “in the moment” adjustments. Similarly, engaging students in each of the components of the rigor model can take effort, especially because students often need explicit redirection. If instructors are monitoring student learning and using that awareness to make intentional adjustments, then they are more likely to encourage students to actively engage meaningful content with higher-order thinking at the appropriate level of expectation.

Consider, for example, a course in fashion merchandising. Students are often drawn to such a course because they like to shop for clothes. This may help with enrollment, but the goal of the course is to give students insight into industry thinking. In particular, students need to shift from a consumer mentality to the ability to apply consumer behavior theory in ways that sell merchandise. What would it mean to teach such a course with rigor? The model of academic rigor sketched above recognizes that each of the components can occur independently and not lead to academic rigor. For example, students can be actively engaged in content that is less than meaningful to the course (e.g., regaling others with shopping stories) and students can be learning meaningful content without being actively engaged (e.g., rote learning of consumer behavior theory). Likewise, students can be actively and meaningfully engaged with or without higher-order thinking. The goal, however, is to have multiple components of the model occur together, i.e. to actively engage students in meaningful content with higher-order thinking at the appropriate level of expectations. In the case of fashion merchandising, a professor might send students to the mall to have them use consumer behavior theory to justify why a particular rack of clothes occupies a particular place on the shop floor. If they can complete this assignment, then they are actively engaged (at the mall) in meaningful content (consumer behavior theory) with higher-order-thinking (applying theory to a rack of clothes). Metacognition requires that instructors monitor student learning and use that awareness to make intentional adjustments. If a fashion merchandising instructor finds students lapsing into stories about their latest shopping adventures, then the instructor might redirect the discussion towards higher-order-thinking with meaningful content by asking the students to use consumer behavior theory to question their assumptions about their shopping behaviors.

Or consider a course in introductory astronomy (Brogt & Draeger, 2015). Students often choose such a course to satisfy their general education requirements because they think it has something to do with star gazing and it is preferable to other courses, like physics. Much to their surprise, however, students quickly learn that astronomy is physics by another name. Astronomy instructors struggle because students in introductory astronomy often lack the necessary background in math and science. The trick, therefore, is to make the course rigorous when students lack the usual tools. One solution could be to use electromagnetic radiation (a.k.a. light) as the touchstone concept for the course. After all, light is the most salient evidence we have for occurrences far away. As such, it can figure into conversations about the scientific method, including scientific skepticism about various astronomical findings. Moreover, even if students cannot do precise calculations, it might be enough that they be able to estimate the order-of-magnitude of distant stars. Astronomy instructors have lots of great tools for actively engaging students in order-of-magnitude guesstimates. These can be used to scaffold students into understanding how answers to order-of-magnitude estimates involving light can provide evidence about distant objects. If so, then students are actively engaging meaningful content with higher-order thinking at a level appropriate to an introductory course satisfying a general education requirement. Again, metacognition can help instructors make intentional adjustments based on “in the moment” observations about student performance. If, for example, an instructor finds that students “check out” once mathematical symbols go up on the board, the instructor can redouble efforts to highlight the importance of understanding order-of-magnitude and can make explicit the connection between previous guesstimate exercises and the symbols on the board.

If tools for reflection (e.g., a model of academic rigor) help instructors map out the most salient aspects of a course, then metacognition is the mechanism by which instructors navigate that map. If so, then I suggest that promoting academic rigor requires metacognition. It is important to understand how we can help students actively engage in meaningful course content with higher-order thinking at the appropriate level of expectation for a given course. However, consistently shepherding students to the intersection of those elements requires metacognitive awareness and self-regulation on the part of the instructor.

References

Brogt, E. & Draeger, J. (2015). “Academic Rigor in General Education, Introductory Astronomy Courses for Nonscience Majors.” The Journal of General Education, 64 (1), 14-29.

Draeger, J. (2015). “Exploring the relationship between awareness, self-regulation, and metacognition.”  Retrieved from https://www.improvewithmetacognition.com/exploring-the-relationship-between-awareness-self-regulation-and-metacognition/

Draeger, J., del Prado Hill, P., Hunter, L. R., Mahler, R. (2013). “The Anatomy of Academic Rigor: One Institutional Journey.” Innovative Higher Education 38 (4), 267-279.

Draeger, J. & Scharff, L. (2016). “Using Metacognition to select and apply appropriate teaching strategies.” Retrieved from https://www.improvewithmetacognition.com/using-metacognition-select-apply-appropriate-teaching-strategies/


Teacher, Know Thyself (Translation: Use Student Evaluations of Teaching!)

by Guy Boysen, Ph.D., McKendree University

I’ll call him Donald. I am willing to bet that you know a Donald too. Students fled from Donald’s classes in droves. His was the pedagogy of narcissism – “I am the Lord thy teacher and thou shall have no classroom activities unfocused on me!” Donald’s grading system was so subjective, vindictive, and Byzantine as to be barely defensible. Enrollment in his classes always followed the same pattern: intro course full of students who did not know any better at the start of the semester and then decimation by the end of the semester; advanced seminars empty except for a few adoring students with Stockholm syndrome. Asked about his student evaluations, Donald would say “My seminar evals are good, but I don’t even look at my intro evals anymore – they don’t know about teaching.”

Donald calls to mind the classic metacognitive phenomenon of being unskilled and unaware of it (Kruger & Dunning, 1999; Lodge, 2016; Schumacher, Akers, & Taraban, 2016). This is something teachers see in students all the time: bad students overestimate their abilities and therefore don’t work to improve. As Donald illustrates, the phenomenon applies to teachers as well.

There are a number of wide-ranging characteristics that make someone a model teacher (Richmond, Boysen, & Gurung, 2016), but the use of student evaluations to improve teaching is one that has a strong metacognitive component. Student evaluations provide teachers with feedback so that they can engage in metacognitive analysis of their pedagogical skills and practices. Based on that analysis, goals for improvement can be set and pursued.

Recommendations for Using Student Evals

How should teachers use student evaluations to develop metacognitive awareness of their pedagogical strengths and weakness? Several suggestions can be found in An Evidence-Based Guide for College and University Teaching: Developing the Model Teacher (Richmond, Boysen, & Gurung, 2016).

Set goals for improvement and follow through with them.

Have you ever gotten on a scale and not liked the number staring back at you? Did you just get on and off the scale repeatedly expecting the number to change? No? Well, that trick doesn’t work in teaching either. Collecting student evaluations without using them to set and implement goals for improvement is like a diet that only consists of repeated weigh-ins – the numbers will not change without the application of direct effort. Use your student evaluations, preferably in collaboration with a mentor or teaching expert, to set manageable goals for change. 

Select the correct assessment tool.    

Wouldn’t it be great if we could select our own teaching evaluations? Mine might look something like this.

But wait! You can select your own teaching evaluations. Official, summative evaluations may be set at the institutional level, but teachers can implement any survey they want for professional development purposes. Choose wisely, however.

If you are a numbers person, select a well-researched measure that provides feedback across several dimensions of teaching that are relevant to you. Perhaps the best known of these is the Student Evaluation of Educational Quality (SEEQ), which measures teaching quality across nine different factors (Marsh, 1983). The advantages to this type of measure are that the results can be scientifically trusted and are detailed enough to inform goals for improvement.

Not a numbers person? You might ask for written comments from students. Whatever you want to know about your teaching, you can simply ask – believe me, students have opinions! Although analyzing student comments can be laborious (Lewis, 2001), they can offer unequalled richness and specificity. Beware of asking for general feedback, however; general questions tend to elicit general feedback (Hoon, Oliver, Szpakowska, & Newton, 2015). Rather, provide specific prompts such as the following.

  • What should I/we STOP doing in this class?
  • What should I/we START doing in this class?
  • What should I/we CONTINUE doing in this class? 

Don’t wait until the end of the semester.

Imagine if Donald could get feedback from the students who drop his classes. Perhaps he could make pedagogical changes to reach those students before they flee. Guess what, he can! Formative assessment is the key.

Teachers often allow official, end-of-semester student evaluations to serve as their only feedback from students. The problem with this approach is that the feedback comes too late to make midsemester course corrections. This is analogous to the metacognitive importance of providing students with early feedback on their performance. You wouldn’t expect students to succeed in your course if a final exam was the only grade, would you? Well, don’t put yourself in the same position. Model teachers ask for student feedback both at the end of the semester (i.e., summative) and early enough in the semester to make immediate improvements (i.e., formative).

Make changes large and small.

Student evaluations can be used to inform revisions to all levels of pedagogy. Imagine that students report being absolutely bewildered by a concept in your class. Potential responses to this feedback could be to change (a) the time spent on the concept in class, (b) scaffolding of knowledge needed to understand the concept, (c) the availability of study aids related to the concept, (d) the basic instructional technique used to teach the concept, or (e) the decision to even include the concept in the course. For model teachers, student feedback can inform changes large and small.

Conclusion

Every single semester students comment on my evaluations that they want the tests to be multiple choice rather than short answer/essay, and every semester I tell students that I will not be changing the test format because students do not study as hard for multiple-choice tests. Thus, my point is not that model teachers incorporate all student feedback into their courses. However, failure to respond should be a sound and intentional pedagogical choice rather than a Donald-like failure of metacognition – don’t be caught unskilled and unaware.

References

Hoon, A., Oliver, E., Szpakowska, K., & Newton, P. (2015). Use of the Stop, Start, Continue method is associated with the production of constructive qualitative feedback by students in higher education. Assessment & Evaluation in Higher Education, 40, 755-767. doi:10.1080/02602938.2014.956282

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77, 1121-1134.

Lewis, K. G. (2001). Making sense of student written comments. New Directions for Teaching and Learning, 87, 25-32.

Lodge, J. (2016). Hypercorrection: Overcoming overconfidence with metacognition. Retrieved from https://www.improvewithmetacognition.com/hypercorrection-overcoming-overconfidence-metacognition/

Marsh, H. W. (1983). Multidimensional ratings of teaching effectiveness by students from different academic settings and their relation to student/course/instructor characteristics. Journal of Educational Psychology, 75, 150-166. doi:10.1037/0022-0663.75.1.150

Richmond, A. S., Boysen, G. A., & Gurung, R. A. R. (2016). An evidence-based guide for college and university teaching: Developing the model teacher. New York: Routledge.

Schumacher, J. R., Akers, E., & Taraban, R. (2016). Unskilled and unaware: A metacognitive bias. Retrieved from https://www.improvewithmetacognition.com/unskilled-unaware-metacognitive-bias/


New Year Metacognition

by Lauren Scharff, Ph.D., United States Air Force Academy *

Happy New Year to you! This seasonal greeting has many positive connotations, including new beginnings, hope, fresh starts, etc. But, it’s also strongly associated with the making of new-year resolutions, and that’s where the link to metacognition becomes relevant.

As we state on the Improve with Metacognition home page, “Metacognition refers to an intentional focusing of attention on the development of a process, so that one becomes aware of one’s current state of accomplishment, along with the situational influences and strategy choices that are currently, or have previously, influenced accomplishment of that process. Through metacognition, one should become better able to accurately judge one’s progress and select strategies that will lead to success.”

Although this site typically focuses on teaching and learning processes, we can be metacognitive about any process / behavior in which we might engage. A new year’s resolution typically involves starting a new behavior that we might deem to be healthier for us, or stopping an already established behavior that we deem to be unhealthy for us. Either way, some effort is likely to be involved, because if it was going to be easy, we wouldn’t create a resolution to make the change.

Effort alone, however, is unlikely to lead to success. Just like students who “study harder” without being metacognitive about it, people who simply “try hard” to make a change will often be unsuccessful. This is because most behaviors, including learning, are complex. A multitude of situational factors and personal predispositions interact to influence our success in reaching our behavioral goals. Thus, it’s unlikely that a single strategy will work at all times. In fact, persisting with an ineffective strategy will lead to frustration, cynicism, and giving up on one’s resolution.

Now, typically, I am not the sort of person who actually makes new-year resolutions. But this new year presents a new situation for me. I will be on sabbatical and working from home. I have prepared a fairly ambitious list of professional development activities that I hope to accomplish. I know I am capable of each of them. But, I also know that I will be working in an environment with a different group of distractions and without many external deadlines. Instead of committee work, grading, short turn-around taskers, and meetings with students and colleagues preventing me from working on my publications and other professional development activities, I will have a dog with big brown eyes who would love to go for a walk, children who need attention when they’re home from school, and projects at home that I usually can put out of mind when I’m at the office.

My resolution to myself for the coming 6 months of my sabbatical is that I will create a positive work environment for myself and accomplish my list of professional development activities while maintaining a balance with my family and personal goals. I know that I will need a variety of strategies, and that I will need to take time to reflect on the state of my progress and show self-regulation in my choice of strategies at different times. I plan to use a journal to help me with my awareness of the alignment between my daily goals and the activities in which I choose to engage in order to accomplish those goals.[1] This awareness will guide my self-regulation when, inevitably, I get off track. I also plan to make some public commitments and provide updates to my friends and colleagues regarding specific goals I plan to accomplish at specific times, as public commitment provides motivation, often results in social support, and is another way to encourage self-awareness and self-regulation, i.e. metacognition.

I’ll let you know how it goes in 6 months. 🙂  Meanwhile, Happy New Year and all the best to you with your new-year resolutions. Try using the tools of metacognition to help you succeed!

[1] See our preliminary research summary about the effectiveness of instructors using journals to enhance their metacognitive instruction.

* Disclaimer: The views expressed in this document are those of the author and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.


Bringing a Small Gift – The Metacognitive Experience

by Roman Taraban, Ph.D.,  Texas Tech University

In the Christmas song “The Little Drummer Boy,” the young boy brings his humble gift to the “mighty king,” presenting it from the heart. This is an apt situation to bring up at this time of year, for you, too, might have received a small gift. For ’tis the season for metacognition.

John Flavell and others generally describe metacognition as thinking about thinking. More specifically, “Metacognitive knowledge is one’s stored knowledge or beliefs about oneself and others as cognitive agents, about tasks, about actions or strategies, and about how all these interact to affect the outcomes of any sort of intellectual enterprise” (Flavell, 1979, p. 906). Flavell (1979) broadened metacognitive theory to include affect: “Metacognitive experiences are conscious cognitive or affective experiences that occur during the enterprise and concern any aspect of it—often, how well it is going” (p. 906). Affect, as part of metacognitive experiences, is important because if you have the feeling that something is difficult to comprehend, remember, or solve, those feelings may trigger careful metacognitive reflection and changes in goals or strategies (Papaleontiou-Louca, 2008). Nuhfer (2014), in a related vein, affirms the crucial role of affect in developing students’ metacognitive skills: “[A]ttempts to develop students’ metacognitive proficiency without recognizing metacognition’s affective qualities are likely to be minimally effective.”

So what is that gift I mentioned earlier? It’s your end-of-semester evaluations, of course. There we ask students to evaluate and comment on whether the course objectives were specified and followed by the instructor, whether the instructor was an effective teacher, and whether the course was a valuable learning experience. These questions prompt students to think about their thinking in the course. Without prompting, students also spontaneously comment on their affect. And here come the gifts, some of them encouraging, pleasant, and precious as gold. Here are a few examples: “I very much enjoyed the discussions and deeper exploration of the material.” “I felt that the papers pushed me to genuinely consider and critically evaluate the material in a way I may not have otherwise. Thank you for an enjoyable and thought-provoking seminar.” “This has been my favorite psychology class. The work assignments were challenging and (dare I say) fun.”

But sometimes the gift can be a bit disconcerting. There was one December when I unluckily received my course evaluations just before leaving on a family vacation to Las Vegas. I had gone through the semester thinking how wise I was and how well things were going. The students told me otherwise. Yes, they explained why I deserved those low ratings, so they had to think about their metacognitive experience – i.e., what it was like learning the material in my course and how they felt about the process. For a week, I was inconsolable. But the students had gotten my attention. I realized I had become too complacent. I had to think deeply about my thinking about how to organize and deliver the course. I had to engage in metacognitions about teaching. And it wasn’t just about thinking about the knowledge I had and they had (or had not). It was also about the affect – how I felt about the course, myself, and the students, in the context of those metacognitions.

That semester was a gift. Every semester is a gift. But we have to accept the gift for it to be meaningful and make a difference. So…all good tidings for the season – I mean, end of the semester.

References

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. American Psychologist, 34(10), 906-911. doi.org/10.1037/0003-066X.34.10.906

Nuhfer, E. (2014). Self-assessment and the affective quality of metacognition: Part 1 of 2. Retrieved from https://www.improvewithmetacognition.com/self-assessment-and-the-affective-quality-of-metacognition-part-1-of-2/

Papaleontiou-Louca, E. (2008). Metacognition and theory of mind. Newcastle, UK: Cambridge Scholars Publishing.

 


Can Reciprocal Peer Tutoring Increase Metacognition in Your Students?

by Aaron S. Richmond, Ph.D., Metropolitan State University of Denver

How many of you use collaborative learning in your classroom? If you do, do you specifically use it to increase metacognition in your students? If the answer is yes, you are likely building on the work of Hadwin, Jarvela, and Miller (2011) and Schraw, Crippen, and Hartley (2006). For those unfamiliar with collaborative learning, I tend to agree with Slavich and Zimbardo’s (2012) definition: in collaborative learning, students “…tackle problems and questions with peers—especially more knowledgeable peers—insofar as such experiences provide students with opportunities to learn new problem-solving strategies and to debate ideas in a way that challenges their understanding of concepts” (p. 572). There are many ways to use collaborative learning in the classroom: the jigsaw classroom, paired annotations, send-a-problem, think-pair-share, three-step interviews, peer tutoring, numbered heads together, etc. Of particular interest, recent research suggests that reciprocal peer tutoring may be especially useful when your goal is not only to have students learn course material but also to increase their metacognition (De Backer, Van Keer, Moerkerke, & Valcke, 2016).

In their innovative study, De Backer and colleagues (2016) investigated the effects of using reciprocal peer tutoring (RPT) to support and increase metacognitive regulation in higher education. De Backer and colleagues defined RPT as “the structured exchange of the tutor role among peers in groups/pairs…and enables each student to experience the specific benefits derived from providing and receiving academic guidance” (p. 191). Over the course of the semester, students completed eight peer tutoring sessions. All students were trained as tutors, experienced the tutor role, and tutored their peers at least twice. Tutoring sessions were 120 minutes long and occurred outside of class. The tutor’s role was to manage the tutees and promote collaborative learning. During each tutoring session, the tutees were asked to solve a problem related to the class content. Each problem had three specific components:

(1) An outline of learning objectives to guide peers’ discussion to central course-related topics; (2) a subtask aimed at getting familiar with the theme-specific terminology; and (3) a subtask in which students were instructed to apply theoretical notions to realistic instructional cases. (De Backer et al., 2016, p. 193)

The problems presented often did not have clear-cut answers and required considerable cognitive effort. De Backer et al. video recorded all the tutoring sessions and then scored each session on the amount and type of metacognitive regulation exhibited by both tutors and tutees. For example, they looked at students’ ability to orient, plan, monitor, and evaluate. They also measured the level of processing (whether students engaged metacognitive strategies in a shallow or deep way). Appendix D of De Backer et al.’s article provides examples of how to code metacognitive data; see Table 1 for an example of the scoring (De Backer et al., 2016, p. 41). They then scored the frequency of metacognitive regulation that occurred per session.

Table 1. Examples of Lower- and Deep-Level Metacognitive Regulation in Reciprocal Peer Tutoring, from De Backer et al. (2016, pp. 41-42)

Metacognition – Monitoring: Comprehension monitoring

Noting lack of comprehension
  T: “Does everyone understand the outlines of instructional behaviorism?”
  t1: “I still don’t understand the concept of aptitude.”

Checking comprehension by repeating (LL)
  T: “Does everyone agree now that instructional behaviorism and instructional constructivism are opposites?”
  t1: “I think (…) because in behaviorism the instructor decides on everything but constructivism is about learners being free to construct their own knowledge.”
  t2: “Yes, constructivist learners are much more independent and active, not so?”

Checking comprehension by elaborating (DL)
  T: “The behavioristic instructor permanently provides feedback. Who knows why?”
  t1: “Is it not to make sure that learners don’t make mistakes?”
  t2: “Could that also be the reason why they structure the learning materials extensively? And why they don’t like collaborative learning? Because collaborative learning requires spontaneous discussions between students. You cannot really structure it in advance, not so?”

Note. DL = deep learning, LL = low or shallow learning, T = tutor, t1 and t2 = tutees.
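
To make the frequency scoring concrete, here is a minimal sketch in Python (my own illustration, not De Backer et al.’s actual coding protocol) of how coded utterances like those in Table 1 could be tallied per session by strategy and processing level. The category labels and counts are invented for the example.

    # A minimal sketch (my own illustration, not De Backer et al.'s actual
    # coding software) of frequency scoring for coded utterances.
    from collections import Counter

    # Each coded utterance: (session, strategy, depth), where depth is
    # "LL" (low/shallow level) or "DL" (deep level), as in Table 1.
    coded_utterances = [
        (1, "monitoring", "LL"),
        (1, "orientation", "LL"),
        (8, "monitoring", "DL"),
        (8, "evaluation", "DL"),
        (8, "monitoring", "DL"),
    ]

    # Tally how often each (session, strategy, depth) combination occurs.
    frequencies = Counter(coded_utterances)

    for (session, strategy, depth), count in sorted(frequencies.items()):
        print(f"Session {session}: {strategy} ({depth}) occurred {count} time(s)")

De Backer et al. worked from video transcripts and a detailed codebook; the point here is only that, once utterances are coded, per-session frequencies fall out of a simple tally.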

De Backer and colleagues (2016) found that as the semester progressed, students engaged in more and more metacognitive regulatory processes. Specifically, their orientation, monitoring, and evaluation all increased (in general, frequencies were about three times greater at the end of the semester than at the beginning). Planning was the exception: the frequency of planning remained low throughout the semester. Far more interesting, over the course of the semester students decreased their use of shallow or low-level metacognitive strategies and increased their use of deep-level metacognitive strategies. These increases occurred across most types of metacognitive strategies (e.g., orientation, activating prior knowledge, task analysis, monitoring, and evaluation).

As demonstrated by De Backer and colleagues’ (2016) study and the work of other researchers (e.g., King, 1997; De Backer, Van Keer, & Valcke, 2012), RPT and other collaborative learning methods may be useful for increasing students’ metacognitive processes.

Concluding Thoughts and Questions for You

After reading De Backer et al. (2016), I was fascinated by the possible use of RPT in my own classroom. So, I started to think about how to implement it myself. Some questions arose that I thought you might help me with:

  1. How do I specifically scaffold the use of RPT in my classroom? More to the point, what does a successful RPT session look like? Fortunately, De Backer and colleagues provide an appendix to their study (Appendix C) that gives an example of what a tutoring session may look like.
  2. How many tutoring sessions are enough to increase metacognition in my students? De Backer et al. had eight sessions. This would be difficult for me to squeeze into my course planning. Would 3-4 be enough? What do you think? But then not all students could serve as tutors. Do students get more (metacognitively) out of being a tutor vs. a tutee? This is something that De Backer and colleagues did not analyze. (Hint, hint all you folks—SoTL project in the making ;)
  3. De Backer et al. briefly described that the tutors had a 10-page manual on how to be a tutor. Hmm…I don’t know if my students would be able to learn effectively from this. What other simple ways might we use to teach students how to be effective tutors in the context of RPT?
  4. Finally, are you doing anything like De Backer et al.? And if so, do you think it is improving your students’ metacognitive regulation?

References

De Backer, L., Van Keer, H., Moerkerke, B., & Valcke, M. (2016). Examining evolutions in the adoption of metacognitive regulation in reciprocal peer tutoring groups. Metacognition and Learning, 11, 187-213. doi:10.1007/s11409-015-9141-7

De Backer, L., Van Keer, H., & Valcke, M. (2012). Exploring the potential impact of reciprocal peer tutoring on higher education students’ metacognitive knowledge and metacognitive regulation. Instructional Science, 40, 559–588.

Hadwin, A. F., Järvelä, S., & Miller, M. (2011). Self-regulated, co-regulated, and socially shared regulation of learning. In B. J. Zimmerman & D. H. Schunk (Eds.), Handbook of self-regulation of learning and performance (pp. 65–84). New York: Routledge.

King, A. (1997). Ask to think-tell why©: A model of transactive peer tutoring for scaffolding higher level complex learning. Educational Psychologist, 32, 221–235.

Schraw, G., Crippen, K. J., & Hartley, K. (2006). Promoting self-regulation in science education: metacognition as part of a broader perspective on learning. Research in Science Education, 36, 111–139.

Slavich, G. M., & Zimbardo, P. G. (2012). Transformational teaching: Theoretical underpinnings, basic principles, and core methods. Educational Psychology Review, 24, 569-608. doi:10.1007/s10648-012-9199-6


The GAMES Survey: A Tool to Scaffold Metacognitive Practices

by Lauren Scharff, U. S. Air Force Academy

As many of us educators know, an unfortunately large number of students, at both the K-12 and college levels, do not give much thought to how and why they try to learn the way they do, much less demonstrate strong habits of metacognition. Talking in general about metacognition might garner some students’ interest, but without concrete guidance on how to engage in behaviors that support metacognition, students are less likely to develop such practices.

Thus, I was pleased to rediscover the GAMES survey / self-assessment tool created by Marilla Svinicki when I was re-reading her excellent book, Learning and Motivation in the Postsecondary Classroom, as part of a book group at my institution. GAMES stands for:

  • Goal-oriented studying
  • Active studying
  • Meaningful and memorable studying
  • Explain to understand
  • Self-monitor

For each component of the survey, there are five to ten behaviors, and students indicate how likely they are to perform each one on a 5-point scale ranging from “Never” to “Always.” These behaviors are distinct, tangible actions such as:

  • Analyze what I have to do before beginning to study. (Goal-oriented studying)
  • Ask myself questions before, during, and after studying. (Active studying)
  • Make connections between what I am studying and past classes or units. (Meaningful and memorable studying)
  • Discuss the course content with anyone willing to listen. (Explain to understand)
  • Keep track of things I don’t understand and note when they finally become clear and what made that happen. (Self-monitor)

Marilla suggests that the use of such an instrument can help students become more aware of the possibility of self-regulating their learning behaviors. This combination of awareness and self-regulation is key to metacognition, and it is what prompted this blog post.

Through the process of completing the GAMES survey, students are introduced to more than 30 specific behaviors that, taken together, support metacognition about learning. Students can easily observe areas where their engagement is stronger or weaker, and they can focus their efforts where they are weaker, using the list of specific, tangible behaviors as a scaffold to target their activity.
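
As a concrete illustration of that self-scoring, here is a minimal sketch in Python, using invented item responses as placeholders for Svinicki’s actual survey items. It averages each GAMES component and flags the weakest one as a suggested focus area.

    # A minimal sketch, assuming hypothetical responses; the actual GAMES
    # survey items belong to Svinicki (2004).
    from statistics import mean

    # 5-point scale: 1 = "Never" ... 5 = "Always". The numbers below are
    # invented placeholders standing in for one student's ratings.
    responses = {
        "Goal-oriented studying":            [4, 3, 5, 4, 2],
        "Active studying":                   [2, 2, 3, 1, 2],
        "Meaningful and memorable studying": [3, 4, 3, 3, 4],
        "Explain to understand":             [1, 2, 2, 1, 3],
        "Self-monitor":                      [3, 3, 2, 4, 3],
    }

    # Average each component so the student can see relative strengths.
    component_scores = {component: mean(items) for component, items in responses.items()}

    for component, score in sorted(component_scores.items(), key=lambda kv: kv[1]):
        print(f"{component}: {score:.1f} / 5")

    weakest = min(component_scores, key=component_scores.get)
    print(f"Suggested focus area: {weakest}")

The design point, mirroring how Georgia Tech used the survey (see below), is that the output directs a student to one weak behavior cluster to work on, rather than presenting an undifferentiated total score.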

At my institution, the U. S. Air Force Academy, we plan to use the GAMES survey in a current Science of Learning workshop series for students, led by students. Most of the workshop attendees are students who are struggling academically, but we are advertising that, by “studying smarter, not only harder,” students at all levels of academic achievement can improve their learning. We believe the GAMES survey will help students target specific behaviors that have been shown to support deeper learning.

We are not the only institution that has seen value in disseminating the GAMES survey to students. For example, several years ago, Georgia Tech encouraged its use across all sections of their first-year seminar. Importantly, they didn’t simply ask students to complete the survey and leave it at that. They encouraged instructors to help students use the results in a meaningful way, such as by picking a weak behavior and striving to improve it over a two-week period, or by having students journal about the changes they made and how those changes seemed to impact their academic performance.

This survey tool is appropriate across the disciplines and takes only a few minutes for students to complete. Its use, plus a short follow-on activity to encourage meaningful application, would not add a great burden to a faculty member or take much time from normal course activities. But the pay-off could be large for individual students, both in that course and in others, if they transfer the principles into new contexts. It’s definitely worth a try!

——————

Svinicki, M. D. (2004). Learning and motivation in the postsecondary classroom. Bolton, MA: Anker Publishing Co.

If you do not have access to Marilla Svinicki’s book, you can read a short online overview of GAMES on the Association for Psychological Science website (2006) and obtain a PDF copy of the survey online.


Promoting metacognitive reading through Just-in-Time Teaching

by John Draeger, SUNY Buffalo State

In a series of previous posts, I have discussed some of the ways that Just-in-Time Teaching techniques can promote metacognition (Draeger, 2014, 2015, 2016). Just-in-Time Teaching requires students to complete short assignments prior to class; instructors review those assignments before class begins so that they can tailor the lesson to the responses (Novak, Patterson, Gavrin, & Christian, 1999). My introductory philosophy courses typically have 40 students, and my Just-in-Time assignments each involve three short essay questions answered prior to each class session. The questions have a predictable structure — one asks students to explicate a central idea in the reading, one asks them to engage in critical thinking about the reading (e.g., how might the author respond to an issue raised earlier in the semester), and one encourages metacognition (e.g., whether their reading strategy was effective). This post shares my attempts to promote metacognition through Just-in-Time techniques in a larger section of introductory ethics (175 students), and it further explores how Just-in-Time assignments can promote metacognitive reading.

All of the students in my larger introductory ethics course are required to answer the Just-in-Time questions prior to each class via my institution’s learning management system. Because of the large number of students and the short turn-around time, I have adapted the format of these assignments. I typically ask two multiple-choice questions related to the central ideas in the reading. I ask one short essay question encouraging critical thinking about the reading (e.g., how is the current reading related to the previous one?). I also ask one Likert-style question to gauge how confident students are in their understanding of the reading, and one short answer question to encourage them to be metacognitive about their learning process (e.g., what was your reading strategy for this reading? what was your annotation strategy? or what was your strategy for relating the current reading to the previous one?).

Before each class, I review a computer-generated summary of the multiple-choice responses to gauge broad understanding of the material and look for trends. For example, students’ responses to the Likert-style question gauging their confidence often overestimate their actual understanding as determined by the multiple-choice questions, critical thinking essay, and overall course performance. This gap can serve as a conversation starter about their performance in the course. In some cases, I also ask Likert-style questions related to the author’s central thesis. For example, when we read an essay on sexual harassment, I asked students how often they believed sexual harassment occurred, with response options of daily, weekly, monthly, annually, and never. This question became the opening move in our conversation during the next lesson. I shared that 87% reported that sexual harassment occurs daily or weekly. This led to an open-ended discussion about the sorts of behaviors that count as harassment. In this way, Just-in-Time assignments can both inform and facilitate class conversation about the material.
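
For readers curious what that confidence-versus-performance comparison might look like mechanically, here is a minimal sketch in Python with invented student data (my learning management system does the aggregation for me, so this is an illustration, not my actual workflow). It rescales a 1-5 confidence rating to match a 0-1 accuracy score and labels the gap.

    # A minimal sketch, with invented data, comparing students' self-reported
    # confidence (1-5 Likert) to their actual multiple-choice accuracy.
    students = {
        # name: (confidence 1-5, fraction of MC questions answered correctly)
        "student_a": (5, 0.50),
        "student_b": (3, 0.75),
        "student_c": (4, 0.25),
    }

    for name, (confidence, accuracy) in students.items():
        # Rescale confidence to 0-1 so the two measures are comparable.
        gap = (confidence - 1) / 4 - accuracy
        label = ("overconfident" if gap > 0.1
                 else "underconfident" if gap < -0.1
                 else "well calibrated")
        print(f"{name}: confidence {confidence}/5, accuracy {accuracy:.0%} -> {label}")

A per-student readout like this is what turns the pattern of overconfidence into a concrete conversation starter with individual students, rather than a generic observation about the class.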

Just-in-Time assignments can also inform and facilitate conversations about how to become more metacognitive about learning; in this class I focus on reading skills. For example, I recently asked students a Likert-style question about whether they have changed their reading practices since the beginning of the semester, with a follow-up short answer question asking how those practices have changed. 74% of students reported that their reading practices had changed, and a number of interesting themes emerged in their descriptions of those changes. First, many students reported moving to the “next stage” of their reading practice. Students moved (a) from not doing the reading to at least skimming, (b) from skimming until they got bored to finishing the reading, (c) from reading to re-reading, and (d) from re-reading to re-reading with an aim to synthesize the larger themes. These responses also reminded me that not all students are in the same place with respect to their learning practices, so I should not make generalized assumptions, nor assume that one recommendation from me will match every student’s developmental needs. Second, students reported changes in their annotation strategies. They moved from no highlighting to highlighting, and from highlighting to more intentional annotation strategies (e.g., outlining in the margins, summarizing important theses, adding critical questions in the margins). Third, students reported using strategies that we’d previously discussed in class (e.g., reading at different speeds, developing intentional annotations, reading the conclusion first and then reconstructing how the author gets there). Fourth, some students transformed their view of what reading philosophy is really about (e.g., they moved from reading for “information” to looking for the conceptual connections between big ideas). Finally, students reflected on the importance of time management (e.g., devoting more time to the reading task, finding better physical reading environments, finding times in the day when they are more likely to be able to process philosophy).

Responses from the 26% of students who had not changed their reading practices were similarly illuminating. For example, most reported that they recognized a change was in order even if they had not yet managed to make it. They also identified problems with their current reading practices; for example, they said that they waited until the last minute and rushed through Just-in-Time assignments. They recognized the value of intentional annotation and expressed the hope that they would eventually adopt those practices. And some students were able to diagnose why they were struggling (e.g., they quickly lose patience with authors who do not share their point of view). In short, Just-in-Time assignments can promote metacognitive reading by encouraging students to intentionally consider and evaluate their reading techniques, and they can facilitate conversations about alternative reading strategies.

It should come as no surprise that teaching introductory ethics to a section of 175 students differs from teaching a section of 40. However, it is clear that the Just-in-Time Teaching technique is not only viable in a large class, but can also promote metacognition about learning while informing me about students’ level of content understanding. Indeed, teaching a larger section has led me to better ways of encouraging conversations with students about their learning process.

References

Draeger, J. (2014). “Just-in-Time for Metacognition.” Retrieved from https://www.improvewithmetacognition.com/just-in-time-for-metacognition.

Draeger, J. (2015). “Using Just-in-Time assignments to promote metacognition.” Retrieved from https://www.improvewithmetacognition.com/using-just-in-time-assignments-to-promote-metacognition.

Draeger, J. (2016). “Fine-tuning Just-in-Time assignments to encourage metacognition.” Retrieved from https://www.improvewithmetacognition.com/fine-tuning-just-time-assignments-encourage-metacognition/

Novak, G., Patterson, E., Gavrin, A., & Christian, W. (1999). Just-in-time teaching: Blending active learning with web technology. Upper Saddle River, NJ: Prentice Hall.