So what if ‘metacognition’ is vague!

by John Draeger (SUNY Buffalo State)

When Lauren Scharff invited me to join Improve with Metacognition last year, I was only vaguely aware of what ‘metacognition’ meant. As a philosopher, I knew about various models of critical thinking and I had some inkling that metacognition was something more than critical thought, but I could not have characterized the extra bit. In her post last week, Scharff shared a working definition of ‘metacognitive instruction’ developed by a group of us involved as co-investigators on a project (Scharff, 2015). She suggested that it is the “intentional and ongoing interaction between awareness and self-regulation.” This is better than anything I had a year ago, but I want to push the dialogue further.

I’d like to take a step back to consider the conceptual nature of metacognition by applying an approach in legal philosophy used to analyze terms with conceptual vagueness. While clarity is desirable, Jeremy Waldron argues that there are limits to the level of precision that legal discourse can achieve (Waldron, 1994). This is not an invitation to be sloppy, but rather an acknowledgement that certain legal concepts are inescapably vague. According to Waldron, a concept can be vague in at least two ways. First, particular instantiations can fall along a continuum (e.g., actions can be more or less reckless, negligent, excessive, unreasonable). Second, some concepts can be understood in terms of overlapping features. Democracies, for example, can be characterized by some combination of formal laws, informal patterns of participation, shared history, common values, and collective purpose. These features are neither individually necessary nor jointly sufficient for a full characterization of the concept. Rather, a system of government counts as democratic if it has “enough” of the features. A particular democratic system may look very different from its democratic neighbor. This is in part because particular systems will instantiate the features differently and in part because particular systems might be missing some feature altogether. Moreover, democratic systems can share features with other forms of government (e.g., formal laws, common values, and collective purpose) without there being a clear boundary between democratic and non-democratic forms of government. According to Waldron, there can be vagueness within the concept of democracy itself and in the boundaries between it and related concepts.

While some might worry that the vagueness of legal concepts is a problem for legal discourse, Waldron argues that the lack of precision is desirable because it promotes dialogue. For instance, when considering whether some particular instance of forceful policing should be considered ‘excessive,’ we must consider the conditions under which force is justified and the limits of acceptability. Answering these questions will require exploring the nature of justice, civil rights, and public safety. Dialogue is valuable, in Waldron’s view, because it brings clarity to a broad constellation of legal issues even though clarity about any one of the constituents requires thinking carefully about the other elements in the constellation.

Is ‘metacognition’ vague in the ways that legal concepts can be vague? To answer this question, consider some elements in the metacognitive constellation as described by our regular Improve with Metacognition blog contributors. Self-assessment, for example, is a feature of metacognition (Fleisher, 2014; Nuhfer, 2014). Note, however, that it is vague. First, self-assessments may fall along a continuum (e.g., students and instructors can be more or less accurate in their self-assessments). Second, self-assessment is composed of a variety of activities (e.g., predicting exam scores, tracking gains in performance, understanding personal weak spots, and understanding one’s own level of confidence, motivation, and interest). These activities are neither individually necessary nor jointly sufficient for a full characterization of self-assessment. Rather, students or instructors are engaged in self-assessment if they engage in “enough” of these activities. Combining these two forms of vagueness, each of the overlapping features can itself fall along a continuum (e.g., more or less accurate at tracking performance or understanding motivations). Moreover, self-assessment shares features with other related concepts such as self-testing (Taraban, Paniukov, and Kiser, 2014), mindfulness (Was, 2014), calibration (Gutierrez, 2014), and growth mindsets (Peak, 2015). All are part of the metacognitive constellation of concepts. Each of these concepts is individually vague in both senses described above, and the boundaries between them are inescapably fuzzy. Turning to Scharff’s description of metacognitive instruction, all four constituent elements (i.e., ‘intentional,’ ‘ongoing interaction,’ ‘awareness,’ and ‘self-regulation’) are also vague in both senses described above. Thus, I believe that ‘metacognition’ is vague in the ways legal concepts are vague.

However, if Waldron is right about the benefits of discussing and grappling with vague legal concepts (and I think he is), and if the analogy between vague legal concepts and the term ‘metacognition’ holds (and I think it does), then vagueness in this case should be perceived as desirable because it facilitates broad dialogue about teaching and learning.

As Improve with Metacognition celebrates its first birthday, I want to thank all those who have contributed to the conversation so far. Despite the variety of perspectives, each contribution helps us think more carefully about what we are doing and why. The ongoing dialogue can improve our metacognitive skills and enhance our teaching and learning. As we move into our second year, I hope we can continue exploring the rich nature of the metacognitive constellation of ideas.

References

Fleisher, Steven (2014). “Self-assessment, it’s a good thing to do.” Retrieved from https://www.improvewithmetacognition.com/self-assessment-its-a-good-thing-to-do/

Gutierrez, Antonio (2014). “Comprehension monitoring: the role of conditional knowledge.” Retrieved from https://www.improvewithmetacognition.com/comprehension-monitoring-the-role-of-conditional-knowledge/

Nuhfer, Ed (2014). “Self-assessment and the affective quality of metacognition, Part 1 of 2.” Retrieved from https://www.improvewithmetacognition.com/self-assessment-and-the-affective-quality-of-metacognition-part-1-of-2/

Peak, Charity (2015). “Linking mindset to metacognition.” Retrieved from https://www.improvewithmetacognition.com/linking-mindset-metacognition/

Scharff, Lauren (2015). “What do we mean by ‘metacognitive instruction’?” Retrieved from https://www.improvewithmetacognition.com/what-do-we-mean-by-metacognitive-instruction/

Taraban, Roman, Paniukov, Dmitrii, and Kiser, Michelle (2014). “What metacognitive skills do developmental college readers need?” Retrieved from https://www.improvewithmetacognition.com/what-metacognitive-skills-do-developmental-college-readers-need/

Waldron, Jeremy (1994). “Vagueness in Law and Language: Some Philosophical Issues.” California Law Review 83(2): 509-540.

Was, Chris (2014). “A mindfulness perspective on metacognition.” Retrieved from https://www.improvewithmetacognition.com/a-mindfulness-perspective-on-metacognition/



What Do We Mean by “Metacognitive Instruction”?

by Lauren Scharff (U.S. Air Force Academy*) 

Many of you are probably aware of the collaborative, multi-institutional metacognitive instruction research project that we initiated through the Improve with Metacognition site. This project has been invigorating for me on many levels. First, through the process of developing the proposal, I was mentally energized. Several of us had long, thoughtful conversations about what we meant when we used the term “metacognitive instruction” and how these ideas about instruction “mapped” to the concept of “metacognitive learning.” These discussions were extensions of some early blog post explorations, What do we mean when we say “Improve with metacognition”? (Part 1 and Part 2). Second, my involvement in the project led me to (once again) examine my own instruction. Part of this self-examination happened as a natural consequence of the discussions, but it’s also happening in an ongoing manner as I participate in the study as an intervention participant. Good stuff!

For this post, I’d like to share a bit more about our wrangling with what we meant by metacognitive instruction as we developed the project, and I invite you to respond and share your thoughts too.

Through our discussions, we ultimately settled on the following description of metacognitive instruction:

Metacognitive instructors are aware of what they are doing and why. Before each lesson, they have explicitly considered student learning goals and multiple strategies for achieving those goals. During the lesson, they actively monitor the effectiveness of those strategies and student progress towards learning goals. Through this pre-lesson strategizing and during-lesson monitoring, awareness, a key component of metacognition, is developed; however, awareness is not sufficient for metacognition. Metacognitive instructors also engage in self-regulation. They have the ability to make “in-the-moment,” intentional changes to their instruction during the lesson based on a situational awareness of student engagement and achievement of the learning objectives — this creates a responsive and customized learning experience for the student.

One of the questions we pondered (and we’d love to hear your thoughts on this point) is how these different constructs are related and/or distinct. We came to the conclusion that there is a difference between reflective teaching, self-regulated teaching, and metacognitive instruction/teaching.

More specifically, a person can reflect and become aware of their actions and their consequences, but at the same time not self-regulate to modify behaviors and change consequences, especially in the moment. A person can also self-regulate (try a new approach, be intentional in their choice of actions) but not be tuned in to how it’s going in the moment with respect to the success of the effort. (For example, an instructor might commit to a new pedagogical approach because she learned about it from a colleague. She can implement that new approach despite some personal discomfort due to changing pedagogical strategies, but without conscious and intentional awareness of how well it fits her lesson objectives or how well it’s working in the moment to facilitate her students’ learning.) Metacognition combines the awareness and self-regulation pieces and increases the likelihood of successfully accomplishing the process (teaching, learning, or other process).

Thus, compared to other writings we’ve seen, we are more explicitly proposing that metacognition is the intentional and ongoing interaction between awareness and self-regulation. Others have generally made this claim about metacognitive learning without using the terms as explicitly. For example: “Simply possessing knowledge about one’s cognitive strengths or weaknesses and the nature of the task without actively utilizing this information to oversee learning is not metacognitive” (Livingston, 1997). But in other articles on metacognition and on self-regulated learning, it seems that the metacognitive part is the “thinking or awareness” part, while self-regulation is treated as separate.

What do you think?

——————
Livingston, J. A. (1997). Metacognition: An Overview. Unpublished manuscript, State University of New York at Buffalo. http://gse.buffalo.edu/fas/shuell/cep564/metacog.htm

* Disclaimer: The views expressed in this document are those of the authors and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.


Executive Function: Can Metacognitive Awareness Training Improve Performance?

by Antonio Gutierrez, Georgia Southern University

In a recent meta-analysis of 67 research studies that utilize an intervention targeted at enhancing metacognitive awareness, Jacob and Parkinson (in press) argue that metacognitive interventions aimed at improving executive function processes are not as effective at improving student achievement as once believed by scholars and practitioners alike. In essence, the evidence in support of robust effects of these types of interventions in improving achievement is inconclusive. While descriptive research studies continue to report high associations between metacognitive awareness and performance or achievement measures, Jacob and Parkinson argue that the experimental evidence supporting a strong role of metacognitive training in improving student performance is scant. I have recently pondered a similar dilemma with research on the effect of metacognitive monitoring training on students’ performance, confidence judgments, and especially calibration. The literature on these topics converges on the finding that metacognitive monitoring training improves performance and confidence in performance judgments but not necessarily calibration (see e.g., Bol et al., 2005; Gutierrez & Schraw, 2015; Hacker et al., 2008).

While Jacob and Parkinson’s meta-analysis is illuminating, I wonder whether, like the calibration literature, the conclusion that executive function interventions are not as effective at improving achievement may be due to very different conceptualizations of the constructs under investigation. In the case of calibration, the mixed findings may be due to the fact that the metacognitive monitoring interventions were not likely targeting the same thing. For instance, some interventions may have been targeting a reduction in calibration errors (overconfidence and underconfidence), others may have been targeting improvement in calibration accuracy, whereas yet others may have been targeting both, whether intentionally or unintentionally. Because these interventions were targeting different aspects of calibration, it could be that the inconclusive findings were due to a confounding of these various dimensions of calibration … comparing apples to oranges, if you will. Could the lack of robust effects of executive function interventions on achievement be due to a similar phenomenon? What if these studies were not targeting the same executive function processes, in which case they would not be as directly comparable as at first glance? Jacob and Parkinson’s (in press) study may lead some to believe that there is little to be gained in investing time and effort in executive function interventions. However, before we abandon these interventions, perhaps we should consider developing executive function interventions that are more specific and finer grained, such as by targeting very specific aspects of executive function rather than taking a more general approach.
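To make the distinction between those two targets concrete, here is a minimal sketch (my own illustration, not code from any of the cited studies) of how a signed calibration-error measure and an unsigned accuracy measure can come apart for the same set of confidence judgments. An intervention that reduces the signed measure has not necessarily improved the unsigned one:

```python
# Illustrative calibration measures. Confidence judgments and item-level
# performance are both assumed to be on a 0-1 scale.

def calibration_bias(confidence, performance):
    """Signed bias: positive = net overconfidence, negative = net underconfidence."""
    return sum(c - p for c, p in zip(confidence, performance)) / len(confidence)

def calibration_accuracy_error(confidence, performance):
    """Unsigned (absolute) error: lower values = better-calibrated judgments."""
    return sum(abs(c - p) for c, p in zip(confidence, performance)) / len(confidence)

# A student who is overconfident on half the items and underconfident on the
# other half shows near-zero bias yet a large absolute error:
confidence  = [0.9, 0.9, 0.1, 0.1]
performance = [0.5, 0.5, 0.5, 0.5]

print(calibration_bias(confidence, performance))            # ~0: over- and underconfidence cancel
print(calibration_accuracy_error(confidence, performance))  # ~0.4: judgments are still far off
```

The point of the sketch is simply that two interventions “improving calibration” could be optimizing different quantities, which is one way the mixed findings described above could arise.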

References
Bol, L., Hacker, D. J., O’Shea, P., & Allen, D. (2005). The influence of overt practice, achievement level, and explanatory style on calibration accuracy, and performance. The Journal of Experimental Education, 73, 269-290.

Gutierrez, A. P., & Schraw, G. (2015). Effects of strategy training and incentives on students’ performance, confidence, and calibration. The Journal of Experimental Education: Learning, Instruction, and Cognition. Advance online publication. doi: 10.1080/00220973.2014.907230

Hacker, D. J., Bol, L., & Bahbahani, K. (2008). Explaining calibration accuracy in classroom contexts: The effects of incentives, reflection, and explanatory style. Metacognition Learning, 3, 101-121.

Jacob, R., & Parkinson, J. (in press). The potential for school-based interventions that target executive function to improve academic achievement: A review. Review of Educational Research. Advance online publication. doi: 10.3102/0034654314561338


The Metacognitive Syllabus!

By Aaron S. Richmond, Ph.D.
Metropolitan State University of Denver

This blog may be like no other in Improve with Metacognition (IwM). I am asking you, the readers to actively participate. Yes, I mean YOU, YOU, and YOU☺. But let me clarify—I do not ask rhetorical questions. As such, please respond using the comment function in IwM or Tweet your answer to the three questions in this blog.

Question #1: How can we use the syllabus as a metacognitive tool?
As delineated by scores of researchers and teachers, the syllabus can be many things. The syllabus can be a contract (Slattery & Carlson, 2005). Contract elements of the syllabus typically include policies on attendance, late work, ethics, grading, etc. The syllabus can also be a permanent record (Parkes & Harris, 2002). Permanent-record elements of a syllabus include course objectives, assessment procedures, course description, and course content. The syllabus is also a communication device that can set the tone for your class and is an opportunity to gain your students’ trust and respect by modeling your pedagogical beliefs (Bain, 2004).

Because the syllabus can be many things, it is very possible that it can serve as a metacognitive tool. Several researchers suggest that the syllabus is also a cognitive map (Parkes & Harris, 2002) and a learning tool (Matejka & Kurke, 1994). These elements typically include a description of how to succeed in the course, common pitfalls and misconceptions that occur in the course, campus resources that can assist the students in learning (e.g., writing center), a teaching philosophy, and embedded explanations of class assignments, structure, and student learning. If we consider the syllabus in this context, I believe that we can easily incorporate metacognitive elements. For instance, in my personal teaching philosophy, I specifically mention my focus on improving metacognition. Another example is that I have at least one student learning objective that is metacognitively based, with assignments designed to assess this objective. For example, “Students will understand what metacognition is and how it improves their own learning” (assessed by experiential learning assignment 1 and comprehensive exam), or “Students will understand what it means to develop a culture of metacognition in the classroom” (assessed by classroom observation and mid-term exam). Finally, I actively incorporate course content on learning strategies and the metacognitive explanations for those strategies, which sets the tone for the importance of metacognition in the class.

Question #2: How are you using the syllabus as a metacognitive tool?
I really want to hear from you on how you may be using the syllabus as a metacognitive tool. For example, what specific statements do you include related to metacognition goals? What assignments do you mention that link to metacognitive development?

Question #3: If the syllabus can be used as a metacognitive tool, how do we know it is effective?
What is your answer to this question? My answer centers on the Scholarship of Teaching and Learning. That is, we don’t yet have empirical evidence to say that the syllabus is a metacognitive tool. That doesn’t mean that it can’t be, or isn’t already, one in practice. But I think you (we) should take up this challenge and investigate this issue. The syllabus can have a profound impact on student learning, instruction, and student ratings of instruction (Richmond, Becknell, Slattery, Morgan, & Mitchell, 2015; Saville, Zinn, Brown, & Marchuk, 2010), so let’s investigate how to improve the syllabus through metacognition.



References
Bain, K. (2004). What the best college teachers do. Cambridge, MA: Harvard University Press.
Matejka, K., & Kurke, L. B. (1994). Designing a great syllabus. College Teaching, 42(3), 115-117. doi:10.1080/87567555.1994.9926838
Parkes, J., & Harris, M. B. (2002). The purposes of a syllabus. College Teaching, 50(2), 55-61. doi:10.1080/87567550209595875
Richmond, A. S., Becknell, J., Slattery, J., Morgan, R., & Mitchell, N. (2015, August). Students’ perceptions of a student-centered syllabus: An experimental analysis. Poster presented the annual meeting of the American Psychological Association, Toronto, Canada.
Saville, B. K., Zinn, T. E., Brown, A. R., & Marchuk, K. A. (2010). Syllabus detail and students’ perceptions of teacher effectiveness. Teaching of Psychology, 37, 186-189. doi:10.1080/00986283.2010.488523
Slattery, J. M., & Carlson, J. F. (2005). Preparing an effective syllabus: Current best practices. College Teaching, 53, 159-164. doi:10.3200/CTCH.53.4.159-164


Exploring the Developmental Progression of Metacognition

by Sarah L. Bunnell at Ohio Wesleyan University (slbunnel@owu.edu)

As a developmental psychologist, it is difficult to consider student learning (and my own learning as well) without a strong nod to developmental process. Metacognition, as has been described by many others on this blog and in other venues (e.g., Baxter-Magolda, 1992; Flavell, 1979; Kuhn, 1999; Perry, 1970), requires the cognitive skills of reflection, connection, evaluation, and revision. Metacognitive acts are initially quite cognitively demanding, and like most conscious cognitive processes, require practice to become more automatic or at least less consuming of cognitive resources. In addition to examining how students acquire the tools required for the hard work of metacognition, I have also been interested in whether there are developmental differences in students’ ability to make connections and reflections across the college years.

I recently conducted two examinations of metacognitive development; the first project involved my Introductory Psychology course, which enrolls primarily first year students, and the second project involved my Adolescent Psychology course, which enrolls primarily sophomore-level students. Below, I will provide a brief summary of each study and then discuss what I see as some take-home points and next-steps for inquiry.

In the Introductory Psychology course (n = 45), each student completed a metacognitive portfolio (hosted through the MERLOT website; http://eportfolio.merlot.org/) throughout the semester. In this portfolio, students responded to a series of prompts to reflect on their current thinking about course concepts and the ways in which these concepts play out in their own lives. At the end of the semester, students were asked to review their responses, identify any responses that they would now change, and explain why they would now alter their responses. They were also asked to describe how they thought their thinking had changed over the course of the semester.

Given the large body of work on the learning benefits associated with metacognition, I was not surprised that students who wanted to change a greater number of their responses performed significantly better on the final exam than did students who identified fewer points of change. More surprising, however, was the finding that students who did well on the final exam were significantly more likely to have endorsed changes in their thinking about themselves as opposed to changes in their thinking about others. A year after this class ended, I contacted these same students again, and I asked them to reflect on their thinking at the end of the course relative to their thinking about Psychology and themselves now. Of note, an analysis of these responses indicated that the students who were high performers on the final exam and in the course overall were no longer reporting many self-related metacognitive links. Instead, these students were significantly more likely to say that they now had a greater understanding of others than they did before. Thus, there was a powerful shift over time in the focus of metacognition from self to other.

In my Adolescent Psychology course (n = 35), students conduct a semi-structured interview of an adolescent, transcribe the interview, and then analyze the interview according to developmental theory. This assignment is designed to foster connection and application, and I have compelling evidence indicating that this experience enhances learning. What was less clear to me, however, was whether participating in this course and in the interview paper activity contributes to students’ metacognitive awareness of self. To address this question, I implemented a pre-post test design. On the first day of class, students were asked, “Are you currently an adolescent? Please explain your answer.” To answer this question, one must consider multiple ways in which we may conceptualize adolescence (i.e., age, legal responsibility, physical maturity, financial responsibility); as you can clearly see, the lens we apply to ourselves and others leads to quite varied views of when adolescence ends and adulthood begins! At the end of the term, students were again asked the same question, plus an additional prompt that asked them to reflect on how their thinking about themselves had changed across the semester.

On Day 1, 17 students endorsed currently being an adolescent, 16 students reported no longer being an adolescent, and 2 students said they did not feel that they had enough information to respond. It is important to note that all students in the course were between the ages of 18 and 21 years and as such, all were technically late adolescents. On the last day of class, 21 class members labeled themselves as adolescents, 4 students said that they did not consider themselves to be adolescents, and 5 said that they were an adolescent in some contexts of their life and not others. As an example of a contextual way of thinking, one student said: “I believe that neurologically I am still an adolescent because I am below the age of 25 and therefore do not have a fully developed frontal lobe, which can alter decision making, and from a Piagetian standpoint I believe I am out of adolescence because I have reached the formal operational stage of development and possibly even beyond that. Overall though, I believe that I can’t fully define myself as an adolescent or not because there are so many factors in play.”

I examined these group-level differences in terms of course performance from a number of angles, and two interesting patterns emerged. First, students who adopted a more context-dependent view of self did significantly better on the application-based, cumulative final exam than did students who held an absolute view of self. This first finding is consistent with the work of Marcia Baxter-Magolda (1992), William Perry (1970), and others, which views contextual knowing as a complicated and mature form of meta-knowing. Second, students who changed their view of themselves across the semester conducted significantly more advanced analyses of the interview content relative to those whose view of self did not change. Thus, the students who displayed greater advances in metacognition were better able to apply these reflections and connections to themselves and, in turn, to the lives of others.

Taken together, this work suggests to me that the ability to engage in metacognitive reflection and connection may initially develop in a self-focused way and then, following additional experience and metacognitive skill attainment, extend beyond the self. Please note that I am careful to suggest that the ability to make other-related connections emerges following experience and the acquisition of lower-level preparatory skills, rather than merely the passage of time, even though there is clearly a temporal dimension at play. As Donald Baer warned us, age is at best a proxy for development; at the most extreme, development is “age-irrelevant” (Baer, 1970). Why do students demonstrate improved metacognition across the college years? It is certainly not merely because the days have ticked by. Instead, these advances in thinking, as well as students’ willingness to refine their thinking about the self, are supported and constructed by a range of experiences and challenges that their college experience affords. To understand age- or college-level changes in thinking, therefore, we should focus on the developmental tasks and experiences that support this development. I hope that my lines of inquiry contribute in small part to this process.

References

Baer, D. M. (1970). An age-irrelevant concept of development. Merrill-Palmer Quarterly, 16, 238-245.

Baxter Magolda, M. B. (1992). Students’ epistemologies and academic experiences: Implications for pedagogy. Review of Higher Education, 15, 265-287.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906 – 911.

Kuhn, D. (1999). A developmental model of critical thinking. Educational Researcher, 28, 16-25.

Perry, William G., Jr. (1970). Forms of intellectual and ethical development in the college years: A scheme. New York: Holt, Rinehart, & Winston.


Who says Metacognition isn’t Sexy?

By Michael J. Serra at Texas Tech University

This past Sunday, you might have watched “The 87th Academy Awards” (i.e., “The Oscars”) on television. Amongst the nominees for the major awards were several films based on true events and real-life people, including two films depicting key events in the lives of scientists Stephen Hawking (The Theory of Everything) and Alan Turing (The Imitation Game).

There are few things in life that I am sure of, but one thing I can personally guarantee is this: No film studio will ever make a motion picture about the life of your favorite metacognition researcher. Believe it or not, the newest issue of Entertainment Weekly does not feature leaked script details about an upcoming film chronicling how J. T. Hart came up with the idea to study people’s feelings of knowing (Hart, 1967), and British actors are not lining up to depict John Flavell laying down the foundational components for future theory and research on metacognition (Flavell, 1979). Much to my personal dismay, David Fincher hasn’t returned my calls regarding the screenplay I wrote about that time Thomas Nelson examined people’s judgments of learning at extreme altitudes on Mt. Everest (Nelson et al., 1990).

Just as film studios seem to lack interest in portraying metacognition research on the big screen, our own students sometimes seem uninterested in anything we might tell them about metacognition. Even the promise of improving their grades sometimes doesn’t seem to interest them! Why not?

One possibility, as I found out from a recent blog post by organic-chemistry professor and tutor “O-Chem Prof,” is that the term “metacognition” might simply not be sexy to our students (O-Chem Prof, 2015). He suggests that we instead refer to the concept as “sexing up your noodle.”

Although the idea of changing the name of my graduate course on the topic to “PSY 6969: Graduate Seminar in Sexing-up your Noodle” is highly tempting, I do not think that the problem is completely one of branding or advertising. Rather, regardless of what we call metacognition (or whether or not we even put a specific label on it for our students), there are other factors that we know play a crucial role in whether or not students will actually engage in self-regulated learning behaviors such as the metacognitive monitoring and control of their learning. Specifically, Pintrich and De Groot (1990; see Miltiadou & Savenye, 2003 for a review) identified three major factors that determine students’ motivation to learn that I suggest will also predict their willingness to engage in metacognition: value, expectancy, and affect.

The value component predicts that students will be more interested and motivated to learn about topics that they see value in learning. If they are struggling to learn a valued topic, they should be motivated to engage in metacognition to help improve their learning about it. A wealth of research demonstrates that students’ values and interest predict their motivation, learning, and self-regulation behaviors (e.g., Pintrich & De Groot, 1990; Pintrich et al., 1994; Wolters & Pintrich, 1998; for a review, see Schiefele, 1991). Therefore, when students do not seem to care about engaging in metacognition to improve their learning, it might not be that metacognition is not “sexy” to them; it might be that the topic itself (e.g., organic chemistry) is not sexy to them (sorry, O-Chem Prof!).

The expectancy component predicts that students will be more motivated to engage in self-regulated learning behaviors (e.g., metacognitive control) if they believe that their efforts will have positive outcomes (and won’t be motivated to do so if they believe their efforts will not have an effect). Some students (entity theorists) believe that they cannot change their intelligence through studying or practice, whereas other students (incremental theorists) believe that they can improve their intelligence (Dweck et al., 1995; see also Wolters & Pintrich, 1998). Further, entity theorists tend to rely on extrinsic motivation and to set performance-based goals, whereas incremental theorists tend to rely on intrinsic motivation and to set mastery-based goals. Compared to entity theorists, students who are incremental theorists earn higher grades and are more likely to persevere in the face of failure or underperformance (Duckworth & Eskreis-Winkler, 2013; Dweck & Leggett, 1988; Romero et al., 2014; see also Pintrich, 1999; Sungur, 2007). Fortunately, interventions have been successful at changing students to an incremental mindset, which in turn improves their learning outcomes (Aronson et al., 2002; Blackwell et al., 2007; Good et al., 2003; Hong et al., 1999).

The affective component predicts that students will be hampered by negative thoughts about learning or anxiety about exams (e.g., stereotype threat; test anxiety). Unfortunately, past research indicates that students who experience test anxiety will struggle to regulate their learning and ultimately end up performing poorly despite their efforts to study or to improve their learning (e.g., Bandura, 1986; Pintrich & De Groot, 1990; Pintrich & Schunk, 1996; Wolters & Pintrich, 1998). These students in particular might benefit from instruction on self-regulation or metacognition, as they seem to be motivated and interested in learning the topic at hand, but are too focused on their eventual test performance to study efficiently. At least some of this issue might be alleviated if students adopt a mastery mindset over a performance mindset, as increased learning (rather than high grades) becomes the ultimate goal. Further, adopting an incremental mindset over an entity mindset should reduce the influence of beliefs about a lack of raw ability to learn a given topic.

In summary, although I acknowledge that metacognition might not be particularly “sexy” to our students, I do not think that is the reason our students often seem uninterested in engaging in metacognition to help them understand the topics in our courses or to perform better on our exams. If we want our students to care about their learning in our courses, we need to make sure that they feel the topic is important (i.e., that the topic itself is sexy); we need to provide them with effective self-regulation strategies or opportunities (e.g., elaborative interrogation, self-explanation, or interleaved practice questions; see Dunlosky et al., 2013) and help them feel confident enough to employ them; we need to work to reduce test anxiety at the individual and group/situation levels; and we need to convince our students to adopt a mastery (incremental) mindset about learning. Then, perhaps, our students will find metacognition to be just as sexy as we think it is.


References

Aronson, J., Fried, C. B., & Good, C. (2002). Reducing the effects of stereotype threat on African American college students by shaping theories of intelligence. Journal of Experimental Social Psychology, 38, 113-125. doi:10.1006/jesp.2001.1491

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Blackwell, L. S., Trzesniewski, K. H., & Dweck, C. S. (2007). Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention. Child Development, 78, 246-263. doi: 10.1111/j.1467-8624.2007.00995.x

Duckworth, A., & Eskreis-Winkler, L. (2013). True Grit. Observer, 26. http://www.psychologicalscience.org/index.php/publications/observer/2013/april-13/true-grit.html

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14, 4-58. doi: 10.1177/1529100612453266

Dweck, C. S., Chiu, C. Y., & Hong, Y. Y. (1995). Implicit theories and their role in judgments and reactions: A world from two perspectives. Psychological Inquiry, 6, 267-285. doi: 10.1207/s15327965pli0604_1

Dweck, C. S., & Leggett, E. L. (1988). A social-cognitive approach to motivation and personality. Psychological Review, 95, 256-273. doi: 10.1037/0033-295X.95.2.256

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911. doi: 10.1037/0003-066X.34.10.906

Good, C., Aronson, J., & Inzlicht, M. (2003). Improving adolescents’ standardized test performance: An intervention to reduce the effect of stereotype threat. Applied Developmental Psychology, 24, 645-662. doi: 10.1016/j.appdev.2003.09.002

Hart, J. T. (1967). Memory and the memory-monitoring process. Journal of Verbal Learning and Verbal Behavior, 6, 685-691. doi: 10.1016/S0022-5371(67)80072-0

Hong, Y., Chiu, C., Dweck, C. S., Lin, D., & Wan, W. (1999). Implicit theories, attributions, and coping: A meaning system approach. Journal of Personality and Social Psychology, 77, 588-599. doi: 10.1037/0022-3514.77.3.588

Miltiadou, M., & Savenye, W. C. (2003). Applying social cognitive constructs of motivation to enhance student success in online distance education. AACE Journal, 11, 78-95. http://www.editlib.org/p/17795/

Nelson, T. O., Dunlosky, J., White, D. M., Steinberg, J., Townes, B. D., & Anderson, D. (1990). Cognition and metacognition at extreme altitudes on Mount Everest. Journal of Experimental Psychology: General, 119, 367-374.

O-Chem Prof. (2015, Jan 7). Our Problem with Metacognition is Not Enough Sex. [Web log]. Retrieved from http://phd-organic-chemistry-tutor.com/our-problem-with-metacognition-not-enough-sex/

Pintrich, P. R. (1999). The role of motivation in promoting and sustaining self-regulated learning. International Journal of Educational Research, 31, 459-470. doi: 10.1016/S0883-0355(99)00015-4

Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33-40. doi: 10.1037/0022-0663.82.1.33

Pintrich, P. R., Roeser, R., & De Groot, E. V. (1994). Classroom and individual differences in early adolescents’ motivation and self-regulated learning. Journal of Early Adolescence, 14, 139-161. doi: 10.1177/027243169401400204

Pintrich, P. R., & Schunk D. H. (1996). Motivation in education: Theory, research, and applications. Englewood Cliffs, NJ: Merrill/Prentice Hall.

Romero, C., Master, A., Paunesku, D., Dweck, C. S., & Gross, J. J. (2014). Academic and emotional functioning in middle school: The role of implicit theories. Emotion, 14, 227-234. doi: 10.1037/a0035490

Schiefele, U. (1991). Interest, learning, and motivation. Educational Psychologist, 26, 299-323. doi: 10.1080/00461520.1991.9653136

Sungur, S. (2007). Modeling the relationships among students’ motivational beliefs, metacognitive strategy use, and effort regulation. Scandinavian Journal of Educational Research, 51, 315-326. doi: 10.1080/00313830701356166

Wolters, C. A., & Pintrich, P. R. (1998). Contextual differences in student motivation and self-regulated learning in mathematics, English, and social studies classrooms. Instructional Science, 26, 27-47. doi: 10.1023/A:1003035929216


Linking Mindset to Metacognition

By Charity Peak, Ph.D. (U. S. Air Force Academy)

As part of our institution’s faculty development program, we are currently reading Carol Dweck’s Mindset: The New Psychology of Success. Even though the title and cover allude to a pop-psychology book, Dweck’s done a fabulous job of pulling together decades of her scholarly research on mindsets into a layperson’s text.

After announcing the book as our faculty read for the semester, one instructor lamented that she wished we had selected a book on the topic of metacognition. We have been exploring metacognition as a theme this year through our SoTL Circles and our participation in the multi-institutional Metacognitive Instruction Project. My gut reaction was, “But Mindset is about metacognition!” Knowing your own mindset requires significant metacognition about your own thinking and attitudes about learning. And better yet, understanding and recognizing mindsets in your students helps you to identify and support their development of mindsets that will help them to be successful in school and life.

If you haven’t read the book, below are some very basic distinctions between the fixed and growth mindsets that Dweck (2006) discovered in her research and outlines eloquently in her book:

Fixed Mindset: Intelligence is static. This leads to a desire to look smart and therefore a tendency to:

  • avoid challenges
  • give up easily due to obstacles
  • see effort as fruitless
  • ignore useful feedback
  • be threatened by others’ success

Growth Mindset: Intelligence can be developed. This leads to a desire to learn and therefore a tendency to:

  • embrace challenges
  • persist despite obstacles
  • see effort as a path to mastery
  • learn from criticism
  • be inspired by others’ success

What does this mean for metacognition? Dweck points out that people go through life with fixed mindsets without even realizing they are limiting their own potential. For example, students will claim they are “not good at art,” “can’t do math,” “don’t have a science brain.” These mindsets restrict their ability to see themselves as successful in these areas. In fact, even when instructors attempt to refute these statements, the mindsets are so ingrained that they are extremely difficult to overcome.

What’s an instructor to do? Help students become metacognitive about their self-limiting beliefs! Dweck offers a very simple online assessment on her website that takes about 5 minutes to complete. Instructors can easily suggest that students take the assessment as a pre-emptive way to begin a course, particularly in subjects where these fallacious self-limiting attitudes abound. The results would help instructors identify who might need the most assistance in overcoming mental barriers throughout the course. Instructors can also make a strong statement to the class early in the semester that students should fight the urge to succumb to limiting beliefs about a particular subject area (such as art or math). As Dweck has shown through her research, people can actually become artistic if taught the skills through learnable components (pp. 68-69). Preconceived notions of talent in a wide variety of areas have been refuted time and again through research. Instead, talent is likely a cover for hard work, perseverance, and overcoming obstacles. But if we don’t share those insights with students, they will never become metacognitively aware of their own self-limiting – and frankly mythical – belief systems.

Inspired but wish you knew how to apply it to your own classes? A mere Google search on metacognition and mindset will yield a wealth of resources, but I particularly appreciate Frank Noschese’s blog on creating a metacognition curriculum. He started his physics course by having students take a very simple survey regarding their attitudes toward science. He then shared a short video segment called “Grow Your Brain” from the episode Changing Your Mind (jump to 13:20) in the Scientific American Frontiers series from PBS. Together, he and his students began a journey of moving toward a growth mindset in science. Through an intentional metacognition lesson, he sent a very clear message to his students that “I can’t” would not be tolerated in his course. He set them up for success by demonstrating clearly that everyone can learn physics if they put their minds (or mindsets) to it.

Metacognition about mindsets offers instructors an opportunity to give students the gift of a lifetime – the belief that they can overcome any learning obstacles if they just persevere, that their intelligence is not fixed but actually malleable, that learning is sometimes hard but not impossible! When I reflect on why I am so deeply dedicated to education as a profession, it is my commitment to helping students see themselves using a growth mindset. Helping them to change their mindsets can change their future, and metacognition is the first step on that journey!

 

References:

“Changing the Mind.” (11/21/00). Scientific American Frontiers. Boston: Ched-Angier Production Co. Retrieved from http://chedd-angier.com/frontiers/season11.html

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Ballantine Books.

Noschese, F. (September 10, 2012). Metacognition curriculum (Lesson 1 of ?). Retrieved from https://fnoschese.wordpress.com/2012/09/10/metacognition-curriculum-lesson-1-of/

 


Fostering Metacognition: Right-Answer Focused versus Epistemologically Transgressive

by Craig E. Nelson at Indiana University (Contact: nelson1@indiana.edu)

I want to enrich some of the ideas posted here by Ed Nuhfer (2014 a, b, c and d) and Lauren Scharff (2014). I will start by emphasizing some key points made by Nuhfer (2014 a):

  • Instead of focusing on more powerful ways of thinking, most college instruction has thus far focused on information, concepts and discipline-specific skills. I will add that even when concepts and skills are addressed, they, too, are often treated as memorizable information by both students and faculty. Often little emphasis is placed on demonstrating real understanding, let alone on application and other higher-level skills.
  • “Adding metacognitive components to our assignments and lessons can provide the explicit guidance that students need. However, authoring these components will take many of us into new territory…” This is tough because such assignments require much more support for students, and many faculty members have had little or no practice in designing such support.
  • The basic framework for understanding higher-level metacognition was developed by Perry in the late 1960s, and his core ideas have since been deeply validated, as well as expanded and enriched, by many other workers (e.g., Journal of Adult Development, 2004; Hoare, 2011).
  • “Enhanced capacity to think develops over spans of several years. Small but important changes produced at the scale of single quarter or semester-long courses are normally imperceptible to students and instructors alike.”

It is helpful (e.g. Nelson, 2012, among many) to see most of college-level thinking as spanning four major levels, a truncated condensation of Perry’s 9 stages as summarized in Table 1 of Nuhfer (2014 a). Each level encompasses a different set of metacognitive skills and challenges. Individual students’ thinking is often a mix or mosaic where they approach some topics on one level and others at the next.

In this post I am going to treat only the first major level, Just tell me what I need to know (Stages 1 & 2 of Table 1 in Nuhfer, 2014 a). In this first level, students view knowledge fundamentally as Truth. Such knowledge is eternal (not just some current best model), discovered (not constructed) and objective (not temporally or socially situated). In contrast, some (but certainly not all) faculty members view what they are teaching as a constructed best current model or models, temporally and socially situated with the subjectivity that implies.

The major cognitive challenges within this first level are usefully seen as moving toward a more complete mastery of right-answer reasoning processes (Nelson, 2012), sometimes referred to as a move from concrete to formal reasoning (although the extent to which Piaget’s stages actually apply is debated). A substantial majority of entering students at most post-secondary institutions have not yet mastered formal reasoning. However, many (probably most) faculty members tacitly assume that all reasonable students will quickly understand anything that is asked in terms of right-answer reasoning. As a consequence, student achievement is often seriously compromised.

Lawson et al. (2007) showed that a simple test of formal reasoning explained about 32% of the variance in final grades in an introductory biology course and was the strongest such predictor among several options. This is quite remarkable considering that the reasoning test had no biological content and provided no measure of student effort. Although some reasoning tasks could be done by most students, an understanding of experimental designs was demonstrated largely by students who scored as having mastered formal reasoning. Similar differences in achievement have been documented for some other courses (Nelson, 2012).

Nuhfer (2014 b) and Scharff (2014) discuss studies of the associations among various measures of student thinking. From my viewpoint, their lists start too high up the thinking scale. I think that we need to start with the transitions between concrete and formal reasoning. I have provided a partial review of key aspects of this progression and of the teaching moves that have been shown to help students master more formal reasoning, as well as sources for key instruments (Nelson, 2012). I think that such mastery will turn out to be especially helpful, and perhaps essential, to more rapid development of higher-level reasoning skills.

This insight may also help to resolve a contrast between the experience of Scharff and her colleagues (Scharff, 2014) and Nuhfer’s perspective (2014 b). Scharff reports: “At my institution we have some evidence that such an approach does make a very measurable difference in aspects of critical thinking as measured by the CAT (Critical Thinking Assessment, a nationally normed, standardized test …).” In his responses, Nuhfer (2014 b) emphasizes that, given how we teach, there is, not surprisingly, very little change in higher-order thinking over the course of an undergraduate degree. (“… the typical high school graduate is at about [Perry] level 3 2/3 and the typical college graduate is a level 4. That is only one-third of a Perry stage gain made across 4-5 years of college.”)

It is my impression that the “Critical Thinking Assessment” discussed by Scharff deals primarily with right-answer reasoning. The mastery of the skills underlying right-answer reasoning questions is largely a matter of mastering formal reasoning processes. Indeed, tests of concrete versus formal reasoning usually consist exclusively of questions that have very clear right answers. I think that several of the other thinking assessments that Nuhfer and Scharff discuss also have exclusively or primarily clear right answers. This approach contrasts markedly with the various instruments for assessing intellectual development in the sense of Perry and related authors, none of which focuses on right-answer questions. An easily accessible instrument is given in the appendices of King and Kitchener (1994).

This leads to three potentially helpful suggestions for fostering metacognition.

  • Use one of the instruments for assessing concrete versus formal reasoning as a background test for all of your metacognitive interventions. This will allow you to ask whether students who perform differently on such an assessment also perform differently on your pre- or post-assessment, or even in the course as a whole (as in Lawson et al. 2007).
  • Include interventions in your courses that are designed to help students succeed with formal, right-answer reasoning tasks. In STEM courses, teaching with a “learning cycle” approach that starts with the examination or even the generation of data is one important, generally applicable such approach.
  • Carefully distinguish between the ways that you are helping students master right-answer reasoning and the ways you are trying to foster more complex forms of reasoning. Fostering right-answer reasoning will include problem-focused reasoning, self-monitoring and generalizing right-answer reasoning processes (e.g. “Would using a matrix help me solve this problem?”).

Helping students move to deeper sophistication requires epistemologically transgressive challenges. Those who wish to pursue such approaches seriously should examine first, perhaps, Nuhfer’s (2014d) “Module 12 – Events a Learner Can Expect to Experience” and ask how one could foster each successive step.

Unfortunately, the first key step to helping students move beyond right-answer thinking requires helping them understand the ways in which black-and-white reasoning fails in one’s discipline. For this first epistemologically transgressive challenge, understanding that knowledge is irredeemably uncertain, one might want to provide enough scaffolding to allow students to make sense of readings such as: Mathematics: The Loss of Certainty (Kline, 1980); Be Forewarned: Your Knowledge is Decaying (Arbesman, 2012); Why Most Published Research Findings Are False (Ioannidis, 2005); and Lies, Damned Lies, and Medical Science (Freedman, 2010).

As an overview for students of the journey in which everything becomes a matter of better and worse ideas and divergent standards for judging better, I have had some success using a heavily scaffolded approach (detailed study guides, including exam-ready essay questions, and much group work) to helping students understand Reality Isn’t What It Used to Be: Theatrical Politics, Ready-to-Wear Religion, Global Myths, Primitive Chic, and Other Wonders of the Postmodern World (Anderson, 1990).

We have used various heavily scaffolded, epistemologically transgressive challenges to produce an average gain of one-third Perry stage over the course of a single semester (Ingram and Nelson, 2009). As Nuhfer (2014b) noted, this is about the gain usually produced by an entire undergraduate degree of normal instruction.

And for the bravest, most heavily motivated faculty, I would suggest In Over Our Heads: The Mental Demands of Modern Life (Kegan, 1994). Kegan attempts to make clear that each of us has our ability to think in more complex ways limited by epistemological assumptions of which we are unaware. This is definitely not a book for undergraduates, nor is it one that is easily embraced by most faculty members.

REFERENCES CITED

  • Hoare, Carol. Editor (2011). The Oxford Handbook of Reciprocal Adult Development and Learning. 2nd Edition. Oxford University Press.
  • Ingram, Ella L. and Craig E. Nelson (2009). Applications of Intellectual Development Theory to Science and Engineering Education. P 1-30 in Gerald F. Ollington (Ed.), Teachers and Teaching: Strategies, Innovations and Problem Solving. Nova Science Publishers.
  • Ioannidis, John (2005). “Why Most Published Research Findings Are False.” PLoS Medicine August; 2(8): e124. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/ [The most downloaded article in the history of PLoS Medicine. Too technical for many first-year students even with heavy scaffolding?]
  • Journal of Adult Development (2004). [Special volume of nine papers on the Perry legacy of cognitive development.] Journal of Adult Development 11(2):59-161.
  • King, Patricia M. and Karen Strohm Kitchener (1994). Developing Reflective Judgment: Understanding and Promoting Intellectual Growth and Critical Thinking in Adolescents and Adults. Jossey-Bass.
  • Kline, Morris (1980). Mathematics: The Loss of Certainty. Oxford University Press. [I used the summary (the Preface) in a variety of courses.]
  • Nelson, Craig E. (2012). “Why Don’t Undergraduates Really ‘Get’ Evolution? What Can Faculty Do?” Chapter 14 (p 311-347) in Karl S. Rosengren, E. Margaret Evans, Sarah K. Brem, and Gale M. Sinatra (Editors.) Evolution Challenges: Integrating Research and Practice in Teaching and Learning about Evolution. Oxford University Press. [Literature review applies broadly, not just to evolution]

Goal Monitoring in the Classroom

by Tara Beziat at Auburn University at Montgomery 

What are your goals for this semester? Have you written down your goals? Do you think your students have thought about their goals and written them down? Though these seem like simple tasks, we often do not ask our students to think about their goals for our class or for the semester. Yet, we know that a key to learning is planning, monitoring and evaluating one’s learning (Efklides, 2011; Nelson, 1996; Schraw & Dennison, 1994; Nelson & Narens, 1994). By helping our students engage in these metacognitive tasks, we are teaching them how to learn.

Over the past couple of semesters, I have asked my undergraduate educational psychology students to complete a goal-monitoring sheet so they can practice planning, monitoring and evaluating their learning. Before we go over the goal-monitoring sheet, I explain the learning process and how a goal-monitoring sheet helps facilitate learning. We discuss how successful students set goals for their learning, monitor these goals and make necessary adjustments through the course of the semester (Schunk, 1990). Many first-generation students and first-time freshmen come to college lacking self-efficacy in academics, and one setback can make them feel like college is not for them (Hellman, 1996). As educators we need to help them understand that we all make mistakes and sometimes fail, but we need to make adjustments based on those failures rather than quit.

Next, I talk with my class about working memory, long-term memory, and how people access information in one of two ways: verbally or visually (Baddeley, 2000, 2007). Seeing and/or hearing the information does not make learning happen. As students, they must take an active role and practice retrieving the information (Karpicke & Roediger, 2008; Roediger & Butler, 2011). Learning takes work. It is not a passive process. Finally, we discuss the need to gauge their progress and reflect on what is working and what is not working. On the sheet I reiterate what we have discussed with the following graphic:

[Figure: learning goals cycle]

After this brief introduction about learning, we talk about the goal-monitoring sheet, which is divided into four sections: Planning for Success, Monitoring your Progress, Continued Monitoring and Early Evaluation, and Evaluating your Learning. Two resources that I used to make adjustments to the initial sheet were the questions in Tanner’s (2012) article on metacognition in the classroom and the work of Gabriele Oettingen (2014). Oettingen points out that students need to consider possible obstacles to their learning and evaluate how they would handle them. Students can use the free WOOP (Wish, Outcome, Obstacle, Plan) app to “get through college.”

Using these resources and the feedback from previous students, I created a new goal-monitoring sheet. Below are the initial questions I ask students (for the full Goal Monitoring Sheet see the link at the bottom):

  • What are your goals for this class?
  • How will you monitor your progress?
  • What strategies will you use to study and prepare for this class?
  • When can you study/prepare for this class?
  • Possible obstacles or areas of concern are:
  • What resources can you use to achieve your goals?
  • What do you want to be able to do by the end of this course?

Interestingly, many students do not list me, the professor, as a resource. I make sure to let the students know that I am available and should be considered a resource for the course. As students move through the semester, they submit their goal-monitoring sheets. This continuing process helps me provide extra help but also guide them toward necessary resources. It is impressive to see the students’ growth as they reflect on their goals. Below are some examples of student responses.

  • “I could use the book’s website more.”
  • “One obstacle for me is always time management. I am constantly trying to improve it.”
  • “I will monitor my progress by seeing if I do better on the post test on blackboard than the pre test. This will mean that I have learned the material that I need to know.”
  • “Well, I have created a calendar since the beginning of class and it has really helped me with keeping up with my assignments.”
  • “I feel that I am accomplishing my goals because I am understanding the materials and I feel that I could successfully apply the material in my own classroom.”
  • “I know these [Types of assessment, motivation, and the differences between valid and reliable, and behaviorism] because I recalled them multiple times from my memory.”

Pressley and his colleagues (Pressley, 1983; Pressley & Harris, 2006; Pressley & Hilden, 2006) emphasize the need for instructors, at all levels, to help students build their repertoire of strategies for learning. By the end of the course, many students feel they now have strategies for learning in any setting. Below are a few excerpts from students’ final submission on their goal-monitoring sheets:

  • “The most unusual thing about this class has been learning about learning. I am constantly thinking of how I am in these situations that we are studying.”
  • “…we were taught new ways to take in work, and new strategies for studying and learning. I feel like these new tips were very useful as I achieved new things this semester.”

References

Efklides, A. (2011). Interactions of metacognition with motivation and affect in self-regulated learning: The MASRL model. Educational Psychologist, 46(1), 6-25.

Hellman, C. (1996). Academic self-efficacy: Highlighting the first generation student. Journal of Applied Research in the Community College, 3, 69–75.

Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319(5865), 966-968.

Nelson, T. O. (1996). Consciousness and metacognition. American Psychologist, 51(2), 102-116. doi:10.1037/0003-066X.51.2.102

Nelson, T. O., & Narens, L. (1994). Why investigate metacognition? In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 1-25). Cambridge, MA: Bradford Books.

Oettingen, G. (2014). Rethinking Positive Thinking: Inside the New Science of Motivation. New York, NY: Penguin Group.

Pressley, M. (1983). Making meaningful materials easier to learn. In M. Pressley & J. R. Levin (Eds.), Cognitive strategy research: Educational applications. New York: Springer-Verlag.

Pressley, M., & Harris, K. R. (2006). Cognitive strategies instruction: From basic research to classroom instruction. In P. A. Alexander & P. H. Winne (Eds.), Handbook of educational psychology (2nd ed.). Mahwah, NJ: Erlbaum.

Pressley, M., & Hilden, K. (2006). Cognitive strategies. In W. Damon & R. Lerner (Eds.), Handbook of child psychology (6th ed.). New York: Wiley.

Roediger III, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), 20-27.

Schunk, D. H. (1990). Goal setting and self-efficacy during self-regulated learning. Educational Psychologist, 25(1), 71-86.

Tanner, K.D. (2012). Promoting Student Metacognition. CBE-Life Sciences Education, 11(2), 113-120. doi:10.1187/cbe.12


Metacognitive Skills and the Faculty

by Dave Westmoreland, U. S. Air Force Academy

While I applaud the strong focus on student development of metacognitive practices in this forum, I suspect that we might be overlooking an important obstacle to implementing the metacognitive development of our students – the faculty. Most faculty members are not trained in how to facilitate the metacognitive development of students. In fact, many are not even aware of the need to help students develop metacognitive skills, because their own metacognitive skills were never explicitly developed when they were students.

I teach at a military institution in which the faculty is composed of about 60% military officers and 40% civilians. Since military faculty stay for three-year terms, there is an annual rotation in which about 20% of the entire faculty body is new to teaching. This large turnover poses an ongoing challenge for faculty development. Each year we have a week-long orientation for the new faculty, followed by a semester-long informal mentorship. Despite these substantial efforts, I believe that we need to do more when it comes to metacognition.

For example, in my department there is a strong emphasis on engaging students in the conceptual structure of science. As part of this training, we employ the exercise that I described in a previous blog (“Science and Social Controversy – a Classroom Exercise in Metacognition”, 24 April 2014). With few exceptions, our new faculty, all of whom possess advanced degrees in science, struggle with the concepts as much as our undergraduates. It seems that the cognitive structure of science (the interrelation of facts, laws, and theories) is not a standard part of graduate education. And without a faculty proficient in this concept, our goal of having students comprehend science as a way of knowing about the natural world will fail.

What is needed within faculty development is a more intentional focus on how faculty can develop their own metacognitive skills, and on how they can support the metacognitive skill development of their students. A recent report by Academic Impressions reveals that, while virtually all institutions of higher education proclaim an emphasis on professional development, more than half of faculty perceive that emphasis to be little more than talk. Only about 42% of institutions give professional development mission-critical status and actively support it for their faculty and staff (Mrig, Fusch, & Cook, 2014). Highly effective institutions are proactive in directing professional development to meet emerging needs – perhaps this is where an emphasis on metacognition will take hold.
To that end, it is encouraging to see the initiative for a research study of metacognitive instruction on our own Improve with Metacognition site.

See https://www.improvewithmetacognition.com/researching-metacognition/ for the Call to Participate.

Reference

Mrig, A., Fusch, D., & Cook, P. (2014). The state of professional development in higher ed. Academic Impressions. http://www.academicimpressions.com/professional-development-md/?qq=29343o721616qY104


Self-Assessment, It’s A Good Thing To Do

by Stephen Fleisher, CSU Channel Islands

McMillan and Hearn (2008) stated persuasively that:

In the current era of standards-based education, student self-assessment stands alone in its promise of improved student motivation and engagement, and learning. Correctly implemented, student self-assessment can promote intrinsic motivation, internally controlled effort, a mastery goal orientation, and more meaningful learning (p. 40).

In her study of three meta-analyses of medical students’ self-assessment, Blanch-Hartigan (2011) reported that self-assessments proved fairly accurate and improved in the later years of study. She noted that if we want to increase our understanding of self-assessment and facilitate its improvement, we need to attend to a few matters. To understand the causes of over- and underestimation, we need to address direction in our analyses (using paired comparisons) along with our correlational studies. We also need to examine key moderators affecting self-assessment accuracy, for instance “how students are being inaccurate and who is inaccurate” (p. 8). Further, the wording and alignment of our self-assessment questions, relative to the criteria and nature of our performance questions, are essential to accurately understanding these relationships.

When we establish strong and clear relationships between our self-assessment and performance questions for our students, we facilitate their use of metacognitive monitoring (self-assessment, and attunement to progress and achievement), metacognitive knowledge (understanding how their learning works and how to improve it), and metacognitive control (changing efforts, strategies or actions when required). As instructors, we can then also provide guidance when performance problems occur, reflecting on students’ applications and abilities with their metacognitive monitoring, knowledge, and control.

Self-Assessment and Self-Regulated Learning

For Pintrich (2000), self-regulating learners set goals, and activate prior cognitive and metacognitive knowledge. These goals then serve to establish criteria against which students can self-assess, self-monitor, and self-adjust their learning and learning efforts. In monitoring their learning process, skillful learners make judgments about how well they are learning the material, and eventually they become better able to predict future performance. These students can attune to discrepancies between their goals and their progress, and can make adjustments in learning strategies for memory, problem solving, and reasoning. Additionally, skillful learners tend to attribute low performance to low effort or ineffective use of learning strategies, whereas less skillful learners tend to attribute low performance to an over-generalized lack of ability or to extrinsic factors like teacher ability or unfair exams. These more adaptive attributions matter because they are associated with deeper rather than surface learning, positive affective experiences, improved self-efficacy, and greater persistence.

Regarding motivational and affective experiences, self-regulating learners adjust their motivational beliefs in relation to their values and interests. Engagement improves when students are interested in and value the course material. Importantly, student motivational beliefs are set in motion early in the learning process, and it is here that instructional skills are most valuable. Regarding self-regulation of behavior, skillful learners see themselves as in charge of their time, tasks, and attention. They know their choices, they self-initiate their actions and efforts, and they know how and when to delay gratification. As well, these learners are inclined to choose challenging tasks rather than avoid them, and they know how to persist (Pintrich, 2000).

McMillan and Hearn (2008) summarize the role and importance of self-assessment:

When students set goals that aid their improved understanding, and then identify criteria, self-evaluate their progress toward learning, reflect on their learning, and generate strategies for more learning, they will show improved performance with meaningful motivation. Surely, those steps will accomplish two important goals—improved student self-efficacy and confidence to learn—as well as high scores on accountability tests (p. 48). 

As a teacher, I see one of my objectives as discovering ways to encourage the development of these intellectual tools and methods of thinking in my own students. For example, in one of my most successful courses, a colleague and I worked at great length to create a full set of specific course learning outcomes (several per chapter, plus competencies we cared about personally, for instance, life-long learning). These course outcomes were all established and set into alignment with the published student learning outcomes for the course. Lastly, homework, lectures, class activities, individual and group assignments, plus formative and summative assessments were created and aligned. By the end of this course, students have not only gained knowledge about psychology, but also tend to be pleasantly surprised to have learned about their own learning.

 

References

Blanch-Hartigan, D. (2011). Medical students’ self-assessment of performance: Results from three meta-analyses. Patient Education and Counseling, 84, 3-9.

McMillan, J. H., & Hearn, J. (2008). Student self-assessment: The key to stronger student motivation and higher achievement. Educational Horizons, 87(1), 40-49. http://files.eric.ed.gov/fulltext/EJ815370.pdf

Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.) Handbook of self-regulation. San Diego, CA: Academic.


Evidence for metacognition as executive functioning

by Kristen Chorba, PhD and Christopher Was, PhD, Kent State University

Several authors have noted that metacognition and executive functioning are descriptive of a similar phenomenon (see Fernandez-Duque et al., 2000; Flavell, 1987; Livingston, 2003; Shimamura, 2000; Souchay & Isingrini, 2004). Many similarities can be seen between these two constructs: both regulate and evaluate cognitions, both are employed in problem solving, both are required for voluntary actions (as opposed to automatic responses), and more. Fernandez-Duque et al. (2000) suggest that, despite their similarities, these two areas have not been explored together because of a divide between metacognitive researchers and cognitive neuroscientists; the metacognitive researchers have looked exclusively at metacognition, focusing on issues related to its development in children and its implications for education. They have preferred to conduct experiments in naturalistic settings, as a way to maximize the possibility that any information gained could have practical applications. Cognitive neuroscientists, on the other hand, have explored executive functioning using neuroimaging techniques, with the goal of linking it to brain structures. In the metacognitive literature, it has been noted that metacognition occurs in the frontal cortex; this hypothesis has been evaluated in patients with memory disorders, and studies have noted that patients with frontal lobe damage, including some patients with amnesia, had difficulties performing metacognitive functions, including feeling-of-knowing (FOK) judgments (Fernandez-Duque et al., 2000; Janowsky, Shimamura, & Squire, 1989; Shimamura & Squire, 1986; as cited in Shimamura, 2000). Additionally, source monitoring and information retrieval have also been linked with the frontal cortex; source monitoring is an important metacognitive judgment (Shimamura, 2000).

As previously stated, executive functions appear to be located generally in the frontal lobes, with contributions from other specific areas of the brain, consistent with the growing body of literature indicating that executive functions are both correlated and functionally independent. To explore the link between executive functioning and metacognition, Souchay and Isingrini (2004) carried out an experiment in which subjects were first asked to make evaluations of their own metacognition; they were then given a series of neurological tests to assess their executive functioning. They not only found a “significant partial correlation between metamemory control and executive functioning” (p. 89) but, after performing a hierarchical regression analysis, found that “age-related decline in metamemory control may be largely the result of executive limitations associated with aging” (p. 89).

As it relates to executive functioning, Fernandez-Duque et al. (2000) noted that “the executive system modulates lower level schemas according to the subject’s intentions . . . [and that] without executive control, information processing loses flexibility and becomes increasingly bound to the external stimulus” (p. 289). These authors use the terms executive function and metacognition as essentially interchangeable, and note that these functions enable humans to “guide actions” where preestablished schemas are not present and allow the individual to make decisions, select appropriate strategies, and successfully complete a task. Additionally, both metacognition and executive functions operate primarily through top-down strategies that inform the lower level (i.e., in metacognition, the object level; in executive functioning, the “selection, activation, and manipulation of information in working memory” [Shimamura, 2000, p. 315]). Reviewing the similarities between metacognition and executive function, it seems that they are highly correlated constructs and perhaps share certain functions.

Executive functions and metacognition, while exhibiting similar functions and characteristics, have largely been investigated along separate lines of research. Metacognitive research has focused on application and on informing the teaching and learning processes. Executive functions, on the other hand, have primarily been researched as they relate to structures and locations within the brain. Recent literature and research indicate that executive functions and metacognition may be largely the same process.

References

Baddeley, A. (2005). Human memory: Theory and practice (Rev. ed.). United Kingdom: Bath Press.

Blavier, A., Rouy, E., Nyssen, A., & DeKeyster, V. (2005). Prospective issues for error detection. Ergonomics, 7(10), 758-781.

Dinsmore, D., Alexander, P., & Loughlin, S. (2008). Focusing the conceptual lens on metacognition, self-regulation, and self-regulated learning. Educational Psychology Review, 20(4), 391-409.

Dunlosky, J., & Metcalfe, J. (2008). Metacognition. Los Angeles: Sage.

Fernandez-Duque, D., Baird, J., & Posner, M. (2000). Executive attention and metacognitive regulation. Consciousness and Cognition, 9, 288-307.

Flavell, J. (1987). Speculations about the nature and development of metacognition. In F. Weinert and R. H. Kluwe, (Eds.) Metacognition, Motivation, and Understanding. Hillsdale, NJ: Lawrence Erlbaum.

Friedman, N. P., Haberstick, B. C., Willcutt, E. G., Miyake, A., Young, S. E., Corley, R. P., & Hewitt, J. K. (2007). Greater attention problems during childhood predict poorer executive functioning in late adolescence. Psychological Science, 18(10), 893-900.

Friedman, N. P., Miyake, A., Young, S. E., DeFries, J. C., Corley, R. P., & Hewitt, J. K. (2008). Individual differences in executive functions are almost entirely genetic in origin. Journal of Experimental Psychology: General, 137(2), 201-225.

Friedman, N. P., Miyake, A., Corley, R. P., Young, S. E., DeFries, J. C., & Hewitt, J. K. (2006). Not all executive functions are related to intelligence. Psychological Science, 17(2), 172-179.

Georghiades, P. (2004). From the general to the situated: Three decades of metacognition research. International Journal of Science Education, 26(3), 365-383.

Higham, P. A., & Gerrard, C. (2005). Not all errors are created equal: Metacognition and changing answers on multiple-choice tests. Canadian Journal of Experimental Psychology, 59(1), 28-34.

Keith, N., & Frese, M. (2005). Self-regulation in error management training: Emotion control and metacognition as mediators of performance effects. Journal of Applied Psychology, 90(4), 677-691.

Keith, N. & Frese, M. (2008). Effects of error management training: A meta-analysis. Journal of Applied Psychology, 93(1), 59-69.

Lajoie, S. (2008). Metacognition, self regulation, and self-regulated learning: A rose by any other name? Educational Psychology Review, 20(4), 469-475.

Livingston, J. A. (2003). Metacognition: An overview. Online ERIC Submission.

Miyake, A., Friedman, N. P., Emerson, M. J., Witzki, A. H., & Howerter, A. (2000). The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cognitive Psychology, 41, 49-100.

Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and new findings. In G. H. Bower (Ed.), The Psychology of Learning and Knowing (pp. 1-26). Cambridge, MA: MIT Press.

PP, N. (2008). Cognitions about cognitions: The theory of metacognition. Online ERIC Submission.

Shimamura, A. (2000). Toward a cognitive neuroscience of metacognition. Consciousness and Cognition, 9, 313-323.

Souchay, C., & Isingrini, M. (2004). Age related differences in metacognitive control: Role of executive functioning. Brain and Cognition, 56(1), 89-99.

Thiede, K. W., & Dunlosky, J. (1994). Delaying students’ metacognitive monitoring improves their accuracy in predicting their recognition performance. Journal of Educational Psychology, 86(2), 290-302.

Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J. Hacker, J. Dunlosky, & A. Graesser (Eds.), Metacognition in educational theory and practice (pp. 277-304). Hillsdale, NJ: Lawrence Erlbaum.


Self-assessment and the Affective Quality of Metacognition: Part 2 of 2

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

In Part 1, we noted that knowledge surveys ask individuals to rate their present ability to meet each of roughly one hundred to two hundred challenges forthcoming in a course. An example can reveal how the writing of knowledge survey items is similar to the authoring of assessable Student Learning Outcomes (SLOs). A knowledge survey item example is:

I can employ examples to illustrate key differences between the ways of knowing of science and of technology.

In contrast, SLOs are written to be preceded by the phrase “Students will be able to….” Further, knowledge survey items always solicit engaged responses that are observable. Well-written knowledge survey items exhibit two parts: one affective, the other cognitive. The cognitive portion communicates the nature of an observable challenge, and the affective component solicits expression of felt confidence in the claim, “I can….” To be meaningful, readers must explicitly understand the nature of the challenges. Broad statements such as “I understand science” or “I can think logically” are not sufficiently explicit. Each response to a knowledge survey item offers a metacognitive self-assessment, expressed as an affective feeling of competency specific to the cognitive challenge the item delivers.

Self-Assessed Competency and Direct Measures of Competency

Three competing hypotheses exist regarding the relationship between self-assessed competency and actual performance. One asserts that self-assessed competency is nothing more than random “noise” (https://www.koriosbook.com/read-file/using-student-learning-as-a-measure-of-quality-in-hcm-strategists-pdf-3082500/; http://stephenporter.org/surveys/Self%20reported%20learning%20gains%20ResHE%202013.pdf). Two others allow that self-assessment is measurable. When compared with actual performance, one hypothesis maintains that people typically overrate their abilities and generally are “unskilled and unaware of it” (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2702783/). The other, the “blind insight” hypothesis, indicates the opposite: a positive relationship exists between confidence and judgment accuracy (http://pss.sagepub.com/content/early/2014/11/11/0956797614553944).

Suitable resolution of the three requires data acquired from paired instruments of known reliability and validity. Both instruments must be highly aligned so that they address the same learning construct. The Science Literacy Concept Inventory (SLCI), a 25-item test administered to over 17,000 participants, produces competency data with a Cronbach alpha reliability of .84 and possesses content, construct, criterion, concurrent, and discriminant validity. Participants (N = 1154) who took the SLCI also took a knowledge survey (the KS-SLCI, Cronbach alpha reliability .93) that produced a self-assessment measure based on the identical 25 SLCI items. The two instruments are reliable and tightly aligned.

If knowledge surveys register random noise, then data furnished by human subjects will differ little from data generated with random numbers. Figure 1 reveals that data simulated from random numbers 0, 1, and 2 yield zero reliability, whereas real data consistently show reliability measures greater than R = .9. Whatever quality knowledge surveys register, it is not “random noise”: each person’s self-assessment score is consistent and characteristic.
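The random-noise check is easy to reproduce. Below is a minimal sketch (not the study’s actual code) that fills a 25-item survey with random 0/1/2 confidence ratings for 1154 simulated respondents and computes a split-halves reliability; it assumes an odd-even item split with the Spearman-Brown correction, since the exact split procedure is not specified here. The near-zero result illustrates why real respondents’ reliabilities above .9 cannot be noise.

```python
import random

random.seed(42)
N_RESPONDENTS, N_ITEMS = 1154, 25

# Purely random responses on a 0/1/2 confidence scale, mirroring
# the random-number control described in the text.
data = [[random.choice([0, 1, 2]) for _ in range(N_ITEMS)]
        for _ in range(N_RESPONDENTS)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(rows):
    # Sum each respondent's odd-numbered and even-numbered items,
    # correlate the two half-scores, then apply the Spearman-Brown
    # correction to estimate full-test reliability.
    odd = [sum(r[0::2]) for r in rows]
    even = [sum(r[1::2]) for r in rows]
    r = pearson(odd, even)
    return 2 * r / (1 + r)

print(round(split_half_reliability(data), 3))  # near zero for random data
```

Running the same computation on actual knowledge-survey responses, rather than simulated ones, is what produces the > .9 reliabilities shown in Figure 1.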


Figure 1. Split-halves reliabilities of 25-item KS-SLCI knowledge surveys produced by 1154 random numbers (left) and by 1154 actual respondents (right). 

The correlation between the 1154 actual performances on the SLCI and the self-assessed competencies from the KS-SLCI is a highly significant r = 0.62. Of the 1154 participants, 41.1% self-assessed their actual performance within ±10%, 25.1% proved to be under-estimators, and 33.8% were over-estimators.

Because each of the 25 SLCI items poses challenges of varying difficulty, we could also test whether participants’ self-assessments gleaned from the knowledge survey did or did not show a relationship to the actual difficulty of items as reflected by how well participants scored on each of them. The collective self-assessments of participants revealed an almost uncanny ability to reflect the actual performance of the group on most of the twenty-five items (Figure 2), thus supporting the “blind insight” hypothesis. Knowledge surveys appear to register meaningful metacognitive measures, and results from reliable, aligned instruments reveal that people do generally understand their degree of competency.


Figure 2. 1154 participants’ average scores on each of 25 SLCI items correspond well (r = 0.76) to their average scores predicted by knowledge survey self-assessments.

Advice in Using Knowledge Surveys to Develop Metacognition

  • In developing competency in metadisciplinary ways of knowing, furnish a bank of numerous explicit knowledge survey items that scaffold novices into considering the criteria that experts consider to distinguish a specific way of knowing from other ways of thinking.
  • Keep students in constant contact with self-assessing by redirecting them repeatedly to specific blocks of knowledge survey items relevant to tests and other evaluations and engaging them in debriefings that compare their self-assessments with performance.
  • Assign students in pairs to do short class presentations that address specific knowledge-survey items while having the class members monitor their evolving feelings of confidence to address the items.
  • Use the final minutes of the class period to enlist students in teams in creating alternative knowledge survey items that address the content covered by the day’s lesson.
  • Teach students Bloom’s Taxonomy of the Cognitive Domain (http://orgs.bloomu.edu/tale/documents/Bloomswheelforactivestudentlearning.pdf) so that they can recognize both the level of challenge and the feelings associated with constructing and addressing different levels of challenge.

Conclusion: Why use knowledge surveys?

  • Their skillful use offers students many practices in metacognitive self-assessment over the entire course.
  • They organize our courses in a way that offers full transparent disclosure.
  • They convey our expectation standards to students before a course begins.
  • They serve as an interactive study guide.
  • They can help instructors enact instructional alignment.
  • They might be the most reliable assessment measure we have.



The Stakes: “You’ve Been Mucking With My Mind”

by Craig Nelson, Indiana University

Earlier on this blog site, Ed Nuhfer (2014, Part 1, Part 2) urged us to consider the fundamental importance of Perry’s book (1970, 1999) for understanding what we are trying to do in fostering critical thinking, metacognition and other higher order outcomes. I enthusiastically agree.

I read Perry’s book (1970) shortly after it was published. I had been teaching at IU for about five years and had seen how difficult it was to effectively foster critical thinking, even in college seniors. Perry’s book transformed my thinking and my teaching. I realized that much of my own thinking was still essentially what he might have called sophomoric. I had to decide if I was convinced enough to fundamentally change how I thought. Once I began to come to grips with that I saw that Perry’s synthesis of his students’ experiences really mattered for teaching.

Perry made clear that there were qualitatively very different ways to think. Some of these ways included what I had been trying to get my students to master as critical thinking. But Perry helped me understand more explicitly what that might mean. More importantly, perhaps, he also helped me understand how to conceptualize the limits of my approach and what kinds of critical thinking the students would need to master next if I were to be successful. At the deepest level Perry helped me see that the issues were not only how to think in sophisticated ways. The real problems are more in the barriers and costs to thinking in more sophisticated ways.

And so I began. For about five years I taught in fundamentally new ways, challenging the students’ current modes of thinking and trying to address the existential barriers to change (Nelson 1989, 1999). I then decided that perhaps I should let the students more fully in on what I was doing. I thought it might be helpful to have them actually read excerpts from Perry’s book. This was a challenging thought. I was teaching a capstone course for biology majors. Perry’s “Forms of Intellectual and Ethical Development: A Scheme” was rather clearly not the usual fare for such a course. So I decided to introduce it about halfway into the course, after the students had been working within a course framework designed to foster a deep understanding of scientific thinking.

I had incorporated a full period discussion each week for which the students prepared a multiple-page worksheet analyzing a reading assignment (See the Red Pen Worksheet; Nelson 2009, 2010a). Most of the students responded very positively to Perry as a discussion assignment (Perry Discussion Assignment1; Perry Selected Passages; Ingram and Nelson 2006, 2009).

For the final discussion at the end of the course, one of the questions was approximately: “Science is always changing. I will want to introduce new readings next time. Of the ones we read, which three should I consider replacing and why, and which three should I most certainly keep and why?” Perry was the reading most frequently cited as among the “most certainly keep.” Indeed, reactions were so strongly positive that comments even included: “I personally got little from Perry but the others in my group found it so valuable that you have to keep it.”

One subsequent year I assigned Perry on a Tuesday to be read for the discussion period scheduled the next week. The next morning I arrived at my office at 8 am. One of my students was sitting outside my office on the floor. I greeted her by name and asked if she was waiting to talk to me. “Why, yes.” (She skipped “duh.” Note that we are into seriously deviant behavior here: residential campus at 8 am, no appointment, no reason to think I would be there at that time.) After a few pleasantries she announced: “I read Perry last night.” (Deviance was getting thicker: She had read the assignment immediately and a week before it was due!) “I finally understand what you are trying to do in this course, and I really like it.” (This was a fairly common reaction as Perry provided a metacognitive framework that allowed the students to more deeply understand the purpose of the assignments we had been doing2.) “And I liked Perry a lot, too.” (I am thinking: it is 8 am, she can’t be here simply to rave about an assignment.) “But, I am a bit mad at you. You have been mucking3 with my mind. College courses don’t do that! I haven’t had a course muck with my mind since high school. And, I just wanted to say that you should have warned me!” (She was a college senior.) I agreed that I should have warned her and apologized. She seemed satisfied with this and we parted on good terms.

However, I was not satisfied with this state of affairs. She had felt violated by my trying to foster changes in how she thought. And I guessed that many of my other students probably had felt at least twinges of the same feelings. This led me to ask myself: “What is the difference between indoctrination and meaningful but fair education?” I concluded that fair educational practice would require trying to make the agenda public and understood before trying to change students’ minds. This openness would require more than just assigning a reading in the middle of the semester. Thereafter, I always included a non-technical summary of Perry in my first day classes (as in Nelson 2010 b) and usually assigned Perry as the second discussion reading, having used the first discussion to start mastering the whole-period discussion process.

I would generalize my conclusion here. We instructors (almost?) always need to keep students aware of our highest-level objectives in order to avoid indoctrination rather than fair education. Fortunately, I think that this approach also will often facilitate student mastery of these objectives. It is nice when right seems to match effective, eh?

 

1I decided somewhat reluctantly to include the details of this assignment. I strongly suggest that you read this short book and select the pages and passages that seem most relevant to the students and topics that you are teaching.

2Students find it especially easy to connect with Perry’s writing. He includes numerous direct quotations from interviews with students.

3She used a different initial letter in “mucking,” as you may have expected.

————-

Ingram, Ella L. & Craig E. Nelson. 2006. Relationship between achievement and students’ acceptance of evolution or creation in an upper-level evolution course. Journal of Research in Science Teaching 43:7-24.

Ingram, Ella L. & Craig E. Nelson. 2009. Applications of intellectual development theory to science and engineering education. P 1-30 in Gerald F. Ollington (Editor), Teachers and Teaching: Strategies, Innovations and Problem Solving. Nova Science Publishers.

Nelson, Craig E. (1986). Creation, evolution, or both? A multiple model approach. P 128–159 in Robert W. Hanson (Editor), Science and Creation: Geological, Theological, and Educational Perspectives New York: MacMillan.

Nelson, Craig E. (1989). Skewered on the unicorn’s horn: The illusion of a tragic tradeoff between content and critical thinking in the teaching of science. P 17–27 in Linda Crowe (Editor), Enhancing Critical Thinking in the Sciences. Washington, DC: Society of College Science Teachers.

Nelson, Craig E. (1999). On the persistence of unicorns: The tradeoff between content and critical thinking revisited. P 168–184 in Bernice A. Pescosolido & Ronald Aminzade (Editors), The Social Worlds of Higher Education: Handbook for Teaching in a New Century. Thousand Oaks, CA: Pine Forge Press.

Nelson, Craig E. 2009. The “Red Pen” Worksheet. Quick Start Series. Center for Excellence in Learning & Teaching. Humboldt State University. 2 pp. [Edited excerpt from Nelson 2010 a.]

Nelson, Craig E. (2010 a). Want brighter, harder working students? Change pedagogies! Examples from biology. P 119–140 in Barbara J. Millis (Editor), Cooperative Learning in Higher Education: Across the Disciplines, Across the Academy. Sterling, VA: Stylus.

Nelson, Craig E. (2010 b). Effective Education for Environmental Literacy. P 117-129 in Heather L. Reynolds, Eduardo S. Brondizio, and Jennifer Meta Robinson with Doug Karpa and Briana L. Gross (Editors). Teaching Environmental Literacy in Higher Education: Across Campus and Across the Curriculum. Bloomington, IN: Indiana University Press.

Nelson, Craig E. 2012. Why Don’t Undergraduates Really ‘Get’ Evolution? What Can Faculty Do? P 311-347 in Karl S. Rosengren, Sarah K. Brem, E. Margaret Evans, & Gale M. Sinatra (Editors.) Evolution Challenges: Integrating Research and Practice in Teaching and Learning about Evolution. Oxford University Press.

Nuhfer, Ed. (2104a). Metacognition for guiding students to awareness of higher-level thinking (part 1). Retrieved from https://www.improvewithmetacognition.com/metacognition-for-guiding-students-to-awareness-of-higher-level-thinking-part-1/

Nuhfer, Ed. (2104b). Metacognition for guiding students to awareness of higher-level thinking (part 2). Retrieved from https://www.improvewithmetacognition.com/metacognition-for-guiding-students-to-awareness-of-higher-level-thinking-part-2/

Perry, William G., Jr. (1970). Forms of Intellectual and Ethical Development in the College YearsA Scheme. New York: Holt, Rinehart, and Winston.

Perry, William G., Jr. (1999). Forms of Ethical and Intellectual Development in the College Years: A Scheme. (Reprint of the 1968 1st edition with a new introduction by Lee Knefelkamp). San Francisco: Jossey-Bass.


Metacognition, Self-Regulation, and Trust

by  Dr. Steven Fleisher, CSU Channel Islands, Department of Psychology

Early Foundations

I’ve been thinking lately about my journey through doctoral work, which began with studies in Educational Psychology. I was fortunate to be selected by my Dean, Robert Calfee, Graduate School of Education at University of California Riverside, to administer his national and state grants in standards, assessment, and science and technology education. It was there that I began researching self-regulated learning.

Self-Regulated Learning

Just before starting that work, I had completed a Master’s degree in Marriage and Family Counseling, so I was thrilled to discover the relevance of the self-regulation literature. For example, I found it interesting that self-regulation studies began back in the 1960s with examinations of the development of self-control in children. The framework that evolved for self-regulation back then involved the interaction of personal, behavioral, and environmental factors. Later research in self-regulation focused on motivation, health, mental health, physical skills, career development, decision-making, and, most notably for our purposes, academic performance and success (Zimmerman, 1990); this line of work became known as self-regulated learning.

Since the mid-1980s, self-regulated learning researchers have studied the question: How do students progress toward mastery of their own learning? Pintrich (2000) noted that self-regulated learning involved “an active, constructive process whereby learners set goals for their learning and then attempt to monitor, regulate, and control their cognition, motivation, and behavior, guided and constrained by their goals and the contextual features in the environment” (p. 453). Zimmerman (2001) then established that, “Students are self-regulated to the degree that they are metacognitively, motivationally, and behaviorally active participants in their own learning process” (p. 5). Thus, self-regulated learning theorists believe that learning requires students to become proactive and self-engaged in their learning, and that learning does not happen to them, but by them (see also Leamnson, 1999).

Next Steps

And then everything changed for me. My Dean invited Dr. Bruce Alberts, then President of the National Academy of Sciences, to come to our campus and lecture on science and technology education. Naturally, as Calfee’s Graduate Student Researcher, I asked “Bruce” what he recommended for bringing my research in self-regulated learning to the forefront. His recommendation was to study the then-understudied role and importance of the teacher-student relationship. Though it required changing doctoral programs, I did it, adding a Doctorate in Clinical Psychology to several years of coursework in Educational Psychology.

Teacher-Student Relationships 

Well, enough about me. It turns out that effective teacher-student relationships provide the foundation from which trust and autonomy develop (I am skipping a lengthy discussion of the psychological principles involved). Suffice it to say, where clear structures are in place (i.e., standards) as well as support, social connections, and the space for trust to develop, students have increased opportunities for exploring how their studies are personally meaningful and supportive of their autonomy, thereby taking charge of their learning.

Additionally, when we examine a continuum of extrinsic to intrinsic motivation, we find the same principles involved as with a scale showing minimum to maximum autonomy, bringing us back to self-regulated learning. Pintrich (2000) included the role of motivation in his foundations for self-regulated learning. Specifically, he reported that a goal orientation toward performance arises when students are motivated extrinsically (i.e., focused on ability as compared to others); however, a goal orientation toward mastery occurs when students are motivated more intrinsically (i.e., focused on effort and learning that is meaningful to them).

The above concepts can help us define our roles as teachers. For instance, we are doing our jobs well when we choose and enact instructional strategies that not only communicate clearly our structures and standards but also provide needed instructional support. I know that when I use knowledge surveys, for example, in building a course and for disclosing to my students the direction and depth of our academic journey together, and support them in taking meaningful ownership of the material, I’m helping their development of metacognitive skill and autonomous self-regulated learning. We teachers can help improve our students’ experience of learning. For them, learning in order to get the grades pales in comparison to learning a subject that engages their curiosity, along with investigative and social skills that will last a lifetime.

References

Leamnson, R. (1999). Thinking about teaching and learning: Developing habits of learning with first year college and university students. Sterling, VA: Stylus.

Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.) Handbook of self-regulation. San Diego, CA: Academic.

Zimmerman, B. J. (1990). Self-regulating academic learning and achievement: The emergence of a social cognitive perspective. Educational Psychology Review, 2(2), 173-201.

Zimmerman, B. J. (2001). Theories of self-regulated learning and academic achievement: An overview and analysis. In B. J. Zimmerman & D. H. Schunk (Eds.) Self-regulated learning and academic achievement: Theoretical perspectives (2e). New York: Lawrence Erlbaum.


Self-assessment and the Affective Quality of Metacognition: Part 1 of 2

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

In The Feeling of What Happens: Body and Emotion in the Making of Consciousness (1999, New York: Harcourt), Antonio Damasio distinguished two manifestations of the affective domain: emotions (the external, observable expressions of affect) and feelings (the internal, private experience of one’s own affect). Enacting self-assessment constitutes an internal, private, and introspective metacognitive practice.

Benjamin Bloom recognized the importance of the affective domain’s involvement in successful cognitive learning, but for a time psychologists dismissed the influence of both affect and metacognition on learning (see Damasio, 1999; Dunlosky & Metcalfe, 2009, Metacognition, Los Angeles: Sage). To avoid repeating these mistakes, we should recognize that attempts to develop students’ metacognitive proficiency without attending to metacognition’s affective qualities are likely to be minimally effective.

In academic self-assessment, an individual must look at a cognitive challenge and accurately decide her/his capability to meet that challenge with present knowledge and resources. Such decisions do not spring only from thinking cognitively about one’s own mental processes. Affirming that “I can” or “I cannot” meet “X” (the cognitive challenge) with current knowledge and resources draws from affective feelings contributed by conscious and unconscious awareness of what is likely to be an accurate decision.

“Blind insight” (http://pss.sagepub.com/content/early/2014/11/11/0956797614553944) is a new term in the literature of metacognition. It describes an unconscious awareness, manifested as a feeling, that supports sensing the correctness of a decision. “Blind insight” and “metacognitive self-assessment” seem to overlap with one another and with Damasio’s “feelings.”

Research in medical schools found that students’ self-assessment skills remained essentially unchanged throughout medical education (http://files.eric.ed.gov/fulltext/ED410296.pdf). Two hypotheses compete to explain this finding. One is that self-assessment skills are established early in life and cannot be improved in college. The other is that self-assessment skill remains fixed in post-secondary education only because it is so rarely taught or developed. The first hypothesis seems contradicted by the evidence supporting brain plasticity, constructivist theories of learning and motivation, metacognition theory, and self-efficacy theory (http://files.eric.ed.gov/fulltext/EJ815370.pdf), and by experiments confirming that self-assessment is a learnable skill that improves with training (http://psych.colorado.edu/~vanboven/teaching/p7536_heurbias/p7536_readings/kruger_dunning.pdf).

Nursing is perhaps the discipline that has most recognized the value of developing intuitive feelings informed by knowledge and experience as part of educating for professional practice.

“At the expert level, the performer no longer relies on an analytical principle (rule, guideline, maxim) to connect her/his understanding of the situation to an appropriate action. The expert nurse, with her/his enormous background of experience, has an intuitive grasp of the situation and zeros in on the accurate region of the problem without wasteful consideration of a large range of unfruitful possible problem situations. It is very frustrating to try to capture verbal descriptions of expert performance because the expert operates from a deep understanding of the situation, much like the chess master who, when asked why he made a particularly masterful move, will just say, ‘Because it felt right. It looked good.’” (Patricia Benner, 1982, “From novice to expert,” American Journal of Nursing, 82(3), 402-407)

Teaching metacognitive self-assessment should aim, in part, at improving students’ ability to clearly recognize the “feels right” quality that signals whether they can meet a challenge with their present abilities and resources. Developing such capacity requires practice in committing errors and learning from them through metacognitive reflection. In such practice, the value of knowledge surveys (see http://profcamp.tripod.com/KS.pdf and http://profcamp.tripod.com/Knipp_Knowledge_Survey.pdf) becomes apparent.

Knowledge surveys (tutorials for constructing knowledge surveys, with downloadable examples, are available at http://elixr.merlot.org/assessment-evaluation/knowledge-surveys/knowledge-surveys2) consist of roughly one hundred to two hundred questions or items relevant to course learning objectives. Each item asks individuals to self-assess by rating their present ability to meet a challenge on a three-point multiple-choice scale:

A. I can fully address this item now for graded test purposes.
B. I have partial knowledge that permits me to address at least 50% of this item.
C. I am not yet able to address this item adequately for graded test purposes.

and thereafter to monitor their mastery as the course unfolds.
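Since a knowledge survey’s value lies in tracking these ratings over time, here is a minimal sketch of how that monitoring might be tallied. It is a hypothetical illustration only, not Nuhfer’s published method: the numeric weights assigned to the A/B/C choices, and all names in the code, are my own assumptions.

```python
# Illustrative sketch: the A/B/C codes come from the survey scale above,
# but the 1.0 / 0.5 / 0.0 weighting is an assumed choice for illustration.
WEIGHTS = {"A": 1.0, "B": 0.5, "C": 0.0}

def mastery_score(responses):
    """Mean self-assessed mastery (0.0 to 1.0) across all survey items."""
    return sum(WEIGHTS[r] for r in responses) / len(responses)

# One student's ratings on the same five items, early and late in the course:
pre = ["C", "C", "B", "C", "A"]
post = ["A", "B", "A", "A", "A"]
gain = mastery_score(post) - mastery_score(pre)
print(round(gain, 2))
```

Plotting such per-student (or per-item) gains across a semester gives the instructor and the students a concrete record of growing mastery to reflect on.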

In Part 2, we will examine why knowledge surveys are such powerful instruments for supporting students’ learning and metacognitive development, describe ways to employ knowledge surveys properly so as to induce measurable gains, and present some surprising results obtained from pairing knowledge surveys with a standardized assessment measure.


Mind the Feedback Gap

by Roman Taraban (Texas Tech University)


The saying “Mind the Gap” originated in 1969 as a warning to London Underground riders of the gap between the platform and the train. Since then, it has been broadly applied to situations in which something may be missing between where you are and where you want to be. The cautionary message sounded loudly this semester when I realized that my undergraduate students were not particularly interested in the constructive feedback they were receiving on their biweekly formative evaluations, which consisted of short-answer and brief essay responses on the course content. This was troubling, since I was trying to promote metacognition through my feedback. But I am getting a bit ahead of myself.

Feedback in the Classroom

Technology now affords instructors easy-to-use means of providing timely and detailed feedback on work that is submitted digitally. As one example, assignments can be sent to a website, and the instructor can use tools like “Track Changes” and “New Comment” in Microsoft Word™ to insert edits and comments in a clear and readable fashion. Beyond these basic digital tools, the coming of age of automated instructional tutors has brought with it a science of just-in-time feedback, synced with the computer’s best guess as to what a student knows at any given moment and providing little to extensive feedback and guidance, depending on a student’s ability and prior experience (Graesser et al., 2005; Koedinger et al., 1997). In terms of technology, instructors have broad options, from easy markup tools to software that will automatically grade papers. Indeed, there has never been a better time for developing and delivering effective feedback to students.

Students’ Perceptions of Feedback

The utility of feedback has been examined empirically, and this research has produced several practical suggestions (Koedinger et al., 1997; Shute, 2008). Students’ perceptions of feedback have been less extensively researched, but a few things are known. Weaver (2006) reported that students found several kinds of feedback unhelpful: comments that were general or vague, comments that provided no guidance for rethinking or revising, comments that focused on the negative, and comments that were unrelated to the task. On a more positive note, Higgins and Hartley (2002) surveyed college students and reported the criteria that over 75% of students considered important:

  • Comments that tell you what you could do to improve – 92%
  • Comments that explain your mistakes – 91%
  • Comments that focus on the level of critical analysis – 90%
  • Comments that focus on your argument – 89%
  • Comments that focus on the tutor’s overall impressions – 87%
  • Comments that tell you what you have done badly – 86%
  • Comments that focus on the subject matter – 82%
  • Comments that correct your mistakes – 80%
  • Feedback that tells you the grade – 79%
  • Comments that focus on your use of supporting evidence – 79% (p. 60)

Students’ Reactions to Feedback

For several semesters I have been following Weaver’s and Higgins and Hartley’s recommendations, using formative evaluations in an undergraduate class that prompt critical, reflective, and evaluative thinking on many of the questions. This semester, I dutifully edited and commented on students’ responses and electronically delivered these back to students. After the second formative evaluation, I announced that grades had been posted and that students who wanted more detailed comments should let me know, and I would email the comments as I had done for the first exam. Here is the irony: only 2 out of 30 students wanted the feedback. Assuring students that sending commented responses would not create extra work for me did not change the outcome on subsequent evaluations. Students simply did not care to hear my thoughts on their work. As it turns out, Higgins and Hartley (2002) had already anticipated my situation when they suggested that students may be extrinsically motivated to achieve a specific grade and to acquire related credentials, and may not be intrinsically motivated to reflect on their understanding of the material through the critical lens afforded by instructors’ comments.

Perceptions of Feedback – A Touchstone

Feedback may be a touchstone of metacognition. Often, to boost metacognition in the classroom, we implement tasks intended to evoke critical thinking. But what better way to increase metacognition than to develop in students a keener sense for feedback? In a way, deeply considering the teacher’s feedback requires “thinking about someone else’s thinking” in order to improve one’s own “thinking about thinking.” It appears that for too long, I have been over-estimating students’ interest in thinking critically about their own work. And as is true with the development of other cognitive abilities, several things will need to happen for change to occur. From my side, more “demandingness” may be required: being explicit about what I want, sensitizing students to my feedback through questioning and prompting, and scaffolding the process of reflecting on feedback through directed exercises. Most importantly, the feedback needs to have carry-over value for future student work.

It is generally accepted that feedback is an essential component of learning, providing a vehicle for thinking about one’s own thinking. Logically, alerting students to their strengths and weaknesses can provide the means by which they can reflect on how they thought through a task and how to constructively modify their approach in future work. None of this will happen, though, if students fail to consider the feedback. Wojtas (1998) warned of this possibility some years ago, when he reported on the research findings in one university, suggesting that some students were concerned only with their grade and not with substantive feedback. It may be helpful to pose the same stark question to our students in order to begin to close the feedback gap: Are you only interested in your grade?

My own experience has led me to other researchers confronting similarly disconcerting situations. Jollands et al. (2009) write, “teachers often feel their time is wasted when it is invested in marking work and making comments on assignments, only to see work not collected in class and then left at their doorstep at the end of semester. Even if it is collected the students might not read the feedback, and even if it is read, they might not act on it.” As Shute (2008) points out, “Feedback can promote learning, if it is received mindfully” (p. 172). In sum, feedback is necessary because it can give students something to think about and can prompt deeper levels of reflection. Feedback needs to be good if the gap is going to be closed. But good feedback alone is not enough: metacognition is necessary if feedback is going to lead to meaningful improvement, and students must process the feedback metacognitively if they are to close the gap. (Thanks to John Draeger for these summary points!)

References

Graesser, A. C., McNamara, D., & VanLehn, K. (2005). Scaffolding deep comprehension strategies through AutoTutor and iSTART. Educational Psychologist, 40, 225–234.

Higgins, R., & Hartley, P. (2002). The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27(1), 53-64. DOI:10.1080/03075070120099368

Jollands, M., McCallum, N., & Bondy, J. (2009). If students want feedback why don’t they collect their assignments? 20th Australasian Association for Engineering Education Conference, University of Adelaide, Australia.

Koedinger, K. R., Anderson, J. R., Hadley, W. H., & Mark, M. A. (1997). Intelligent tutoring goes to school in the big city. International Journal of Artificial Intelligence in Education, 8, 30-43.

Shute, V. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. DOI:10.3102/0034654307313795

Weaver, M. R. (2006). Do students value feedback? Student perceptions of tutors’ written responses. Assessment & Evaluation in Higher Education, 31(3), 379-394. DOI:10.1080/02602930500353061

Wojtas, O. (1998). Feedback? No, just give us the answers. Times Higher Education Supplement, September 25 1998.

 


Thinking about How Faculty Learn about Learning

By Cynthia Desrochers, California State University Northridge

Lately, two contradictory adages have kept me up nights: “K.I.S.S. – Keep It Simple, Stupid” (U.S. Navy) and “For every complex problem there is an answer that is clear, simple, and wrong” (H.L. Mencken). Which is it? Experts have a wealth of well-organized, conditionalized, and easily retrievable knowledge in their fields (Bransford et al., 2000). This may result in experts skipping over steps when they teach a skill that has become automatic to them. But where does this practice leave our novice learners, who need to be taught each small step—almost in slow motion—to begin to grasp a new skill?

I have just completed co-facilitating five of ten scheduled faculty learning community (FLC) seminars in a yearlong Five GEARS for Activating Learning FLC.  As a result of this experience, my takeaway note to self now reads in BOLD caps:  (1) keep it simple in the early stages of learning and (2) model the entire process and share my thinking out loud—no secrets hidden behind the curtains!

The Backstory

The Five Gears for Activating Learning project at California State University, Northridge, began in fall 2012. It was my idea, and I asked seven university-wide faculty leaders to join me in a grassroots effort. Our goals were to improve student learning from inside the classroom (vs. policy modifications), promote faculty use of the current research on learning, provide a lens for judging the efficacy of various teaching strategies (e.g., the flipped classroom), and develop a common vocabulary for use campuswide (e.g., personnel communications).  Support for this project came from the University Provost and the dean of the Michael D. Eisner College of Education in the form of reassigned time for me and 3-unit buyouts for each of the eight FLC members, spread over the entire academic year, 2014-15.

Our focus book was How Learning Works: Seven Research-Based Principles for Smart Teaching (Ambrose et al., 2010). We condensed Ambrose’s seven principles to five GEARS, one of which is Developing Mastery, which we defined as deep learning, reflection, and self-direction—critical elements of metacognition and the focus of this blog site.

On Keeping It Simple

I have been in education for forty-five years, yet I’m having many light-bulb moments with this FLC group – I’m learning something new, or reorganizing prior knowledge, or having increased clarity.  Hence, I’ve given a lot of thought to the conflict between keeping it simple and omitting some important elements versus sharing more complex definitions and relationships and overwhelming our FLC members. My rationale for choosing simple: If I am still learning about how learning works, how can I expect new faculty—who teach Political Science, Business Law, Research Applications, and African Americans in Film, all without benefit of a teaching credential—to process some eighty years of research on learning in two semesters?

In opting for the K.I.S.S. approach, we have developed a number of activities and tools that scaffold learning to use the five GEARS in our teaching; moreover, each activity or tool explicitly models with faculty some practices we are encouraging them to use with their students. These include (1) reflective writing in the form of learning logs and diaries, (2) an appraisal instrument for self-assessing their revised (using the GEARS) spring 2015 course design, and (3) a class-session plan to scaffold their use of the GEARS. [See the detailed descriptions in the handout resource posted on this site.] I hope to have some results data regarding their use in my spring blog.

Looking to next semester, our spring FLC projects will likely center around not only teaching the redesigned five GEARS course but also disseminating the five GEARS campuswide.  As a direct result of the Daily Diary that FLC members kept for three weeks on others’ use and misuse of the five GEARS, they want to share our work.  [See handout for further description of the Daily Diaries.] Dissemination possibilities include campus student tour guides, colleagues who teach a common course, Freshman Seminar instructors, librarians, and the Career Center personnel.  If another adage is true, “Tell me and I forget, teach me and I may remember, involve me and I learn” (Benjamin Franklin), our FLC faculty will likely move of their own accord along the continuum from a simple to complex understanding of the five GEARS in their efforts to teach the five GEARS to others on campus.

A Word about GEARS

Why is this blog not focusing solely on the metacognition gear, which we call Developing Mastery? The simple answer is that learning is so intertwined that all the GEARS likely support metacognition in some way.  However, any one of the activities or tools we have employed can be modified to limit the scope to your definition of metacognition.  Our postcard below shows all five GEARS:

[Postcard image: the five GEARS]


Transparency and Metacognition

by James Rhem (Executive Editor, National Teaching and Learning Forum)

Some readers may know that The National Teaching and Learning FORUM has undertaken a series of residencies on campuses across the country, looking at teaching and learning at a variety of institutions and at the efforts to support and improve it. Currently I’m at the University of Nevada, Las Vegas, where Mary-Ann Winkelmes is coordinator of instructional development and research.

One of the things Mary-Ann brought to Las Vegas from her previous work at the University of Chicago and the University of Illinois is something called the Transparency Project, a project carried out in collaboration with AAC&U. To my mind this approach to increasing students’ connections with and understanding of the assignments they’re given in the courses they take seems to have a lot to do with metacognition. Perhaps it’s a homely version, I’m not sure, but I think it’s something those interested in improving student performance through metacognitive awareness ought to know about. So, that’s my post for the moment. Take a look at the impressive body of research the project has already amassed, and the equally impressive results in improved student performance, and see if you don’t agree there’s a relation to metacognitive approaches, something to take note of.

Here’s a link to a page of information on the project with even more links to the research:

http://www.unlv.edu/provost/teachingandlearning


Metacognition for Purposeful Living

by Charity Peak, U.S. Air Force Academy*

One of my most remarkable professional experiences was teaching a humanities course called “Leading Lives That Matter.” After reading a variety of philosophical texts in an anthology by the same name, students explored the meaning of their lives.  Students identified what type of monuments they would want erected in their honor and wrote their own obituaries.  Surprisingly, it wasn’t a philosophy course; it was actually quite pragmatic.  It also wasn’t a higher level course for juniors or seniors.  And it wasn’t optional.  It was a mandatory requirement for all freshmen before embarking on their educational journeys.  The course was designed to help students reflect on why they were pursuing an education and determine a potential vision for their future after obtaining their degrees.  After all, if you’re going to spend thousands of dollars over several years, why not ask the important questions first?

Many students will say that they are going to school because they want to improve the lives of their families or because they hope to earn more money in a better paying job. But what do faculty do to help students see the grander vision?  What is an instructor’s role in supporting students to understand how their gifts and talents could transform the world around them?  Metacognition requires developing self-awareness and the ability to self-assess. It requires reflection about one’s education and learning – past, present, and future.  Helping students develop metacognitive skills is essential for them to become self-regulated learners with a vision for the future (Zimmerman, 2002).  Faculty, then, are ideally positioned to help students learn to leverage their self-awareness for purposeful living.

Each semester I teach, I encounter a student in distress, desperately struggling to identify how their education can help them develop into the person they wish to become. I am not a trained counselor, yet I often find myself coaching and mentoring these adults – both young and old – to determine how they can utilize their gifts to contribute to the world in some way.  As John F. Kennedy so eloquently shared with us, “One person can make a difference, and every person should try” (http://thinkexist.com).  But so many people seek higher education for such limited reasons – money and jobs.  Instead of being focused on how an education could aid students’ chances of surviving fiscally, what if faculty embraced their role of facilitating greatness?

This insight has transcended all of the settings in which I have taught throughout my career – from elementary school up through adult learning – but it has never been more evident than where I am now. Advising students at a military service academy lends itself to even more critical conversations about purpose and meaning. These young people are receiving a “free” education, which eases their family’s financial burden, but at a great price: risking their lives for their country. It becomes apparent very quickly that what brings students to a military service academy is not what keeps them there. While all college students go through a bit of an identity crisis, these students will potentially pay the ultimate price for their commitment to their country. They must be solid in their decision, and they must evolve into an altruistic state much earlier than the average college student. Money and job security are not enough to get you through a service academy’s rigor, let alone the life of a military officer. As a faculty member, it is my duty to help these young people make the right choice, one that is good for them and their country.

While these metacognitive insights are quite visible at a military service academy, all faculty should commit to the duty of enabling metacognitive reflection about purposeful living at their own institutions. They should facilitate courses, or even a series of conversations, that encourage students to be metacognitive about who they want to be when they grow up – not what job they want to possess (self-seeking) but how they want to contribute (community building).  In order to venture down this path, however, we as faculty and advisors need to provide ample time and guidance for self-reflection in and out of our courses.  We should help students to gain self-awareness about their gifts and talents so that they can see their collegiate journey as a path toward purposeful living.

If vocation is “the place where deep gladness and the world’s deep hunger meet” (Buechner, p. 112), faculty should consider what role they play in helping students to become metacognitive about how their talents lead to majors, which guide careers, and eventually become paths to greatness. By helping our students identify how to intentionally lead lives that matter, we too can benefit vicariously by renewing our spirit to teach.  After all, many of us chose this vocation because of our own yearning to live purposefully.

References:

Buechner, F. (2006). Vocation. In Schwehn, M. R., & Bass, D. C. (Eds.), Leading lives that matter: What we should do and who we should be (pp. 111-12). Grand Rapids, MI: William B. Eerdmans Publishing Co.

Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory into Practice 41(2): 64-70. Retrieved from http://www.jstor.org/stable/1477457

 

* Disclaimer: The views expressed in this document are those of the authors and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.