Positive Affective Environments and Self-Regulation

by Steven Fleisher at CSU Channel Islands

Although challenging at times, providing positive affective experiences is necessary for students' metacognitive and self-regulated learning. In a classroom, caring environments are established when teachers not only provide structure, but also attune to the needs and concerns of their students. As such, effective teachers establish environments of safety and trust, as opposed to mere environments of compliance (Fleisher, 2006). When trust is experienced, learning is facilitated through the mechanisms of autonomy support as students discover how the academic information best suits their needs. In other words, students are supported in moving toward greater intrinsic (as opposed to extrinsic) motivation and self-regulation, and ultimately enhanced learning and success (Deci & Ryan, 2002; Pintrich, 2003).

Autonomy and Self-Regulation

In an academic context, autonomy refers to students knowing their choices and alternatives and self-initiating their efforts in relation to those alternatives. For Deci and Ryan (1985, 2002), a strong sense of autonomy within a particular academic task would be synonymous with being intrinsically motivated and, thus, intrinsically self-regulated. On the other hand, a low sense of autonomy within a particular academic context would be synonymous with being extrinsically motivated and self-regulated. Students with a low sense of autonomy might say, “You just want us to do what you want, it’s never about us,” while students with a strong sense of autonomy might say, “We can see how this information may be useful someday.” The non-autonomous students feel controlled, whereas the autonomous students know they are in charge of their choices and efforts.

Even more relevant to the classroom, Pintrich (2003) reported that the more intrinsically motivated students have mastery goal-orientations (a focus on effort and effective learning strategy use) as opposed to primarily performance goal-orientations (in reality, a focus on defending one's ability). These two positions are best understood under conditions of failure. Performance-oriented students see failure as pointing out their innate inabilities, whereas mastery-oriented students see failure as an opportunity to reevaluate and reapply their efforts and strategies as they build their abilities. Thus, in the long run, mastery-oriented students end up "performing" the best academically.

The extrinsically motivated students perceive that the teacher, not they themselves, is in charge of whether or not they are rewarded for their work. This extrinsic orientation may facilitate performance; however, it can backfire. These students can become unwilling to put forth a full effort for fear of failure or judgment. They feel compelled to perform, which can result in a refusal to try to meet goals. They may come to prefer unchallenging courses, fail, or drop out entirely. On the other hand, students with intrinsic goal-orientations realize that they are in charge of their reasons for acting. Metacognitively, they are aware of their alternatives and strategies and self-regulate accordingly as they apply the necessary effort toward their learning tasks. These students would sense that the classroom provided an environment for exploring the subject matter in relevant and meaningful ways, and they would identify how and where to best apply their learning efforts.

Strategies for the Classroom

As with autonomy (minimum to maximum), motivation and self-regulation exist on a continuum (extrinsic to intrinsic) rather than at one end or the other. Here are two instructional strategies I have found to support students in their movement toward greater autonomy, intrinsic motivation, and self-regulation.

Knowledge surveys, for example, offer a course tool for organizing content learning and assessing student intellectual development (Nuhfer & Knipp, 2003). These surveys consist of questions that represent the breadth and depth of the course, including the main concepts, the related content information, and the different levels of reasoning to be practiced and assessed. I have found that using knowledge surveys to disclose to students where a course is going and why helps them take charge of their learning. This type of transparency helps students discover ways in which their learning efforts are effective.

Cooperative learning strategies (Millis & Cottell, 1998) provide an ideal counterpart to knowledge surveys. Cooperative learning (for instance, working in groups or teaching your neighbor) offers both positive learning and positive affective experiences. These learning experiences, between students and between teachers and students, support the development of autonomy, as well as intrinsic motivation and self-regulation. For example, when students work together effectively in applications of course content, they come to see through one another's perspectives the relevance of the material, while gaining competency as well as insights into how to gain that competency. When students are aware, by way of the knowledge surveys, of the course content and levels of reasoning required, and when these competencies and related learning strategies are practiced, reflected upon, and attained, both learning and metacognitive learning are engaged.

References

Deci, E. L. & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. New York: Plenum Press.

Deci, E. L. & Ryan, R. M. (2002). Handbook of self-determination research. Rochester, NY: The University of Rochester Press.

Fleisher, S. C. (2006). Intrinsic self-regulation in the classroom. Academic Exchange Quarterly, 10(4), 199-204.

Millis, B. J. & Cottell, P. G. (1998). Cooperative learning for higher education faculty. Phoenix, AZ: American Council on Education/Oryx Press.

Nuhfer, E. & Knipp, D. (2003). The knowledge survey: A tool for all reasons. To Improve the Academy, 21, 59-78.

Pintrich, P. R. (2003). Motivation and classroom learning. In W. M. Reynolds & G. E. Miller (Eds.), Handbook of psychology: Educational psychology, Volume 7. Hoboken, NJ: John Wiley & Sons.


Metacognition and Specifications Grading: The Odd Couple?

By Linda B. Nilson, Clemson University

More than anything else, metacognition is awareness of what’s going on in one’s mind. This means, first, that a person sizes up a task before beginning it and figures out what kind of a task it is and what strategies to use. Then she monitors her thinking as she progresses through the task, assessing the soundness of her strategies and her success at the end.

So what does this have to do with specs grading?

In specs grading, all assignments and tests are graded pass/fail, credit/no credit, where "pass" means B-level or better work. A student product passes if it conforms to the specifications (specs) that the instructor described in the assignment or test directions. So either the students follow the directions and "get it right," or the work doesn't count. Partial credit doesn't exist.

For the instructor, the main task is laying out the specs. A short reading compliance assignment may have specs as simple as: “You must answer all the study questions, and each answer must be at least 100 words long.” For more substantial assignments, the instructor can detail the “formula” or template of the assignment – that is, the elements and organization of a good literature review, research proposal, press release, or lab report – or provide a list of the questions that she wants students to answer, as for a reflection on a service-learning or group project experience. Especially for formulaic assignments, which so many undergraduate-level assignments are, models and examples bring the specs to life.

The stakes are higher for students than they are in our traditional grading system. With specs grading, it's all or nothing; there's no sliding by with a careless, eleventh-hour product, as there is when partial credit is a given.

To be successful in a specs-graded course, students have to be aware of their thinking as they complete their assignments and tests. This means that students first have to pay attention to the directions, and the directions are themselves a learning experience when they explicitly lay out the formula for different types of work. Especially when enhanced with models, the specs supply the crucial information that we so often gloss over: exactly what the task involves. Otherwise, how would our students know? With clear specs, they learn what reflection involves, how a literature review is organized, and what a research proposal must contain. Then during the task, students need to monitor and assess their work to determine if it is indeed meeting the specs. "Does the depth of my response match the length requirement?" "Am I answering all the reflection questions?" "Am I following the proper organization?" "Have I written all the sections?"

Another distinguishing characteristic of specs grading is the replacement of the point system with "bundles" of assignments and tests. Students earn their final course grade by successfully completing a bundle, and they select the bundle, and thus the grade, they are willing to work for. To get a D, the bundle involves relatively little, unchallenging work. For higher grades, the bundles require progressively more work, more challenging work, or both. In addition, each bundle is associated with a set of learning outcomes, so a given grade indicates the outcomes a student has achieved.
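The bundle mechanic amounts to a simple lookup: the final grade is the highest grade whose entire bundle of pass/fail work has been completed. Here is a minimal sketch (the bundle contents and assignment names are invented for illustration, not taken from any real course):

```python
# Each bundle maps a course grade to the set of assignments that must ALL
# pass (no partial credit) for a student to earn that grade.
BUNDLES = {
    "D": {"hw1", "hw2"},
    "C": {"hw1", "hw2", "quiz1"},
    "B": {"hw1", "hw2", "quiz1", "essay"},
    "A": {"hw1", "hw2", "quiz1", "essay", "project"},
}
GRADE_ORDER = ["A", "B", "C", "D"]  # check the most demanding bundle first

def course_grade(passed):
    """Return the highest grade whose bundle is fully passed; 'F' otherwise."""
    for grade in GRADE_ORDER:
        if BUNDLES[grade] <= passed:  # set containment: bundle fully satisfied?
            return grade
    return "F"

print(course_grade({"hw1", "hw2", "quiz1"}))  # -> C
```

Note that the all-or-nothing rule falls out of the set-containment test: one failed assignment in a bundle drops the student to the next bundle down, which is exactly the incentive structure described above.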

If students fail to self-monitor and self-assess, they risk receiving no credit for their work and, given that it is part of a bundle, getting a lower grade in the course. And their grade is important for a whole new reason: because they chose the grade they wanted/needed and its accompanying workload. This element of choice and volition increases students’ sense of responsibility for their performance.

With specs grading, students do get limited opportunities to revise an unacceptable piece of work or to get a 24-hour extension on an assignment. These opportunities are represented by virtual tokens that students receive at the beginning of the course. Three is a reasonable number. This way, the instructor doesn't have to screen excuses, requests for exceptions, and the like. She also has the option of giving students chances to earn tokens and rewarding those with the most tokens at the end of the course.

Specs grading solves many of the problems that our traditional grading system has bred while strengthening students’ metacognition and sense of ownership of their grades. Details on using and transitioning to this grading system are in my 2015 book, Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time (Sterling, VA: Stylus).


What Do We Mean by “Metacognitive Instruction”?

by Lauren Scharff (U.S. Air Force Academy*) 

Many of you are probably aware of the collaborative, multi-institutional metacognitive instruction research project that we initiated through the Improve with Metacognition site. This project has been invigorating for me on many levels. First, through the process of developing the proposal, I was mentally energized. Several of us had long, thoughtful conversations about what we meant when we used the term "metacognitive instruction" and how these ideas about instruction "mapped" to the concept of "metacognitive learning." These discussions were extensions of some early blog post explorations, What do we mean when we say "Improve with metacognition"? (Part 1 and Part 2). Second, my involvement in the project led me to (once again) examine my own instruction. Part of this self-examination happened as a natural consequence of the discussions, but it is also happening in an ongoing manner as I participate in the study as an intervention participant. Good stuff!

For this post, I’d like to share a bit more about our wrangling with what we meant by metacognitive instruction as we developed the project, and I invite you to respond and share your thoughts too.

Through our discussions, we ultimately settled on the following description of metacognitive instruction:

Metacognitive instructors are aware of what they are doing and why. Before each lesson, they have explicitly considered student learning goals and multiple strategies for achieving those goals. During the lesson, they actively monitor the effectiveness of those strategies and student progress toward learning goals. Through this pre-lesson strategizing and during-lesson monitoring, awareness, a key component of metacognition, is developed; however, awareness is not sufficient for metacognition. Metacognitive instructors also engage in self-regulation. They have the ability to make "in-the-moment," intentional changes to their instruction during the lesson based on a situational awareness of student engagement and achievement of the learning objectives — this creates a responsive and customized learning experience for the student.

One of the questions we pondered (and we’d love to hear your thoughts on this point), is how these different constructs were related and / or were distinct. We came to the conclusion that there is a difference between reflective teaching, self-regulated teaching, and metacognitive instruction/teaching.

More specifically, a person can reflect and become aware of their actions and their consequences, but at the same time not self-regulate to modify behaviors and change consequences, especially in the moment. A person can also self-regulate / try a new approach / be intentional in one’s choice of actions, but not be tuned in / aware of how it’s going at the moment with respect to the success of the effort. (For example, an instructor might commit to a new pedagogical approach because she learned about it from a colleague. She can implement that new approach despite some personal discomfort due to changing pedagogical strategies, but without conscious and intentional awareness of how well it fits her lesson objectives or how well it’s working in the moment to facilitate her students’ learning.) Metacognition combines the awareness and self-regulation pieces and increases the likelihood of successfully accomplishing the process (teaching, learning, or other process).

Thus, compared to other writings we’ve seen, we are more explicitly proposing that metacognition is the intentional and ongoing interaction between awareness and self-regulation. Others have generally made this claim about metacognitive learning without using the terms as explicitly. For example, “Simply possessing knowledge about one’s cognitive strengths or weaknesses and the nature of the task without actively utilizing this information to oversee learning is not metacognitive.” (Livingston, 1997). But, in other articles on metacognition and on self-regulated learning, it seems like perhaps the metacognitive part is the “thinking or awareness” part and the self-regulation is separate.

What do you think?

——————
Livingston, J. A. (1997). Metacognition: An Overview. Unpublished manuscript, State University of New York at Buffalo. http://gse.buffalo.edu/fas/shuell/cep564/metacog.htm

* Disclaimer: The views expressed in this document are those of the authors and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.


Fostering Metacognition: Right-Answer Focused versus Epistemologically Transgressive

by Craig E. Nelson at Indiana University (Contact: nelson1@indiana.edu)

I want to enrich some of the ideas posted here by Ed Nuhfer (2014 a, b, c and d) and Lauren Scharff (2014). I will start by emphasizing some key points made by Nuhfer (2014 a):

  • Instead of focusing on more powerful ways of thinking, most college instruction has thus far focused on information, concepts, and discipline-specific skills. I will add that even when concepts and skills are addressed, they, too, are often treated as memorizable information by both students and faculty. Often little emphasis is placed on demonstrating real understanding, let alone on application and other higher-level skills.
  • “Adding metacognitive components to our assignments and lessons can provide the explicit guidance that students need. However, authoring these components will take many of us into new territory…” This is tough because such assignments require much more support for students, and many faculty members have had little or no practice in designing such support.
  • The basic framework for understanding higher-level metacognition was developed by Perry in the late 1960s, and his core ideas have since been deeply validated, as well as expanded and enriched, by many other workers (e.g., Journal of Adult Development, 2004; Hoare, 2011).
  • “Enhanced capacity to think develops over spans of several years. Small but important changes produced at the scale of single quarter or semester-long courses are normally imperceptible to students and instructors alike.”

It is helpful (e.g. Nelson, 2012, among many) to see most of college-level thinking as spanning four major levels, a truncated condensation of Perry’s 9 stages as summarized in Table 1 of Nuhfer (2014 a). Each level encompasses a different set of metacognitive skills and challenges. Individual students’ thinking is often a mix or mosaic where they approach some topics on one level and others at the next.

In this post I am going to treat only the first major level, Just tell me what I need to know (Stages 1 & 2 of Table 1 in Nuhfer, 2014 a). In this first level, students view knowledge fundamentally as Truth. Such knowledge is eternal (not just some current best model), discovered (not constructed), and objective (not temporally or socially situated). In contrast, some (but certainly not all) faculty members view what they are teaching as a constructed best current model or models, and as temporally and socially situated, with the subjectivity that implies.

The major cognitive challenges within this first level are usefully seen as moving toward a more complete mastery of right-answer reasoning processes (Nelson, 2012), sometimes referred to as a move from concrete to formal reasoning (although the extent to which Piaget's stages actually apply is debated). A substantial majority of entering students at most post-secondary institutions have not yet mastered formal reasoning. However, many (probably most) faculty members tacitly assume that all reasonable students will quickly understand anything that is asked of them in terms of right-answer reasoning. As a consequence, student achievement is often seriously compromised.

Lawson et al. (2007) showed that a simple test of formal reasoning explained about 32% of the variance in final grades in an introductory biology course and was the strongest such predictor among several options. This is quite remarkable considering that the reasoning test had no biological content and provided no measure of student effort. Although some reasoning tasks could be done by most students, an understanding of experimental designs was demonstrated largely by students who scored as having mastered formal reasoning. Similar differences in achievement have been documented for some other courses (Nelson, 2012).

Nuhfer (2014 b) and Scharff (2014) discuss studies of the associations among various measures of student thinking. From my viewpoint, their lists start too high up the thinking scale. I think that we need to start with the transitions between concrete and formal reasoning. I have provided a partial review of key aspects of this progression and of the teaching moves that have been shown to help students master more formal reasoning, as well as sources for key instruments (Nelson, 2012). I think that such mastery will turn out to be especially helpful, and perhaps essential, to more rapid development of higher-level reasoning skills.

This insight may also help to resolve a contrast between the experience of Scharff and her colleagues (Scharff, 2014) and Nuhfer's perspective (2014 b). Scharff reports: "At my institution we have some evidence that such an approach does make a very measurable difference in aspects of critical thinking as measured by the CAT (Critical Thinking Assessment, a nationally normed, standardized test …)." In his responses, Nuhfer (2014 b) emphasizes that, given how we teach, there is, not surprisingly, very little change in higher-order thinking over the course of an undergraduate degree. ("… the typical high school graduate is at about [Perry] level 3 2/3 and the typical college graduate is at level 4. That is only one-third of a Perry stage gain made across 4-5 years of college.")

It is my impression that the "Critical Thinking Assessment" discussed by Scharff deals primarily with right-answer reasoning. The mastery of the skills underlying right-answer reasoning questions is largely a matter of mastering formal reasoning processes. Indeed, tests of concrete versus formal reasoning usually consist exclusively of questions that have very clear right answers. I think that several of the other thinking assessments that Nuhfer and Scharff discuss also have exclusively or primarily clear right answers. This approach contrasts markedly with the various instruments for assessing intellectual development in the sense of Perry and related authors, none of which focuses on right-answer questions. An easily accessible instrument is given in the appendices of King and Kitchener (1994).

This leads to three potentially helpful suggestions for fostering metacognition.

  • Use one of the instruments for assessing concrete versus formal reasoning as a background test for all of your metacognitive interventions. This will allow you to ask whether students who perform differently on such an assessment also perform differently on your pre- or post-assessment, or even in the course as a whole (as in Lawson et al. 2007).
  • Include interventions in your courses that are designed to help students succeed with formal, right-answer reasoning tasks. In STEM courses, teaching with a “learning cycle” approach that starts with the examination or even the generation of data is one important, generally applicable such approach.
  • Carefully distinguish between the ways that you are helping students master right-answer reasoning and the ways you are trying to foster more complex forms of reasoning. Fostering right-answer reasoning will include problem-focused reasoning, self-monitoring and generalizing right-answer reasoning processes (e.g. “Would using a matrix help me solve this problem?”).

Helping students move to deeper sophistication requires epistemologically transgressive challenges. Those who wish to pursue such approaches seriously should examine first, perhaps, Nuhfer’s (2014d) “Module 12 – Events a Learner Can Expect to Experience” and ask how one could foster each successive step.

Unfortunately, the first key step in helping students move beyond right-answer thinking requires helping them understand the ways in which black-and-white reasoning fails in one's discipline. For this first epistemologically transgressive challenge, understanding that knowledge is irredeemably uncertain, one might want to provide enough scaffolding to allow students to make sense of readings such as: Mathematics: The Loss of Certainty (Kline, 1980); Be Forewarned: Your Knowledge is Decaying (Arbesman, 2012); Why Most Published Research Findings Are False (Ioannidis, 2005); and Lies, Damned Lies, and Medical Science (Freedman, 2010).

As an overview for students of the journey in which everything becomes a matter of better and worse ideas and divergent standards for judging better, I have had some success using a heavily scaffolded approach (detailed study guides, including exam-ready essay questions, and much group work) to helping students understand Reality Isn't What It Used to Be: Theatrical Politics, Ready-to-Wear Religion, Global Myths, Primitive Chic, and Other Wonders of the Postmodern World (Anderson, 1990).

We have used various heavily scaffolded, epistemologically transgressive challenges to produce an average gain of one-third Perry stage over the course of a single semester (Ingram and Nelson, 2009). As Nuhfer (2014b) noted, this is about the gain usually produced by an entire undergraduate degree of normal instruction.

And for the bravest, most heavily motivated faculty, I would suggest In Over Our Heads: The Mental Demands of Modern Life (Kegan, 1994). Kegan attempts to make clear that each of us has our ability to think in more complex ways limited by epistemological assumptions of which we are unaware. This is definitely not a book for undergraduates, nor is it one that is easily embraced by most faculty members.

REFERENCES CITED

  • Hoare, Carol. Editor (2011). The Oxford Handbook of Reciprocal Adult Development and Learning. 2nd Edition. Oxford University Press.
  • Ingram, Ella L. and Craig E. Nelson (2009). Applications of Intellectual Development Theory to Science and Engineering Education. P 1-30 in Gerald F. Ollington (Ed.), Teachers and Teaching: Strategies, Innovations and Problem Solving. Nova Science Publishers.
  • Ioannidis, John (2005). “Why Most Published Research Findings Are False.” PLoS Medicine August; 2(8): e124. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/ [The most downloaded article in the history of PLoS Medicine. Too technical for many first-year students even with heavy scaffolding?]
  • Journal of Adult Development (2004). [Special volume of nine papers on the Perry legacy of cognitive development.] Journal of Adult Development 11(2):59-161.
  • King, Patricia M. and Karen Strohm Kitchener (1994). Developing Reflective Judgment: Understanding and Promoting Intellectual Growth and Critical Thinking in Adolescents and Adults. Jossey-Bass.
  • Kline, Morris (1980). Mathematics: The Loss of Certainty. Oxford University Press. [I used the summary (the Preface) in a variety of courses.]
  • Nelson, Craig E. (2012). “Why Don’t Undergraduates Really ‘Get’ Evolution? What Can Faculty Do?” Chapter 14 (p 311-347) in Karl S. Rosengren, E. Margaret Evans, Sarah K. Brem, and Gale M. Sinatra (Editors.) Evolution Challenges: Integrating Research and Practice in Teaching and Learning about Evolution. Oxford University Press. [Literature review applies broadly, not just to evolution]

Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments

This sometimes humorous article by Justin Kruger and David Dunning describes a series of four experiments showing "that incompetent individuals have more difficulty recognizing their true level of ability than do more competent individuals and that a lack of metacognitive skills may underlie this deficiency." It also includes a nice review of the literature and several examples to support their study.

Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments, Journal of Personality and Social Psychology, 1999, Vol. 77, No. 6, 1121-1134.


Self-assessment and the Affective Quality of Metacognition: Part 2 of 2

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

In Part 1, we noted that knowledge surveys query individuals to self-assess their ability to meet the one hundred to two hundred challenges forthcoming in a course by rating their present ability to meet each one. An example can reveal how the writing of knowledge survey items is similar to the authoring of assessable Student Learning Outcomes (SLOs). A knowledge survey item example is:

I can employ examples to illustrate key differences between the ways of knowing of science and of technology.

In contrast, SLOs are written to be preceded by the phrase "Students will be able to…." Further, knowledge survey items always solicit engaged responses that are observable. Well-written knowledge survey items exhibit two parts: one affective, the other cognitive. The cognitive portion communicates the nature of an observable challenge, and the affective component solicits an expression of felt confidence in the claim "I can…." To be meaningful, an item must make the nature of the challenge explicit to readers. Broad statements such as "I understand science" or "I can think logically" are not sufficiently explicit. Each response to a knowledge survey item offers a metacognitive self-assessment, expressed as an affective feeling of self-assessed competency specific to the cognitive challenge delivered by the item.

Self-Assessed Competency and Direct Measures of Competency

Three competing hypotheses exist regarding the relationship between self-assessed competency and actual performance. One asserts that self-assessed competency is nothing more than random "noise" (https://www.koriosbook.com/read-file/using-student-learning-as-a-measure-of-quality-in-hcm-strategists-pdf-3082500/; http://stephenporter.org/surveys/Self%20reported%20learning%20gains%20ResHE%202013.pdf). Two others allow that self-assessment is measurable. When self-assessments are compared with actual performance, one hypothesis maintains that people typically overrate their abilities and generally are "unskilled and unaware of it" (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2702783/). The other, the "blind insight" hypothesis, indicates the opposite: a positive relationship exists between confidence and judgment accuracy (http://pss.sagepub.com/content/early/2014/11/11/0956797614553944).

Suitable resolution of the three requires data acquired from paired instruments of known reliability and validity. Both instruments must be highly aligned so that they collect data addressing the same learning construct. The Science Literacy Concept Inventory (SLCI), a 25-item test administered to over 17,000 participants, produces competency data with a Cronbach's alpha reliability of .84 and possesses content, construct, criterion, concurrent, and discriminant validity. Participants (N = 1154) who took the SLCI also took a knowledge survey (the KS-SLCI, with a Cronbach's alpha reliability of .93) that produced a self-assessment measure based on the identical 25 SLCI items. The two instruments are reliable and tightly aligned.

If knowledge surveys register random noise, then data furnished by human subjects will differ little from data generated with random numbers. Figure 1 reveals that data simulated from random ratings of 0, 1, and 2 yield zero reliability, whereas real data consistently show reliability measures greater than R = .9. Whatever quality knowledge surveys register, it is not "random noise." Each person's self-assessment score is consistent and characteristic.


Figure 1. Split-halves reliabilities of 25-item KS-SLCI knowledge surveys produced by 1154 random numbers (left) and by 1154 actual respondents (right). 
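The split-halves computation behind Figure 1 is straightforward to simulate: correlate each respondent's summed score on the odd-numbered items with the summed score on the even-numbered items, then apply the Spearman-Brown correction. The sketch below uses simulated data only (the ability and difficulty parameters are invented for illustration, not taken from the SLCI study) to show why independent random ratings yield near-zero reliability while ability-driven ratings do not:

```python
import random

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(responses):
    """Odd/even split-half correlation, Spearman-Brown corrected to full length."""
    odd = [sum(person[0::2]) for person in responses]
    even = [sum(person[1::2]) for person in responses]
    r = pearson(odd, even)
    return 2 * r / (1 + r)

random.seed(1)
N, ITEMS = 1154, 25

# Condition 1: pure noise -- every 0/1/2 rating drawn independently.
noise = [[random.randint(0, 2) for _ in range(ITEMS)] for _ in range(N)]

# Condition 2: structured -- each rating driven by a per-person ability
# and a per-item difficulty (illustrative parameters only), clipped to 0-2.
difficulty = [random.uniform(-1, 1) for _ in range(ITEMS)]
structured = []
for _ in range(N):
    ability = random.gauss(0, 1)
    structured.append(
        [max(0, min(2, round(1 + ability - d + random.gauss(0, 0.7))))
         for d in difficulty])

print(f"noise reliability:      {split_half_reliability(noise):+.3f}")
print(f"structured reliability: {split_half_reliability(structured):+.3f}")
```

With these invented parameters the noise condition hovers near zero while the structured condition is far higher, mirroring the qualitative contrast shown in Figure 1.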

Correlation between the 1154 actual performances on the SLCI and the self-assessed competencies expressed through the KS-SLCI is a highly significant r = 0.62. Of the 1154 participants, 41.1% self-assessed their actual performance within ±10%, 25.1% proved to be under-estimators, and 33.8% were over-estimators.
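The ±10% accuracy band used in these counts reduces to a one-line comparison. A tiny helper makes it concrete (the participant scores below are invented examples, not drawn from the study data):

```python
def classify_self_assessment(self_assessed, actual, tolerance=10.0):
    """Label a participant by comparing self-assessed vs. actual percent scores."""
    diff = self_assessed - actual
    if abs(diff) <= tolerance:
        return "good self-assessor"
    return "under-estimator" if diff < 0 else "over-estimator"

# Hypothetical (self-assessed %, actual %) pairs:
for sa, actual in [(80, 76), (55, 72), (90, 64)]:
    print(sa, actual, "->", classify_self_assessment(sa, actual))
```

Tallying these three labels across all 1154 participants is what yields the 41.1% / 25.1% / 33.8% breakdown reported above.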

Because the 25 SLCI items pose challenges of varying difficulty, we could also test whether participants' self-assessments gleaned from the knowledge survey showed a relationship to the actual difficulty of items, as reflected by how well participants scored on each of them. The collective self-assessments of participants revealed an almost uncanny ability to reflect the actual performance of the group on most of the twenty-five items (Figure 2), thus supporting the "blind insight" hypothesis. Knowledge surveys appear to register meaningful metacognitive measures, and results from reliable, aligned instruments reveal that people do generally understand their degree of competency.


Figure 2. 1154 participants’ average scores on each of 25 SLCI items correspond well (r = 0.76) to their average scores predicted by knowledge survey self-assessments.

Advice in Using Knowledge Surveys to Develop Metacognition

  • In developing competency in metadisciplinary ways of knowing, furnish a bank of numerous explicit knowledge survey items that scaffold novices into considering the criteria that experts consider to distinguish a specific way of knowing from other ways of thinking.
  • Keep students in constant contact with self-assessing by redirecting them repeatedly to specific blocks of knowledge survey items relevant to tests and other evaluations and engaging them in debriefings that compare their self-assessments with performance.
  • Assign students in pairs to do short class presentations that address specific knowledge-survey items while having the class members monitor their evolving feelings of confidence to address the items.
  • Use the final minutes of the class period to enlist students in teams in creating alternative knowledge survey items that address the content covered by the day’s lesson.
  • Teach students Bloom’s Taxonomy of the Cognitive Domain (http://orgs.bloomu.edu/tale/documents/Bloomswheelforactivestudentlearning.pdf) so that they can recognize both the level of challenge and the feelings associated with constructing and addressing different levels of challenge.

Conclusion: Why use knowledge surveys?

  • Their skillful use offers students many practices in metacognitive self-assessment over the entire course.
  • They organize our courses in a way that offers full transparent disclosure.
  • They convey our expectation standards to students before a course begins.
  • They serve as an interactive study guide.
  • They can help instructors enact instructional alignment.
  • They might be the most reliable assessment measure we have.

 

 


Metacognition, Self-Regulation, and Trust

by Dr. Steven Fleisher, CSU Channel Islands, Department of Psychology

Early Foundations

I’ve been thinking lately about my journey through doctoral work, which began with studies in Educational Psychology. I was fortunate to be selected by my dean, Robert Calfee of the Graduate School of Education at the University of California, Riverside, to administer his national and state grants in standards, assessment, and science and technology education. It was there that I began researching self-regulated learning.

Self-Regulated Learning

Just before starting that work, I had completed a master’s degree in Marriage and Family Counseling, so I was thrilled to discover the relevance of the self-regulation literature. For example, I found it interesting that self-regulation studies began back in the 1960s with examinations of the development of self-control in children. The framework that evolved for self-regulation involved the interaction of personal, behavioral, and environmental factors. Later research in self-regulation focused on motivation, health, mental health, physical skills, career development, decision-making, and, most notably for our purposes, academic performance and success (Zimmerman, 1990); this line of work became known as self-regulated learning.

Since the mid-1980s, self-regulated learning researchers have studied the question: How do students progress toward mastery of their own learning? Pintrich (2000) noted that self-regulated learning involved “an active, constructive process whereby learners set goals for their learning and then attempt to monitor, regulate, and control their cognition, motivation, and behavior, guided and constrained by their goals and the contextual features in the environment” (p. 453). Zimmerman (2001) then established that, “Students are self-regulated to the degree that they are metacognitively, motivationally, and behaviorally active participants in their own learning process” (p. 5). Thus, self-regulated learning theorists believe that learning requires students to become proactive and self-engaged in their learning, and that learning does not happen to them, but by them (see also Leamnson, 1999).

Next Steps

And then everything changed for me. My dean invited Dr. Bruce Alberts, then President of the National Academy of Sciences, to come to our campus and lecture on science and technology education. Naturally, as Calfee’s Graduate Student Researcher, I asked “Bruce” what he recommended for bringing my research in self-regulated learning to the forefront. His recommendation was to study the then-understudied role and importance of the teacher-student relationship. Though it required changing doctoral programs to accommodate this recommendation, I did it, adding a Doctorate in Clinical Psychology to several years of coursework in Educational Psychology.

Teacher-Student Relationships 

Well, enough about me. It turns out that effective teacher-student relationships provide the foundation from which trust and autonomy develop (I am skipping a lengthy discussion of the psychological principles involved). Suffice it to say, where clear structures are in place (i.e., standards) as well as support, social connections, and the space for trust to develop, students have increased opportunities for exploring how their studies are personally meaningful and supportive of their autonomy, thereby taking charge of their learning.

Additionally, when we examine a continuum of extrinsic to intrinsic motivation, we find the same principles involved as with a scale showing minimum to maximum autonomy, bringing us back to self-regulated learning. Pintrich (2000) included the role of motivation in his foundations for self-regulated learning. Specifically, he reported that a goal orientation toward performance arises when students are motivated extrinsically (i.e., focused on ability as compared to others); however, a goal orientation toward mastery occurs when students are motivated more intrinsically (i.e., focused on effort and learning that is meaningful to them).

The above concepts can help us define our roles as teachers. For instance, we are doing our jobs well when we choose and enact instructional strategies that not only communicate clearly our structures and standards but also provide needed instructional support. I know that when I use knowledge surveys, for example, in building a course and for disclosing to my students the direction and depth of our academic journey together, and support them in taking meaningful ownership of the material, I’m helping their development of metacognitive skill and autonomous self-regulated learning. We teachers can help improve our students’ experience of learning. For them, learning in order to get the grades pales in comparison to learning a subject that engages their curiosity, along with investigative and social skills that will last a lifetime.

References

Leamnson, R. (1999). Thinking about teaching and learning: Developing habits of learning with first year college and university students. Sterling, VA: Stylus.

Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation. San Diego, CA: Academic.

Zimmerman, B. J. (1990). Self-regulating academic learning and achievement: The emergence of a social cognitive perspective. Educational Psychology Review, 2(2), 173-201.

Zimmerman, B. J. (2001). Theories of self-regulated learning and academic achievement: An overview and analysis. In B. J. Zimmerman & D. H. Schunk (Eds.), Self-regulated learning and academic achievement: Theoretical perspectives (2nd ed.). New York: Lawrence Erlbaum.


Self-assessment and the Affective Quality of Metacognition: Part 1 of 2

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

In The Feeling of What Happens: Body and Emotion in the Making of Consciousness (1999, New York: Harcourt), Antonio Damasio distinguished two manifestations of the affective domain: emotions (the outwardly observable expression of affect) and feelings (the internal, private experience of one’s own affect). Enacting self-assessment constitutes an internal, private, and introspective metacognitive practice.

Benjamin Bloom recognized the importance of the affective domain’s involvement in successful cognitive learning, but for a time psychologists dismissed the importance of both affect and metacognition in learning (see Damasio, 1999; Dunlosky & Metcalfe, 2009, Metacognition, Los Angeles: Sage). To avoid repeating these mistakes, we should recognize that attempts to develop students’ metacognitive proficiency without acknowledging metacognition’s affective qualities are likely to be minimally effective.

In academic self-assessment, an individual must look at a cognitive challenge and accurately decide her/his capability to meet that challenge with present knowledge and resources. Such decisions do not spring only from thinking cognitively about one’s own mental processes. Affirming that “I can” or “I cannot” meet “X” (the cognitive challenge) with current knowledge and resources draws from affective feelings contributed by conscious and unconscious awareness of what is likely to be an accurate decision.

“Blind insight” (http://pss.sagepub.com/content/early/2014/11/11/0956797614553944) is a new term in the literature of metacognition. It confirms an unconscious awareness that manifests as a feeling that supports sensing the correctness of a decision. “Blind insight” and “metacognitive self-assessment” seem to overlap with one another and with Damasio’s “feelings.”

Research in medical schools confirmed that students’ self-assessment skills remained consistent throughout medical education (http://files.eric.ed.gov/fulltext/ED410296.pdf). Two hypotheses compete to explain this finding. One is that self-assessment skill establishes early in life and cannot be improved in college. The other is that self-assessment skill remains fixed in post-secondary education only because it is so rarely taught or developed. The first hypothesis seems contradicted by the evidence supporting brain plasticity, by constructivist theories of learning and motivation, metacognition theory, and self-efficacy theory (http://files.eric.ed.gov/fulltext/EJ815370.pdf), and by experiments confirming that self-assessment is a learnable skill that improves with training (http://psych.colorado.edu/~vanboven/teaching/p7536_heurbias/p7536_readings/kruger_dunning.pdf).

Nursing is perhaps the discipline that has most recognized the value of developing intuitive feelings informed by knowledge and experience as part of educating for professional practice.

“At the expert level, the performer no longer relies on an analytical principle (rule, guideline, maxim) to connect her/his understanding of the situation to an appropriate action. The expert nurse, with her/his enormous background of experience, has an intuitive grasp of the situation and zeros in on the accurate region of the problem without wasteful consideration of a large range of unfruitful possible problem situations. It is very frustrating to try to capture verbal descriptions of expert performance because the expert operates from a deep understanding of the situation, much like the chess master who, when asked why he made a particularly masterful move, will just say, “Because it felt right. It looked good.” (Patricia Benner, 1982, “From novice to expert.” American Journal of Nursing, v82 n3 pp 402-407)

Teaching metacognitive self-assessment should aim, in part, at improving students’ ability to recognize clearly the feeling that something “feels right” when judging whether they can meet a challenge with present knowledge and resources. Developing such capacity requires practice in committing errors and learning from them through metacognitive reflection. In such practice, the value of knowledge surveys (see http://profcamp.tripod.com/KS.pdf and http://profcamp.tripod.com/Knipp_Knowledge_Survey.pdf) becomes apparent.

Knowledge surveys consist of roughly one hundred to two hundred questions/items relevant to course learning objectives. (Tutorials for constructing knowledge surveys and downloadable examples are available at http://elixr.merlot.org/assessment-evaluation/knowledge-surveys/knowledge-surveys2.) These items ask individuals to self-assess by rating their present ability to meet each challenge on a three-point multiple-choice scale:

A. I can fully address this item now for graded test purposes.
B. I have partial knowledge that permits me to address at least 50% of this item.
C. I am not yet able to address this item adequately for graded test purposes.

and thereafter to monitor their mastery as the course unfolds.
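Turning such responses into a numeric self-assessment score is straightforward. The weights below are an assumption for illustration, since the post does not state numeric values; a common convention grants full, half, and no credit for A, B, and C respectively:

```python
# Assumed weights: A = full, B = half, C = no self-assessed competency.
WEIGHTS = {"A": 1.0, "B": 0.5, "C": 0.0}

def knowledge_survey_score(responses):
    """Average self-assessed competency, as a percentage, across all items."""
    return 100.0 * sum(WEIGHTS[r] for r in responses) / len(responses)

print(knowledge_survey_score(["A", "B", "C", "A"]))  # 62.5
```

Averaging such scores over a block of items gives the self-assessed competency measure that can then be compared against actual test performance.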

In Part 2, we will examine why knowledge surveys are such powerful instruments for supporting students’ learning and metacognitive development, describe ways to employ them properly to induce measurable gains, and share some surprising results obtained from pairing knowledge surveys with a standardized assessment measure.


Thinking about How Faculty Learn about Learning

By Cynthia Desrochers, California State University Northridge

Lately, two contradictory adages have kept me up nights:  “K.I.S.S. – Keep It Simple, Stupid” (U.S. Navy) and “For every complex problem there is an answer that is clear, simple, and wrong” (H.L. Mencken).  Which is it?  Experts have a wealth of well-organized, conditionalized, and easily retrievable knowledge in their fields (Bransford, et al., 2000).  This may result in experts skipping over steps when they teach a skill that has become automatic to them.  But where does this practice leave our novice learners, who need to be taught each small step—almost in slow motion—to begin to grasp a new skill?

I have just completed co-facilitating five of ten scheduled faculty learning community (FLC) seminars in a yearlong Five GEARS for Activating Learning FLC.  As a result of this experience, my takeaway note to self now reads in BOLD caps:  (1) keep it simple in the early stages of learning and (2) model the entire process and share my thinking out loud—no secrets hidden behind the curtains!

The Backstory

The Five Gears for Activating Learning project at California State University, Northridge, began in fall 2012. It was my idea, and I asked seven university-wide faculty leaders to join me in a grassroots effort. Our goals were to improve student learning from inside the classroom (vs. policy modifications), promote faculty use of the current research on learning, provide a lens for judging the efficacy of various teaching strategies (e.g., the flipped classroom), and develop a common vocabulary for use campuswide (e.g., personnel communications).  Support for this project came from the University Provost and the dean of the Michael D. Eisner College of Education in the form of reassigned time for me and 3-unit buyouts for each of the eight FLC members, spread over the entire academic year, 2014-15.

We read as a focus book How Learning Works: Seven Research-Based Principles for Smart Teaching (Ambrose, et al., 2010). We condensed Ambrose’s seven principles to five GEARS, one of which is Developing Mastery, which we defined as deep learning, reflection, and self-direction—critical elements of metacognition and the focus of this blog site.

On Keeping It Simple

I have been in education for forty-five years, yet I’m having many light-bulb moments with this FLC group – I’m learning something new, or reorganizing prior knowledge, or having increased clarity.  Hence, I’ve given a lot of thought to the conflict between keeping it simple and omitting some important elements versus sharing more complex definitions and relationships and overwhelming our FLC members. My rationale for choosing simple: If I am still learning about how learning works, how can I expect new faculty—who teach Political Science, Business Law, Research Applications, and African Americans in Film, all without benefit of a teaching credential—to process some eighty years of research on learning in two semesters?

In opting for the K.I.S.S. approach, we have developed a number of activities and tools that scaffold learning to use the five GEARS in our teaching; moreover, each activity or tool models explicitly with faculty some practices we are encouraging them to use with their students.  This includes (1) reflective writing in the form of learning logs and diaries, (2) an appraisal instrument to self-assess their revised (using the GEARS) spring 2015 course design, and (3) a class-session plan to scaffold their use of the GEARS.  [See the detailed descriptions given in the handout resource posted on this site.] I hope to have some results data regarding their use in my spring blog.

Looking to next semester, our spring FLC projects will likely center around not only teaching the redesigned five GEARS course but also disseminating the five GEARS campuswide.  As a direct result of the Daily Diary that FLC members kept for three weeks on others’ use and misuse of the five GEARS, they want to share our work.  [See handout for further description of the Daily Diaries.] Dissemination possibilities include campus student tour guides, colleagues who teach a common course, Freshman Seminar instructors, librarians, and the Career Center personnel.  If another adage is true, “Tell me and I forget, teach me and I may remember, involve me and I learn” (Benjamin Franklin), our FLC faculty will likely move of their own accord along the continuum from a simple to complex understanding of the five GEARS in their efforts to teach the five GEARS to others on campus.

A Word about GEARS

Why is this blog not focusing solely on the metacognition gear, which we call Developing Mastery? The simple answer is that learning is so intertwined that all the GEARS likely support metacognition in some way.  However, any one of the activities or tools we have employed can be modified to limit the scope to your definition of metacognition.  Our postcard below shows all five GEARS:

5_GEARS_postcard


Transparency and Metacognition

by James Rhem (Executive Editor, National Teaching and Learning Forum)

Some readers may know that The National Teaching and Learning FORUM has undertaken a series of residencies on campuses across the country looking at teaching and learning at a variety of institutions, and all the efforts to support and improve it. Currently I’m at the University of Nevada-Las Vegas, where Mary-Ann Winkelmes is coordinator of instructional development and research.

One of the things Mary-Ann brought to Las Vegas from her previous work at the University of Chicago and the University of Illinois is something called the Transparency Project, a project carried out in collaboration with AAC&U. To my mind this approach to increasing students’ connections with and understanding of the assignments they’re given in the courses they take seems to have a lot to do with metacognition. Perhaps it’s a homely version, I’m not sure, but I think it’s something those interested in improving student performance through metacognitive awareness ought to know about. So, that’s my post for the moment. Take a look at the impressive body of research the project has already amassed, and the equally impressive results in improved student performance, and see if you don’t agree there’s a relation to metacognitive approaches, something to take note of.

Here’s a link to a page of information on the project with even more links to the research:

http://www.unlv.edu/provost/teachingandlearning


Teaching Perspectives Inventory (TPI): The 5 Perspectives

There are a lot of free surveys/inventories “out there” for all sorts of things, most often related to some aspect of personality. Used in a reflective manner, they can help you better understand yourself. The TPI (also free) offers a chance to reflect on your teaching perspectives (one aspect of metacognitive instruction). The TPI suggests five perspectives: Transmission, Apprenticeship, Developmental, Nurturing, and Social Reform.

http://www.teachingperspectives.com/tpi/


The Teaching Learning Group at CSUN

Two years ago, eight faculty at California State University, Northridge, began studying how people learn as a grassroots effort to increase student success by focusing on what instructors do in the classroom. Our website shares our efforts, Five Gears for Activating Learning, as well as supporting resources and projects developed to date (e.g., documents, videos, and a yearlong Faculty Learning Community in progress). Although all five gears interact when people learn and develop expertise, our fifth gear, the Developing Mastery gear, focuses on assisting students in developing their metacognitive skills.

http://www.csun.edu/cielo/teaching-learning-group.html


The Six Hour D… And How to Avoid It

This great essay by Russ Dewey (1997) evolved from a handout he used to give his students. He shares some common examples of poor study strategies and explains why they are unlikely to lead to deep learning (even if they are used for 6 hours…). He then shares a simple metacognitive self-testing strategy that could be tailored for courses across the disciplines.

http://www.psywww.com/discuss/chap00/6hourd.htm


Despite Good Intentions, More is Not Always Better

by Lauren Scharff, U.S. Air Force Academy*

A recent post to the PSYCHTEACH listserv got me thinking about my own evolution as a teacher trying my best to help the almost inevitable small cluster of students who struggled in my courses, often despite claiming to “have studied for hours.” The post asked “Have any of you developed a handout on study tips/skills that you give to your students after the first exam?” A wide variety of responses were submitted, all of which reflected genuinely good intentions by the teachers.

However, based on my ongoing exploration of metacognition and human learning, I believe that, despite the good intentions, some of the recommendations will not consistently lead to the desired results. Importantly, these recommendations actually seem quite intuitive and reasonable on the surface, which leads to their appeal and continued use. Most of those that fall into this less ideal category do so because they imply that “More is Better.”

For example, one respondent shared, “I did correlations of their test scores with their attendance so far, the number of online quizzes they have taken so far, and the combined number of these two things. [All correlations were positive ranging from 0.35 to 0.57.] So I get to show them how their behaviors really are related to their scores…”

This approach suggests several things that all seem intuitively positive: online quizzes are a good way to study and attending class will help them learn. I love the empowerment of students by pointing out how their choice of behaviors can impact their learning! However, the message that more quizzes and simple attendance will lead to better grades does not capture the true complexity of learning.

Another respondent shared a pre-post quiz reflection assignment in which some of the questions asked about how much of the required reading was completed and how many hours were put into studying. Other questions asked about the use of chapter outcomes when reading and studying, the student’s expected grade on the quiz, and an open-ended question requesting a summary of study approaches.

This pre-post quiz approach seems positive for many reasons. Students are forced to think about and acknowledge levels and types of effort that they put into studying for the quizzes. There is a clear suggestion that using the learning outcomes to direct their studying would be a positive strategy. They are asked to predict their grades, which might help them link their studying efforts with predicted grades. These types of activities are actually good first steps at helping students become more metacognitive (aware and thoughtful) about their studying. Yea!

However, a theme running through the questions seems to be, again, “more is better.” More hours. More reading. The hidden danger is that students may not know how to effectively use the learning outcomes, how to read, how to effectively engage during class, how to best take advantage of practice quizzes to promote self-monitoring of learning, or what to do during those many hours of studying.

Thus, the recommended study strategies may work well for some students, but not all, due to differences in how students implement the strategies. Therefore, even a moderately high correlation between taking practice quizzes and exam performance might mask the fact that there are subgroups for which the results are less positive.

For example, Kontur and Terry (2013) found the following in a core Physics course, “On average, completing many homework problems correlated to better exam scores only for students with high physics aptitude. Low aptitude physics students had a negative correlation between exam performance and completing homework; the more homework problems they did, the worse their performance was on exams.”
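The masking effect that aggregate correlations can hide is easy to demonstrate with toy numbers. The data below are hypothetical, not Kontur and Terry's: pooling two aptitude groups yields a strongly positive homework-exam correlation even though the within-group correlation for the low-aptitude students is negative:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical homework counts and exam scores for two aptitude groups.
high_hw, high_exam = [8, 9, 10, 11], [80, 85, 90, 95]  # rises together
low_hw, low_exam = [2, 3, 4, 5], [60, 55, 50, 45]      # more homework, worse exams

pooled = pearson_r(high_hw + low_hw, high_exam + low_exam)
print(round(pearson_r(low_hw, low_exam), 2), round(pooled, 2))  # -1.0 0.89
```

A single pooled correlation shared with a class would seem to justify "do more homework" advice for everyone, while the subgroup that most needs help shows the opposite pattern.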

I’m sure you’re all familiar with students who seem to go through “all the right motions” but who still struggle, become frustrated, and sometimes give up or develop self-doubt about their abilities. Telling such students to do more of what they’re already doing, when what they’re doing is not effective, may actually be harmful.

This is where many teachers feel uncomfortable because they are clearly working outside their disciplines. Teaching students how to read or how to effectively take notes in class, or how to self-monitor their own learning and adjust study strategies to different types of learning expectations is not their area of expertise. Most teachers somehow figured out how to do these things well on their own, or they wouldn’t be teachers now. However, they may never have thought about the underlying processes of what they do when they read or study that allowed them to be successful. They also feel pressures to cover the disciplinary content and focus on the actual course material rather than learning skills. Unfortunately, covering material does little good if the students forget most of the content anyway. Teaching them skills (e.g., metacognitive study habits) offers the prospect of retaining more of the disciplinary content that is covered.

The good news is that there are more and more resources available for both teachers and students (check out the resources on this website). A couple great resources specifically mentioned by the listserv respondents are the How to Get the Most out of Studying videos by Stephen Chew at Samford University and the short reading (great to share with both faculty and students) called The Six Hour D… and How to Avoid it by Dewey (1997). Both of these highlighted resources focus on metacognitive learning strategies.

This reflection on the different recommendations is not meant to belittle the well-intentioned teachers. However, by openly discussing these common suggestions, and linking to what we know of metacognition, I believe we can increase their positive impact. Share your thoughts, favorite study suggestions and metacognitive activities by using the comments link below, or submitting them under the Teaching Strategies tab on this website.

References

Dewey, R. (1997, February 12) The “6 hour D” and how to avoid it. [Online]. Available: http://www.psywww.com/discuss/chap00/6hourd.htm.

Kontur, F., & Terry, N. (2013). The benefits of completing homework for students with different aptitudes in an introductory physics course. arXiv:1305.2213

 

* Disclaimer: The views expressed in this document are those of the authors and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.


Negotiating Chaos: Metacognition in the First-Year Writing Classroom

by Amy Ratto Parks, Composition Coordinator/Interim Director of Composition, University of Montana

“Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it.” John Hughes, Ferris Bueller’s Day Off

Although the movie Ferris Bueller’s Day Off (Hughes, 1986) debuted long before our current first-year college students were born, the film’s sentiment remains relevant to them. If we combined Ferris’ sense of exuberant freedom with Cameron’s grave awareness of personal responsibility, and added Sloane’s blasé ennui, we might see an accurate portrait of a typical first-year student’s internal landscape. Many of our students are thrilled to have broken out of the confines of high school but are worried about not being able to succeed in college, so they arrive in our classrooms slumped over their phones or behind computer screens, trying to seem coolly disengaged.

The life of the traditional first-year student is rife with negotiations against chaos. Even if we remove the non-academic adjustments of living away from home, their lives are full of confusion. All students, even the most successful, will likely find their learning identities challenged: what if all of their previous academic problem-solving strategies are inadequate for the new set of college-level tasks?

In the first-year writing classroom, we see vivid examples of this adjustment period play out every year. Metacognitive activities like critical reflective writing help students orient themselves because they require students to pause, assess the task at hand, and assess their strategies for meeting the demands of the task. Writing studies researchers know that reflection benefits writers (Yancey, 1998), and portfolio assessment, common in first-year programs across the country, emphasizes reflection as a major component of the course (Reynolds & Rice, 2006). In addition, outcomes written by influential educational bodies such as the National Council of Teachers of English (ncte.org), the Common Core State Standards Initiative (corestandards.org), and the Council of Writing Program Administrators (wpacouncil.org) emphasize metacognitive skills and demonstrate a shared belief in their importance.

But students aren’t necessarily on board. It is the rare student who has engaged in critical reflection in the academic setting. Instead, many aren’t sure how to handle it. Is it busy work from the teacher? Are they supposed to reveal their deep, inner feelings or is it a cursory overview? Is it going to be graded? What if they give a “wrong” reflection? And, according to one group of students I had, “isn’t this, like, for junior high kids?” In this last question we again see the developing learner identity. The students were essentially wondering, “does this reflective work make us little kids or grown ups?”

If we want new college students to engage in the kind of reflective work that will help them develop transferable metacognitive skills, we need to be thoughtful about how we integrate it into the coursework. Intentionality is important because there are a number of ways teachers might accidentally perpetuate these student mindsets. In order to get the most from reflective activities in class, keep the following ideas in mind:

  1. Talk openly with students about metacognition. If we want students to become aware of their learning, then the first thing to do is draw their attention to it. We should explain to students why they might care about metacognitive skills, as well as the benefits of investing themselves in the work. If we explain that reflection is one kind of metacognitive activity that helps us retrieve, sort, and choose problem-solving strategies, then reflection ceases to be “junior high” work and instead becomes a scholarly, collegiate behavior.
  2. Design very specific reflective prompts. When in doubt, err on the side of more structure. Questions like “what did you think about the writing assignment?” seem like they would open the door to many responses; in practice, they allow students to answer without critically examining their writing or research decisions. Instead, design prompts that require students to critically consider their work. For example, “Describe one writing choice you made in this essay. What was the impact of your decision?”
  3. Integrate reflection throughout the semester. Ask students to reflect mid-way through the processes of drafting, research, and writing. If we wait until they finish an essay, they learn that reflection is simply a concluding activity. If they reflect mid-process, they become aware of their ability to assess and revise their strategies more than once. Also, reflection is a metacognitive habit of mind (Tarricone, 2011; Yancey, 1998), and habits only come to us through repeated activity.

These three strategies are a very basic beginning to integrating metacognitive activities into a curriculum. Not only do they help students evaluate the effectiveness of their attempts at problem solving, but they can also direct students’ attention toward the strategies they’ve already brought to the class, thereby creating a sense of control over their learning. In the first-year writing classroom, where students are distracted and worried about life circumstances and learner identity, the sense of control gained from metacognitive work is especially important.

 

References

Chinich, M. (Producer), & Hughes, J. (Director). (1986). Ferris Bueller’s day off [Motion picture]. United States: Paramount Pictures.

Reynolds, N., & Rice, R. (2006). Portfolio teaching: A guide to instructors. Boston, MA: Bedford/St. Martin’s.

Tarricone, P. (2011). The taxonomy of metacognition. New York: Psychology Press.

Yancey, K.B. (1998). Reflection in the writing classroom. Logan: Utah State University Press.

National Council of Teachers of English. (2013). First-year writing: What good does it do? Retrieved from http://www.ncte.org/library/nctefiles/resources/journals/cc/0232-nov2013/cc0232policy.pdf

Council of Writing Program Administrators. (2014). Framework for success in postsecondary writing. Retrieved from http://wpacouncil.org/framework

Common Core State Standards Initiative. (2014). English language arts standards. Retrieved from http://www.corestandards.org/ELA-Literacy/introduction/key-design-consideration/


Testing Improves Knowledge Monitoring

by Chris Was, Kent State University

Randy Isaacson and I have spent a great deal of time and effort creating a curriculum for an educational psychology course designed to encourage metacognition in preservice teachers. Randy spent a number of years developing this curriculum before I joined him in an effort to improve it and to use it to test hypotheses about improving metacognition through training for undergraduate preservice teachers. A detailed description of the curriculum can be found in the National Teaching and Learning Forum (Isaacson & Was, 2010), but I want to take this opportunity to give a simple overview of how we structured our courses and some of the results produced by using this curriculum to train undergraduates to be metacognitive in their studies.

With our combined 40+ years of teaching, we are quite clear that most undergraduates do not come equipped with the self-regulation skills one would hope students would acquire before entering the university. Even more disappointing, students lack the metacognition required to successfully regulate their own learning behaviors. Creating an environment that not only encourages but also requires students to be metacognitive is not a simple task. However, it can be accomplished.

Variable Weight-Variable Difficulty Tests

The most important component of the course structure is creating an environment with extensive and immediate feedback. The feedback should be designed to help students identify specific deficiencies in their learning strategies and metacognition. We developed an extensive array of learning resources that guide students to focus on knowing what they know, and when they know it. The first resource we developed is a test format that helps students reflect on and monitor their knowledge of the content and items on the test. In our courses, students judge the accuracy of and their confidence in their response to each item, and they predict their score on each exam.

Throughout the semester, students complete a weekly exam (the courses meet Monday, Wednesday, and Friday, with exams occurring on Friday). Each exam is based on a variable weight, variable difficulty format and contains a total of 35 questions: 15 Level I questions at the knowledge level, 15 Level II questions at the evaluation level, and 5 Level III questions at the application/synthesis level. Scoring is based on a system that awards more points for correct responses in relation to both the difficulty of the questions and students’ confidence in their responses. Students choose the 10 Level I questions they are most confident about and place those answers on the left side of the answer sheet; these are worth 2 points each. Likewise, 10 Level II questions worth 5 points each and 3 Level III questions worth 6 points each are placed on the left. Students place the remaining questions, those they are least confident about, on the right side of the answer sheet, where each is worth only 1 point (5 of the 15 Level I questions, 5 of the 15 Level II questions, and 2 of the 5 Level III questions). The scoring totals a possible 100 points for each exam. Correlations between total score and absolute score (number correct out of 35) typically range from r = .87 to r = .94. Although we provide students with many other resources to encourage metacognition, we feel that the left-right test format is the most powerful influence on student knowledge monitoring through the semester.
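As a quick check of the point arithmetic in the format just described, the sketch below (the names are mine, not from the course materials) computes an exam score from the number of correct answers placed in each slot:

```python
# Points per correct answer for each (side, level) slot in the VW-VD format
POINTS = {
    ("left", "I"): 2, ("left", "II"): 5, ("left", "III"): 6,
    ("right", "I"): 1, ("right", "II"): 1, ("right", "III"): 1,
}

# How many questions a student places in each slot
SLOTS = {
    ("left", "I"): 10, ("left", "II"): 10, ("left", "III"): 3,
    ("right", "I"): 5, ("right", "II"): 5, ("right", "III"): 2,
}

def exam_score(correct_counts):
    """Total score, given the number of correct answers in each slot."""
    return sum(POINTS[slot] * n for slot, n in correct_counts.items())

# A perfect paper earns 20 + 50 + 18 points on the left and 12 on the right:
print(exam_score(SLOTS))  # 100
```

The asymmetry is the point of the design: a student who is a poor judge of their own knowledge loses points by placing items on the wrong side, so accurate knowledge monitoring is rewarded directly.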

The Results

Along with our collaborators, we have conducted a number of studies using the variable weight-variable difficulty (VW-VD) tests as a treatment. Our research questions focus on whether the test format increases knowledge monitoring accuracy, on individual differences in knowledge monitoring and metacognition, and on psychometric issues in measuring knowledge monitoring. A brief description of some of our results follows.

Hartwig, Was, Isaacson, and Dunlosky (2011) found that a simple knowledge monitoring assessment predicted both total test scores and the number of items correct on the VW-VD tests.

Isaacson and Was (2010) found that after a semester of VW-VD tests, students’ accuracy on an unrelated measure of knowledge monitoring increased.
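These results all turn on knowledge-monitoring accuracy, that is, how well students’ confidence judgments track their actual performance. As an illustrative sketch (not necessarily the measure used in the studies above), the Goodman-Kruskal gamma correlation is one common way to quantify monitoring accuracy from item-level confidence ratings and correctness:

```python
def goodman_kruskal_gamma(confidence, correct):
    """Relative monitoring accuracy: +1 means higher confidence always
    accompanies correct answers, -1 means the reverse, 0 means no relation."""
    concordant = discordant = 0
    for i in range(len(confidence)):
        for j in range(i + 1, len(confidence)):
            # For each pair of items, check whether the confidence ordering
            # agrees with the correctness ordering (ties are skipped).
            d = (confidence[i] - confidence[j]) * (correct[i] - correct[j])
            if d > 0:
                concordant += 1
            elif d < 0:
                discordant += 1
    if concordant + discordant == 0:
        return 0.0
    return (concordant - discordant) / (concordant + discordant)

# A student whose confidence perfectly tracks correctness:
print(goodman_kruskal_gamma([4, 3, 2, 1], [1, 1, 0, 0]))  # 1.0
```

A student could compute this for themselves after any left-right exam: confidence is already encoded in where each answer was placed, and gamma summarizes how well those placements matched the outcomes.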


Promoting Student Metacognition

by Kimberly D. Tanner

This article starts out with two student scenarios that will resonate with many faculty (one student with poor and one with good learning skills), and which help make the case for the need to incorporate metacognitive development in college courses. Kimberly then shares some activities and a very comprehensive list of questions that instructors might ask students to answer regarding the planning, monitoring, and evaluating of their own learning. While Kimberly makes a point of teaching metacognition within the disciplines, these questions are all generic enough to be used in any discipline. Of note in this article, there is a section that discusses metacognitive instruction and includes a series of questions that faculty should ask of themselves as they plan, monitor, and evaluate their teaching.

CBE—Life Sciences Education; Vol. 11, 113–120, Summer 2012

https://www.lifescied.org/doi/full/10.1187/cbe.12-03-0033


Teaching Metacognition to Improve Student Learning

By: Maryellen Weimer, PhD; published in Teaching Professor Blog October 31, 2012

This blog post offers suggestions for manageable approaches to getting students started in metacognitive types of reflection. Her suggestions are modifications of some shared by Kimberly Tanner in her article on “Promoting Student Metacognition”. Maryellen also astutely points out that, “When you start asking questions about learning, I wouldn’t expect students to greet the activity with lots of enthusiasm. Many of them believe learning is a function of natural ability and not something they can do much about. Others just haven’t paid attention to how they learn.”

http://www.facultyfocus.com/articles/teaching-professor-blog/teaching-metacognition-to-improve-student-learning/


Promoting general metacognitive awareness

This informative article by Gregory Schraw begins with a distinction between knowledge of cognition and regulation of cognition (lots of great references included), continues with a discussion of generalization and a summary of additional research examining the relationship between metacognition and expertise (cognitive abilities), and finishes with several strategies that instructors can use to develop both metacognitive awareness and regulation.

http://wiki.biologyscholars.org/@api/deki/files/87/=schraw1998-meta.pdf 


Metacognitive Strategies: Are They Trainable?

by Antonio Gutierrez, Georgia Southern University

Effective learners use metacognitive knowledge and strategies to self-regulate their learning (Bol & Hacker, 2012; Bjork, Dunlosky & Kornell, 2013; Efklides, 2011; McCormick, 2003; Winne, 2004; Zeidner, Boekaerts & Pintrich, 2000; Zohar & David, 2009). Students are effective self-regulators to the extent that they can accurately determine what they know and use relevant knowledge and skills to perform a task and monitor their success. Unfortunately, many students experience difficulty learning because they lack relevant knowledge and skills, do not know which strategies to use to enhance performance, and find it difficult to sequence a variety of relevant strategies in a manner that enables them to self-regulate their learning (Bol & Hacker, 2012; Grimes, 2002).

Strategy training is a powerful educational tool that has been shown to overcome some of these challenges in academic domains such as elementary and middle school mathematics (Carr, Taasoobshirazi, Stroud & Royer, 2011; Montague, Krawec, Enders & Dietz, 2014), as well as non-academic skills such as driving and anxiety management (Soliman & Mathna, 2009). Additional benefits of strategy training are that using a flexible repertoire of strategies in a systematic manner not only produces learning gains, but also empowers students psychologically by increasing their self-efficacy (Dunlosky & Metcalfe, 2009). Further, a common assumption is that limited instructional time with younger children produces life-long benefits once strategies are automatized (McCormick, 2003; Palincsar, 1991; Hattie et al., 1996).

In addition to beginning strategy instruction as early as possible, it should be embedded within all content areas, modeled by teachers and self-regulated students, practiced until automatized, and discussed explicitly in the classroom to provide the greatest benefit to students. Pressley and Wharton-McDonald (1997) recommend that strategy instruction be included before, during, and after the main learning episode. Strategies that occur before learning include setting goals, making predictions, determining how new information relates to prior knowledge, and understanding how the new information will be used. Strategies needed during learning include identifying important information, confirming predictions, monitoring, analyzing, and interpreting. Strategies typically used after learning include reviewing, organizing, and reflecting. Good strategy users should possess some degree of competence in each of these areas to be truly self-regulated.

Additional strategies have been studied by Schraw and his colleagues (Gutierrez & Schraw, in press; Nietfeld & Schraw, 2002). They demonstrated that a repertoire of seven strategies is effective at improving undergraduate students’ learning outcomes and comprehension monitoring, a main component of the regulatory dimension of metacognition. Table 1 contains the seven strategies explicitly taught to students. Moreover, these strategies can function not only in contrived laboratory settings but also in ecologically valid settings, such as classrooms.

Table 1. Summary of Metacognitive Strategies and their Relation to Comprehension Monitoring

 

| Strategy | Learning Processes | Hypothesized Influence on Comprehension |
| --- | --- | --- |
| Review main objectives of the text and focus on main ideas and overall meaning | Review and monitor | Enhances calibration by clarifying misunderstandings and tying details to main ideas |
| Read and summarize material in your own words to make it meaningful; use elaboration and create your own examples | Read and relate | Enhances calibration by transforming knowledge into something personally meaningful |
| Reread questions and responses and reflect on what the question is asking; go through and take apart the question, paying attention to relevant concepts | Review, relate, and monitor | Purposefully slowing information processing allows for a more accurate representation of the problem, thus decreasing errors in judgment |
| Use contextual cues in the items and responses, e.g., bolded, italicized, underlined, or capitalized words | Relate | Using contextual cues allows the mind to focus on salient aspects of the problem rather than seductive details, thereby increasing accuracy |
| Highlight text; underline keywords within the question to remind yourself to pay attention to them; use different colors to represent different meanings | Review, relate, and monitor | Highlighting and underlining can help one focus on main ideas and what is truly important, increasing accuracy; however, relying too heavily on this can be counterproductive and may increase errors |
| Relate similar test questions together and read them all before responding to any | Relate and monitor | Relating information together provides a clearer understanding of the material and may highlight inconsistencies that need to be resolved; it may point to information the learner missed, increasing accuracy |
| Use diagrams, tables, pictures, graphs, etc. to help you organize information | Review and relate | These strategies help simplify complex topics by breaking them down into their constituent parts; this increases accuracy by decreasing errors |

Adapted from Gutierrez and Schraw (in press).

However, while the studies by Schraw and colleagues have shown that teachers can effectively use these strategies to improve students’ comprehension monitoring and other learning outcomes, they have not thoroughly investigated why and how these strategies are effective. I argue that the issue is not so much that students are unaware of metacognitive strategies, but rather that many lack conditional metacognitive knowledge: the where, when, and why to apply a given strategy in light of task demands. Future research should investigate these process questions, namely when, how, and why different strategies are successful.
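Several rows of Table 1 claim that a strategy "enhances calibration." Calibration is typically quantified as the gap between confidence and performance; the sketch below (an illustration of the general idea, not the specific measure used in the cited studies) computes two standard indices from item-level data:

```python
def calibration_bias(confidence, correct):
    """Signed gap between mean confidence (0-1 scale) and mean accuracy (0/1).
    Positive values indicate overconfidence, negative values underconfidence."""
    n = len(confidence)
    return sum(confidence) / n - sum(correct) / n

def absolute_accuracy(confidence, correct):
    """Mean unsigned gap between each item's confidence and its outcome;
    0 is perfect calibration, 1 is maximal miscalibration."""
    return sum(abs(c, ) if False else abs(c - a) for c, a in zip(confidence, correct)) / len(confidence)

conf = [1.0, 1.0, 0.5, 0.0]   # item-level confidence ratings
acc = [1, 0, 1, 0]            # item-level correctness
print(calibration_bias(conf, acc))    # 0.125 (slightly overconfident)
print(absolute_accuracy(conf, acc))   # 0.375
```

A strategy that "enhances calibration" in the sense of Table 1 would push both of these numbers toward zero: confidence judgments that more closely match actual performance, item by item.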

Bjork, R. A., Dunlosky, J., & Kornell, N. (2013).  Self-regulated learning: Beliefs, techniques and illusions. Annual Review of Psychology, 64, 417-447.

Bol, L., & Hacker, D. J. (2012). Calibration research: Where do we go from here? Frontiers in Psychology, 3, 1-6.

Carr, M., Taasoobshirazi, G., Stroud, R., & Royer, J. M. (2011). Combined fluency and cognitive strategies instruction improves mathematics achievement in early elementary school. Contemporary Educational Psychology, 36, 323–333.

Dunlosky, J., & Metcalfe, J. (2009).  Metacognition. Thousand Oaks, CA: Sage Publications.

Efklides, A. (2011). Interactions of metacognition with motivation and affect in self-regulated learning: The MASRL model. Educational Psychologist, 46, 6-25.

Grimes, P. W. (2002). The overconfident principles of economics students: An examination of metacognitive skill. Journal of Economic Education, 1, 15–30.

Gutierrez, A. P., & Schraw, G. (in press). Effects of strategy training and incentives on students’ performance, confidence, and calibration. The Journal of Experimental Education: Learning, Instruction, and Cognition.

Hattie, J., Biggs, J., & Purdie, N. (1996). Effects of learning skills interventions on student learning: A meta-analysis. Review of Educational Research, 66, 99-136. doi: 10.3102/00346543066002099

McCormick, C. B. (2003). Metacognition and learning. In W. M. Reynolds & G. E. Miller (Eds.), Handbook of psychology: Educational psychology (pp. 79-102). Hoboken, NJ: John Wiley & Sons.

Montague, M., Krawec, J., Enders, C., & Dietz, S. (2014). The effects of cognitive strategy instruction on math problem solving of middle-school students of varying ability. Journal of Educational Psychology, 106, 469-481.

Nietfeld, J. L., & Schraw, G. (2002). The effect of knowledge and strategy explanation on monitoring accuracy. Journal of Educational Research, 95, 131-142.

Palincsar, A. S. (1991). Scaffolded instruction of listening comprehension with first graders at risk for academic difficulty. In A. M. McKeough & J. L. Lupart (Eds.), Toward the practice of theory-based instruction (pp. 50–65). Mahwah, NJ: Erlbaum.

Pressley, M., & Wharton-McDonald, R.  (1997).  Skilled comprehension and its development through instruction.  School Psychology Review, 26, 448-466.

Soliman, A. M., & Mathna, E. K. (2009). Metacognitive strategy training improves driving situation awareness. Social Behavior and Personality, 37, 1161-1170.

Winne, P. H. (2004). Students’ calibration of knowledge and learning processes: Implications for designing powerful software learning environments. International Journal of Educational Research, 41, 466-488. doi:10.1016/j.ijer.2005.08.012

Zeidner, M., Boekaerts, M., & Pintrich, P. R.  (2000).  Self-regulation: Directions and challenges for future research.  In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.),  Handbook of self-regulation (pp. 13-39).  San Diego, CA: Academic Press.

Zohar, A., & David, A. (2009). Paving a clear path in a thick forest: A conceptual analysis of a metacognitive component. Metacognition and Learning, 4(3), 177-195.