Teacher, Know Thyself (Translation: Use Student Evaluations of Teaching!)

by Guy Boysen, Ph.D., McKendree University

I’ll call him Donald. I am willing to bet that you know a Donald too. Students fled from Donald’s classes in droves. His was the pedagogy of narcissism – “I am the Lord thy teacher and thou shalt have no classroom activities unfocused on me!” Donald’s grading system was so subjective, vindictive, and Byzantine as to be barely defensible. Enrollment in his classes always followed the same pattern: intro courses full of students who did not know any better at the start of the semester and decimated by the end; advanced seminars empty except for a few adoring students with Stockholm syndrome. Asked about his student evaluations, Donald would say, “My seminar evals are good, but I don’t even look at my intro evals anymore – they don’t know about teaching.”

Donald calls to mind the classic metacognitive phenomenon of being unskilled and unaware of it (Kruger & Dunning, 1999; Lodge, 2016; Schumacher, Akers, & Taraban, 2016). This is something teachers see in students all of the time; bad students overestimate their abilities and therefore don’t work to improve. As illustrated by Donald, this phenomenon applies to teachers as well.

There are a number of wide-ranging characteristics that make someone a model teacher (Richmond, Boysen, & Gurung, 2016), but the use of student evaluations to improve teaching is one that has a strong metacognitive component. Student evaluations provide teachers with feedback so that they can engage in metacognitive analysis of their pedagogical skills and practices. Based on that analysis, goals for improvement can be set and pursued.

Recommendations for Using Student Evals

How should teachers use student evaluations to develop metacognitive awareness of their pedagogical strengths and weaknesses? Several suggestions can be found in An Evidence-Based Guide for College and University Teaching: Developing the Model Teacher (Richmond, Boysen, & Gurung, 2016).

Set goals for improvement and follow through with them.

Have you ever gotten on a scale and not liked the number staring back at you? Did you just get on and off the scale repeatedly expecting the number to change? No? Well, that trick doesn’t work in teaching either. Collecting student evaluations without using them to set and implement goals for improvement is like a diet that only consists of repeated weigh-ins – the numbers will not change without the application of direct effort. Use your student evaluations, preferably in collaboration with a mentor or teaching expert, to set manageable goals for change. 

Select the correct assessment tool.    

Wouldn’t it be great if we could select our own teaching evaluations?

But wait! You can select your own teaching evaluations. Official, summative evaluations may be set at the institutional level, but teachers can implement any survey they want for professional development purposes. Choose wisely, however.

If you are a numbers person, select a well-researched measure that provides feedback across several dimensions of teaching that are relevant to you. Perhaps the best known of these is the Student Evaluation of Educational Quality (SEEQ), which measures teaching quality across nine different factors (Marsh, 1983). The advantage of this type of measure is that the results can be scientifically trusted and are detailed enough to inform goals for improvement.
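If the numeric results arrive as item-level ratings, a little scripting can roll them up by dimension and point to the weakest area as a candidate goal. The sketch below is purely illustrative: the item names, dimension groupings, and ratings are invented and do not reproduce the actual SEEQ factors or items.

```python
# Illustrative sketch: summarizing item-level evaluation ratings by dimension.
# Item names, dimension groupings, and ratings are invented (not the actual SEEQ).
from statistics import mean

# Each response maps item labels to ratings on a 1-5 scale.
responses = [
    {"org_1": 4, "org_2": 3, "rapport_1": 5, "rapport_2": 5, "assess_1": 2},
    {"org_1": 3, "org_2": 2, "rapport_1": 4, "rapport_2": 5, "assess_1": 2},
    {"org_1": 4, "org_2": 4, "rapport_1": 5, "rapport_2": 4, "assess_1": 3},
]

dimensions = {
    "Organization": ["org_1", "org_2"],
    "Rapport": ["rapport_1", "rapport_2"],
    "Assessment": ["assess_1"],
}

# Average every item within a dimension across all respondents.
summary = {
    name: mean(response[item] for response in responses for item in items)
    for name, items in dimensions.items()
}

# The lowest-scoring dimension prints first and is a candidate goal for improvement.
for name, score in sorted(summary.items(), key=lambda pair: pair[1]):
    print(f"{name}: {score:.2f}")
```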

Not a numbers person? You might ask for written comments from students. Whatever you want to know about your teaching, you can simply ask – believe me, students have opinions! Although analyzing student comments can be laborious (Lewis, 2001), they can offer unequalled richness and specificity. Beware of asking for general feedback, however; general questions tend to elicit general feedback (Hoon, Oliver, Szpakowska, & Newton, 2015). Instead, provide specific prompts such as the following (a small tallying sketch for the resulting comments appears after the list).

  • What should I/we STOP doing in this class?
  • What should I/we START doing in this class?
  • What should I/we CONTINUE doing in this class? 
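When the written responses pile up, a short script can take some of the labor out of a first read by tallying recurring words under each prompt. The sketch below is only a rough illustration: the comments, stopword list, and prompt grouping are invented, and the counts point back to the comments themselves rather than replacing a careful reading.

```python
# Illustrative sketch: surfacing recurring words in Stop/Start/Continue comments.
# The comments and stopword list are invented; real data would come from a survey export.
from collections import Counter
import re

comments = {
    "stop": ["stop reading straight from the slides", "the slides are just read to us"],
    "start": ["start posting practice questions", "more practice problems please"],
    "continue": ["keep the weekly review sessions", "continue the review sessions"],
}

STOPWORDS = {"the", "to", "a", "of", "us", "are", "from", "more", "just", "please"}

for prompt, texts in comments.items():
    words = Counter(
        word
        for text in texts
        for word in re.findall(r"[a-z']+", text.lower())
        if word not in STOPWORDS
    )
    # The most common words hint at themes worth a closer read of the full comments.
    print(prompt.upper(), words.most_common(3))
```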

Don’t wait until the end of the semester.

Imagine if Donald could get feedback from the students who drop his classes. Perhaps he could make pedagogical changes to reach those students before they flee. Guess what, he can! Formative assessment is the key.

Teachers often allow official, end-of-semester student evaluations to serve as their only feedback from students. The problem with this approach is that the feedback comes too late to make mid-semester course corrections. This is analogous to the metacognitive importance of providing students with early feedback on their performance. You wouldn’t expect students to succeed in your course if a final exam were the only grade, would you? Well, don’t put yourself in the same position. Model teachers ask for student feedback both at the end of the semester (i.e., summative) and early enough in the semester to make immediate improvements (i.e., formative).

Make changes large and small.

Student evaluations can be used to inform revisions to all levels of pedagogy. Imagine that students report being absolutely bewildered by a concept in your class. Potential responses to this feedback could be to change (a) the time spent on the concept in class, (b) scaffolding of knowledge needed to understand the concept, (c) the availability of study aids related to the concept, (d) the basic instructional technique used to teach the concept, or (e) the decision to even include the concept in the course. For model teachers, student feedback can inform changes large and small.

Conclusion

Every single semester, students comment on my evaluations that they want the tests to be multiple choice rather than short answer/essay, and every semester I tell them that I will not be changing the test format because students do not study as hard for multiple-choice tests. My point, then, is not that model teachers incorporate all student feedback into their courses. However, a failure to respond should be a sound and intentional pedagogical choice rather than a Donald-like failure of metacognition – don’t be caught unskilled and unaware.

References

Hoon, A., Oliver, E., Szpakowska, K., & Newton, P. (2015). Use of the Stop, Start, Continue method is associated with the production of constructive qualitative feedback by students in higher education. Assessment & Evaluation in Higher Education, 40, 755-767. doi:10.1080/02602938.2014.956282

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77, 1121-1134.

Lewis, K. G. (2001). Making sense of student written comments. New Directions for Teaching and Learning, 87, 25-32.

Lodge, J. (2016). Hypercorrection: Overcoming overconfidence with metacognition. Retrieved from https://www.improvewithmetacognition.com/hypercorrection-overcoming-overconfidence-metacognition/

Marsh, H. W. (1983). Multidimensional ratings of teaching effectiveness by students from different academic settings and their relation to student/course/instructor characteristics. Journal of Educational Psychology, 75, 150-166. doi:10.1037/0022-0663.75.1.150

Richmond, A. S., Boysen, G. A., & Gurung, R. A. R. (2016). An evidence-based guide for college and university teaching: Developing the model teacher. Routledge.

Schumacher, J. R., Akers, E., & Taraban, R. (2016). Unskilled and unaware: A metacognitive bias. Retrieved from https://www.improvewithmetacognition.com/unskilled-unaware-metacognitive-bias/


Hitting the Metacognitive Target with Learning Objectives

by Guy A. Boysen, McKendree University (gaboysen@mckendree.edu)

Imagine that you and your colleagues have just retired to the pub for a well-deserved pint at the end of a long week of work in the knowledge factory. After a few refreshing sips, you hear the challenge of “Darts!” Rather than playing the usual game of Cricket or 301, the challenger proposes a new competition but does not bother to share the rules. So, you lob darts at random, sometimes hearing “Nice shot!” and other times “Too bad, mate!” Without a clear target to aim for, however, there is no way for you to improve your performance. You lose, and the next round is on you.

If that sounds frustrating, imagine how students feel when they don’t know what to aim for in their efforts at learning – that is, how they feel in classes without clear learning objectives. Learning objectives refer to statements of what students should be able to do after an educational experience. High-quality learning objectives are clear, measurable, and focused on student outcomes rather than instructional methods (Boysen, 2012). Consider these examples.

  • Students in Spanish will be able to ask grammatical questions to solicit various forms of information from Spanish speakers.
  • Students who complete library training will be able to identify peer-reviewed journal articles using the EBSCO database.
  • Students in Statistics will be able to compute means and standard deviations using hand calculations.
  • Readers of this blog will be able to describe the relation between learning objectives and metacognition.

In a straightforward way, learning objectives let students know what they need to know, which makes them an essential tool for the metacognitive skill of self-assessing progress toward educational goals.

Just as you will never win at darts without knowing where to aim, students cannot intentionally evaluate where they are in the learning process without objectives. For example, students in Spanish who are unaware of the learning objective to ask various grammatical questions might mistakenly believe that they are muy bueno with “¿Qué pasa?” as their only query. In contrast, students who are aware of the learning objective can more effectively use metacognition by self-assessing their ability to do things like ask for food, directions, the time, or an add/drop slip. Although research is needed to determine if there is a direct link between learning objectives and metacognition, there is longstanding evidence that providing students with learning objectives leads to increased learning (Duell, 1974; Rothkopf & Kaplan, 1972).

Learning objectives clearly have potential as metacognitive tools for helping students assess their own learning, so how do the best college teachers use them? Well, according to An Evidence-Based Guide for College and University Teaching: Developing the Model Teacher (Richmond, Boysen, & Gurung, 2016), there are two fundamental questions that model teachers can say “Yes!” to with regard to learning objectives.

  • Do you “articulate specific, measurable learning objectives in your syllabi or other course documents?” (p. 197)

Model teachers ensure that, for every one of their readings, activities, tests, and papers, students can determine the learning objective and use it to consider whether or not they are achieving the intended goal. The syllabus is an especially important metacognitive tool. It is the place to introduce students to the concepts of metacognition and learning objectives. In fact, you can even use it to establish learning objectives about the development of metacognition itself (see Richmond, 2015, for more on metacognitive syllabi).

  • Do you “provide constructive feedback to students that is related to their achievement of learning objectives?” (p. 197)

Model teachers recognize that students may be unskilled and unaware (Taraban, 2016), so they frequently offer opportunities for objective evaluation. Evaluations such as quizzes, tests, and rubric scores help to keep students’ self-assessment of learning grounded in reality (see Was, 2014, and Taraban, 2014, for more on feedback). For example, students may be 100% confident in their ability to ask questions in Spanish – that is, until an oral examination. Struggling to stammer out a modest “¿Qué hora es?” and nothing else should lead students to a clearer awareness of their current abilities.
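One simple way to make that reality check concrete (an illustration, not a method from the book) is to compare what students predict they will score with what they actually score and flag large gaps. The names, scores, and 10-point threshold below are invented.

```python
# Illustrative sketch: flagging overconfidence by comparing predicted and actual quiz scores.
# Names, scores, and the 10-point threshold are invented for the example.
predictions = {"Ana": 95, "Ben": 70, "Cai": 88}  # self-predicted scores before the quiz
actuals = {"Ana": 62, "Ben": 74, "Cai": 85}      # graded quiz scores

for student, predicted in predictions.items():
    gap = predicted - actuals[student]
    label = "overconfident" if gap > 10 else "reasonably calibrated"
    print(f"{student}: predicted {predicted}, scored {actuals[student]} ({label})")
```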

In summary, don’t let your students lob random intellectual darts at mysterious learning targets. Be a model teacher by providing them with clear learning objectives and feedback on their success so that they can hone their metacognitive skills!

References

Boysen, G. A. (2012). A guide to writing learning objectives for teachers of psychology. Society for the Teaching of Psychology Office of Teaching Resources in Psychology Online. Retrieved from https://legacy.berea.edu/academic-assessment/files/2015/02/Guide-to-Writing-Learning-Objectives-for-Teachers-of-Psychology-Boysen-2012.pdf

Duell, O. P. (1974). Effect of type of objective, level of test questions, and the judged importance of tested materials upon posttest performance. Journal of Educational Psychology, 66, 225–323.

Richmond, A. S. (2015, March 6). The metacognitive syllabus. Retrieved from https://www.improvewithmetacognition.com/metacognitive-syllabus/

Richmond, A. S., Boysen, G. A., & Gurung, R. A. R. (2016). An evidence-based guide for college and university teaching: Developing the model teacher. Routledge.

Rothkopf, E. Z., & Kaplan, R. (1972). Exploration of the effect of density and specificity of instructional objectives on learning from text. Journal of Educational Psychology, 63, 295–302.

Taraban, R. (2014, December 10). Mind the feedback gap. Retrieved from https://www.improvewithmetacognition.com/mind-the-feedback-gap/

Taraban, R. (2016, April 1). Unskilled and unaware: A metacognitive bias. Retrieved from https://www.improvewithmetacognition.com/unskilled-unaware-metacognitive-bias/

Was, C. (2014, August 28). Testing improves knowledge monitoring. Retrieved from https://www.improvewithmetacognition.com/testing-improves-knowledge-monitoring/