Teacher, Know Thyself (Translation: Use Student Evaluations of Teaching!)


by Guy Boysen, Ph.D., McKendree University

I’ll call him Donald. I am willing to bet that you know a Donald too. Students fled from Donald’s classes in droves. His was the pedagogy of narcissism – “I am the Lord thy teacher and thou shalt have no classroom activities unfocused on me!” Donald’s grading system was so subjective, vindictive, and Byzantine as to be barely defensible. Enrollment in his classes always followed the same pattern: intro courses full of students who did not know any better at the start of the semester and decimated by the end; advanced seminars empty except for a few adoring students with Stockholm syndrome. Asked about his student evaluations, Donald would say, “My seminar evals are good, but I don’t even look at my intro evals anymore – they don’t know about teaching.”

Donald calls to mind the classic metacognitive phenomenon of being unskilled and unaware of it (Kruger & Dunning, 1999; Lodge, 2016; Schumacher, Akers, & Taraban, 2016). This is something teachers see in students all of the time; bad students overestimate their abilities and therefore don’t work to improve. As illustrated by Donald, this phenomenon applies to teachers as well.

There are a number of wide-ranging characteristics that make someone a model teacher (Richmond, Boysen, & Gurung, 2016), but the use of student evaluations to improve teaching is one that has a strong metacognitive component. Student evaluations provide teachers with feedback so that they can engage in metacognitive analysis of their pedagogical skills and practices. Based on that analysis, goals for improvement can be set and pursued.

Recommendations for Using Student Evals

How should teachers use student evaluations to develop metacognitive awareness of their pedagogical strengths and weaknesses? Several suggestions can be found in An Evidence-Based Guide for College and University Teaching: Developing the Model Teacher (Richmond, Boysen, & Gurung, 2016).

Set goals for improvement and follow through with them.

Have you ever gotten on a scale and not liked the number staring back at you? Did you just get on and off the scale repeatedly expecting the number to change? No? Well, that trick doesn’t work in teaching either. Collecting student evaluations without using them to set and implement goals for improvement is like a diet that only consists of repeated weigh-ins – the numbers will not change without the application of direct effort. Use your student evaluations, preferably in collaboration with a mentor or teaching expert, to set manageable goals for change. 

Select the correct assessment tool.    

Wouldn’t it be great if we could select our own teaching evaluations? Mine might look something like this.

But wait! You can select your own teaching evaluations. Official, summative evaluations may be set at the institutional level, but teachers can implement any survey they want for professional development purposes. Choose wisely, however.

If you are a numbers person, select a well-researched measure that provides feedback across several dimensions of teaching that are relevant to you. Perhaps the best known of these is the Student Evaluation of Educational Quality (SEEQ), which measures teaching quality across nine different factors (Marsh, 1983). The advantages to this type of measure are that the results can be scientifically trusted and are detailed enough to inform goals for improvement.
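If you do want to crunch the numbers yourself, the sketch below (in Python) shows the kind of dimension-level summary that makes a multidimensional instrument useful for goal setting. The dimension names and 1-5 ratings are invented for illustration; they are not the actual SEEQ items or scoring key.

    # Minimal sketch: summarize an SEEQ-style, multi-dimension evaluation.
    # The dimensions and ratings below are hypothetical, not the real SEEQ.
    from statistics import mean

    # Ratings on a 1-5 scale, pooled across students and items for each dimension.
    ratings = {
        "Enthusiasm": [4, 5, 4, 4, 5],
        "Organization": [3, 2, 3, 3, 2],
        "Rapport": [5, 4, 5, 4, 4],
    }

    # Average each dimension, list them weakest first, and flag low scores
    # as candidate goals for improvement.
    averages = {dim: round(mean(scores), 2) for dim, scores in ratings.items()}
    for dim, avg in sorted(averages.items(), key=lambda pair: pair[1]):
        flag = "  <- possible goal for improvement" if avg < 3.5 else ""
        print(f"{dim}: {avg}{flag}")

A summary like this, run on real responses, makes it easier to pick one or two concrete goals rather than reacting to a single overall number.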

Not a numbers person? You might ask for written comments from students. Whatever you want to know about your teaching, you can simply ask – believe me, students have opinions! Although analyzing student comments can be laborious (Lewis, 2001), they offer unequaled richness and specificity. Beware of asking for general feedback, however. General questions tend to elicit general feedback (Hoon, Oliver, Szpakowska, & Newton, 2015). Instead, provide specific prompts such as the following.

  • What should I/we STOP doing in this class?
  • What should I/we START doing in this class?
  • What should I/we CONTINUE doing in this class? 

Don’t wait until the end of the semester.

Imagine if Donald could get feedback from the students who drop his classes. Perhaps he could make pedagogical changes to reach those students before they flee. Guess what, he can! Formative assessment is the key.

Teachers often allow official, end-of-semester student evaluations to serve as their only feedback from students. The problem with this approach is that the feedback comes too late to make midsemester course corrections. This is analogous to the metacognitive importance of providing students with early feedback on their performance. You wouldn’t expect students to succeed in your course if a final exam was the only grade, would you? Well, don’t put yourself in the same position. Model teachers ask for student feedback both at the end of the semester (i.e., summative) and early enough in the semester to make immediate improvements (i.e., formative).

Make changes large and small.

Student evaluations can be used to inform revisions to all levels of pedagogy. Imagine that students report being absolutely bewildered by a concept in your class. Potential responses to this feedback could be to change (a) the time spent on the concept in class, (b) scaffolding of knowledge needed to understand the concept, (c) the availability of study aids related to the concept, (d) the basic instructional technique used to teach the concept, or (e) the decision to even include the concept in the course. For model teachers, student feedback can inform changes large and small.

Conclusion

Every single semester, students comment on my evaluations that they want the tests to be multiple choice rather than short answer/essay, and every semester I tell them that I will not be changing the test format because students do not study as hard for multiple-choice tests. My point, then, is not that model teachers incorporate all student feedback into their courses. However, a failure to respond should be a sound and intentional pedagogical choice rather than a Donald-like failure of metacognition – don’t be caught unskilled and unaware.

References

Hoon, A., Oliver, E., Szpakowska, K., & Newton, P. (2015). Use of the Stop, Start, Continue method is associated with the production of constructive qualitative feedback by students in higher education. Assessment & Evaluation in Higher Education, 40, 755-767. doi:10.1080/02602938.2014.956282

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77, 1121-1134.

Lewis, K. G. (2001). Making sense of student written comments. New Directions for Teaching and Learning, 87, 25-32.

Lodge, J. (2016). Hypercorrection: Overcoming overconfidence with metacognition. Retrieved from https://www.improvewithmetacognition.com/hypercorrection-overcoming-overconfidence-metacognition/

Marsh, H. W. (1983). Multidimensional ratings of teaching effectiveness by students from different academic settings and their relation to student/course/instructor characteristics. Journal of Educational Psychology, 75, 150-166. doi:10.1037/0022-0663.75.1.150

Richmond, A. S., Boysen, G. A., & Gurung, R. A. R. (2016). An evidence-based guide for college and university teaching: Developing the model teacher. Routledge.

Schumacher, J. R., Akers, E., & Taraban, R. (2016). Unskilled and unaware: A metacognitive bias. Retrieved from https://www.improvewithmetacognition.com/unskilled-unaware-metacognitive-bias/