Testing Improves Knowledge Monitoring


by Chris Was, Kent State University

Randy Isaacson and I have spent a great deal of time and effort creating a curriculum for an educational psychology class to encourage metacognition in preservice teachers. Randy spent a number of years developing this curriculum before I joined him in an effort to improve it and to use it to test hypotheses about whether training can improve metacognition in undergraduate preservice teachers. A detailed description of the curriculum can be found in the National Teaching and Learning Forum (Isaacson & Was, 2010), but I want to take this opportunity to give a simple overview of how we structured our courses and some of the results produced by using this curriculum to train undergraduates to be metacognitive in their studies.

With our combined 40+ years of teaching, we are quite clear that most undergraduates do not come equipped with the self-regulation skills one would hope students would acquire before entering the university. Even more disappointing, students lack the metacognition required to successfully regulate their own learning behaviors. Creating an environment that not only encourages, but also requires, students to be metacognitive is not a simple task. However, it can be accomplished.

Variable Weight-Variable Difficulty Tests

The most important component of the course structure is creating an environment with extensive and immediate feedback. The feedback should be designed to help students identify specific deficiencies in their learning strategies and metacognition. We developed an extensive array of learning resources that guide students toward focusing on knowing what they know and when they know it. The first resource we developed is a test format that helps students reflect on and monitor their knowledge of the content and the items on the test. In our courses, students judge their accuracy and confidence in their responses for each item and predict their scores for each exam.

Throughout the semester, students take a weekly exam (the courses meet Monday, Wednesday, and Friday, with exams on Friday). Each exam uses a variable weight, variable difficulty format and contains a total of 35 questions: 15 Level I questions at the knowledge level, 15 Level II questions at the evaluation level, and 5 Level III questions at the application/synthesis level. Scoring awards more points for a correct response as question difficulty and confidence increase. Students choose 10 Level I questions and place those answers on the left side of the answer sheet; these are worth 2 points each. Ten Level II questions, worth 5 points each, and three Level III questions, worth 6 points each, are also placed on the left. Students then place the questions they are least confident about on the right side of the answer sheet, where each is worth only 1 point (the remaining 5 of the 15 Level I questions, 5 of the 15 Level II questions, and 2 of the 5 Level III questions). The scoring yields a possible 100 points per exam. Correlations between total score and absolute score (number correct out of 35) typically range from r = .87 to r = .94. Although we provide students with many other resources to encourage metacognition, we feel that the left-right test format is the most powerful influence on students' knowledge monitoring through the semester.
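To make the arithmetic of the point allocation concrete, here is a minimal sketch (hypothetical code, not part of the course materials) that tallies the maximum possible score under the placement rules described above; the names and structure are my own illustration.

```python
# Point values per question by level for left-side (higher-weight) placement,
# as described in the text; right-side (least-confident) questions earn 1 point each.
LEFT_POINTS = {"I": 2, "II": 5, "III": 6}
RIGHT_POINTS = 1

# Required placement per level: (number placed on the left, number placed on the right)
PLACEMENT = {"I": (10, 5), "II": (10, 5), "III": (3, 2)}

def max_score():
    """Maximum possible exam score if every answer is correct."""
    total = 0
    for level, (n_left, n_right) in PLACEMENT.items():
        total += n_left * LEFT_POINTS[level] + n_right * RIGHT_POINTS
    return total

print(max_score())  # 10*2 + 10*5 + 3*6 + (5 + 5 + 2)*1 = 100
```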

The Results

Along with our collaborators, we have conducted a number of studies using the variable weight-variable difficulty (VW-VD) tests as a treatment. Our research questions focus on whether the test format increases knowledge monitoring accuracy, on individual differences in knowledge monitoring and metacognition, and on psychometric issues in measuring knowledge monitoring. Below is a brief description of some of our results.

Hartwig, Was, Isaacson, & Dunlosky (2011) found that a simple knowledge monitoring assessment predicted both test scores and number of items correct on the VW-VD tests.

Isaacson & Was (2010) found that after a semester of VW-VD tests, students' accuracy on an unrelated measure of knowledge monitoring increased.