The Challenge of Deep Learning in the Age of LearnSmart Course Systems

by Lauren Scharff, Ph.D. (U. S. Air Force Academy)

One of my close friends and colleagues can reliably be counted on to point out that students are rational decision-makers. There is only so much time in their days, and they have full schedules. If there are ways for students to spend less time per course and still “be successful,” they will find them. Unfortunately, their efficient choices may short-change their long-term, deep learning.

This tension between efficiency and deep learning was again brought to my attention when I learned about the “LearnSmart” (LS) text application that automatically comes with the e-text chosen by my department for the core course I’m teaching this semester. On the plus side, the publisher has incorporated learning science (metacognitive prompts and spaced review of material) into the design of LearnSmart. Less positively, some aspects of the LearnSmart design seem to lead many students to choose efficiency over deep learning.

In a nutshell, the current LS design prompts learning shortcuts in several ways. Pre-highlighted text discourages reading of the non-highlighted material, and the fact that the LS quiz questions come primarily from the highlighted material reinforces those selective reading tendencies. A less conspicuous learning trap results from the design of the LS quiz credit algorithm that incorporates the metacognitive prompts. The metacognitive prompts not only take a bit of extra time to answer; students also get credit only for questions on which they indicate good understanding of the material. If they indicate questionable understanding, even if they ultimately answer correctly, that question does not count toward the required number of pre-class reading-check questions. [If you’d like more details about the LS quiz process design, please see the text at the bottom of this post.]

Last semester, the fact that many of our students were choosing efficiency over deep learning became apparent when the first exam was graded. Despite very high completion rates on the LS pre-class reading quizzes and lively class discussions, exam grades were, on average, more than a letter grade lower than in previous semesters.

The bottom line is that learning tools, just like teaching tools, are only effective if they are used in ways that align with objectives. As instructors, our objective typically is student learning (hopefully deep learning in most cases). Students’ objectives might be correlated with learning (e.g., grades) or not (e.g., what is the fastest way to complete this assignment?). If we design our courses or choose activities that allow students to complete them efficiently (quickly) while also obtaining good grades, then we are inadvertently supporting shortcuts around real learning.

So, how do we tackle this efficiency-shortcut challenge as we go into the new semester? The publisher offers a tool that lets us track student responses by level of self-reported understanding and by correctness. We can see whether any students are giving the majority of their responses in the “I know it” category. If many of those responses are also incorrect, it is likely that those students are prioritizing short-term efficiency over long-term learning, and we can talk with them one-on-one about their choices. That’s helpful, but it’s reactive.
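For readers who like to see the logic spelled out, here is a minimal sketch of that kind of flagging. It assumes a hypothetical export of LS responses with per-question confidence and correctness fields; the field names and thresholds are illustrative, not the publisher’s actual format.

```python
# Hypothetical sketch of the kind of flagging described above.
# The response format (student, confidence, correct) and the thresholds
# are assumptions for illustration, not the publisher's actual export.
from collections import defaultdict

def flag_efficiency_seekers(responses, min_know_it=0.8, max_accuracy=0.6):
    """responses: iterable of dicts such as
    {"student": "A123", "confidence": "I know it", "correct": False}."""
    by_student = defaultdict(list)
    for r in responses:
        by_student[r["student"]].append(r)

    flagged = []
    for student, rows in by_student.items():
        know_it = [r for r in rows if r["confidence"] == "I know it"]
        if not know_it:
            continue
        know_it_rate = len(know_it) / len(rows)
        accuracy = sum(r["correct"] for r in know_it) / len(know_it)
        # Mostly "I know it" responses that are often wrong suggests the
        # student is optimizing for completion speed, not understanding.
        if know_it_rate >= min_know_it and accuracy <= max_accuracy:
            flagged.append(student)
    return flagged
```

The thresholds are judgment calls; the point is simply to surface students whose confidence reports and accuracy diverge sharply so that the one-on-one conversation can happen early.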

The real question is, how do we get students to consciously prioritize their long-term learning over short-term efficiency? For that, I suggest additional explicit discussion and another layer of metacognition. I plan to check in with the students regularly, hold class discussions aimed at bringing their choices about their learning behaviors into conscious awareness, and positively reinforce their self-regulation of deep-learning behaviors.

I’ll let you know how it goes.

——————————————–

Here is some additional background on the e-text and the complimentary LearnSmart (LS) text.

There are two ways to access the text. The first is an electronic version of the printed text, with nice annotation capabilities for students who want to underline, highlight, or take notes. The second is through the LS chapters. As mentioned above, when students open these chapters, they will find that some of the text has already been highlighted for them!

As they read through the LS chapters, students are periodically prompted with LS quiz questions (drawn primarily from the highlighted material). These questions are where some of the learning science comes in. Students are given a question about the material, but rather than being shown the multiple-choice response options right away, they are first given a metacognitive prompt: they are asked how confident they are that they know the answer before seeing the response options. They can choose “I know it,” “Think so,” “Unsure,” or “No idea.” Only after they report this “awareness” of their understanding are they shown the response options and allowed to answer the question.

This next point is key: it turns out that in order to get credit for question completion in LS, students must do BOTH of the following: 1) choose “I know it” when indicating understanding, and 2) answer the question correctly. If students indicate any other level of understanding, or if they answer incorrectly, LS will give them more questions on that topic, and the effort for that question won’t count towards completion of the required number of questions for the pre-class activity.
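To make the rule concrete, here is a minimal sketch of the completion logic as I understand it from the student-facing behavior. The function and field names are mine, not the publisher’s implementation.

```python
# Minimal sketch of the LS completion rule described above; names and
# data structures are illustrative, not the publisher's implementation.

def counts_toward_completion(confidence, answered_correctly):
    """A question counts only if the student reports full confidence
    AND answers correctly; any other combination does not count and
    triggers more questions on the same topic."""
    return confidence == "I know it" and answered_correctly

def run_reading_check(attempts, required=10):
    """attempts: list of dicts like {"confidence": "Unsure", "correct": True}."""
    completed = 0
    for attempt in attempts:
        if counts_toward_completion(attempt["confidence"], attempt["correct"]):
            completed += 1
        # else: LS queues additional questions from this topic
        if completed >= required:
            break
    return completed
```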

And there’s the rub. Efficient students quickly learn that they can complete the pre-class reading quiz activity much more quickly if they choose “I know it” on every metacognitive understanding probe. If they then guess at the question and answer correctly, it counts toward their completion of the activity and they move on. If they answer incorrectly, LS gives them another question from that topic, but they are no worse off with respect to time and effort than if they had indicated that they were unsure of the answer.

If students actually take the time to use, rather than shortcut, the LS quiz features (there are additional ones I haven’t mentioned here), their deep learning should be enhanced. However, unless they come to value deep learning over efficiency and short-term grades (e.g., quiz completion), there is no benefit to the technology. In fact, it might further undermine their learning by creating a false sense of understanding.


Metacognition in Academic and Business Environments

by Dr. John Draeger, SUNY Buffalo State

I gave two workshops on metacognition this week — one to a group of business professionals associated with the Organizational Development Network of Western New York and the other to faculty at Genesee Community College. Both workshops began with the uncontroversial claim that being effective (e.g., in business, learning, teaching) requires finding the right strategy for the task at hand. The conversation centered on how metacognition can help make that happen. For example, metacognition encourages us to be explicit and intentional about our planning, to monitor our progress, to make adjustments along the way, and to evaluate our performance afterwards. While not a “magic elixir,” metacognition can help us become more aware of where, when, why, and how we are or are not effective.


As I prepared for these two workshops, I decided to include one of my favorite metacognition resources as a key part of each. Kimberly Tanner (2012) offers a series of questions that prompt metacognitive planning, monitoring, and evaluation. By way of illustration, I adapted Tanner’s questions to a familiar academic task, namely reading. Not all students are as metacognitive as we would like them to be. When asked to complete a reading assignment, for example, some students will interpret the task as turning a certain number of pages (e.g., read pages 8–19). They read the words, flip the pages, and the task is complete when they reach the end. Savvy students realize that turning pages is not much of a reading strategy. They reflect on their professor’s instructions and the course objectives, and these reflections help them intentionally adopt an appropriate reading strategy. In short, these savvy students are engaging in metacognition. They are guided by Tanner-style questions like those in the table below.

 

Table: Using metacognition to read more effectively (Adapted from Tanner, 2012)

Task: Reading
Planning: What do I already know about this topic? How much time do I need to complete the task? What strategies do I intend to use?
Monitoring: What questions are arising? Are my strategies working? What is most confusing? Am I struggling with motivation or content? What other strategies are available?
Evaluating: To what extent did I successfully complete the task? To what extent did I use the resources available to me? What confusions do I have that still need to be clarified? What worked well?

Task: Big picture
Planning: Why is it important to learn this material? How does this reading align with course objectives?
Monitoring and evaluating: To what extent has completing this reading helped me with other learning goals? What have I learned in this course that I could use in the future?

After considering the table with reference to student reading, I asked the business group how the table might be adapted to a business context. They pointed out that middle managers are often flummoxed by company initiatives that either lack specificity or fail to align with the company’s mission and values. This is reminiscent of students who are paralyzed by what they take to be an ill-defined assignment (e.g., “write a reflection paper on what you just read”). Like the student scrambling to write the paper the night before, business organizations can be reactive. Like the student who tends to do what they’ve done before in their other classes (e.g., put some quotations in a reflection paper to make it sound fancy), businesses are often carried along by organizational culture and past practice. When facing adversity, for example, organizational structure often suggests that doing something now (anything! Just do it!) is preferable to doing nothing at all. Like savvy students, however, savvy managers recognize the importance of explicitly considering and intentionally adopting the response strategies most likely to further organizational goals. This requires metacognition, and adapting the Tanner-style table is a place to start.

When I discussed the Tanner-style table with the faculty at Genesee Community College, they offered a wide variety of suggestions for how the table might be adapted for use in their courses. For example, some suggested that my reading example presupposed that students actually complete their reading assignments, and they offered ideas for how metacognitive prompts could be incorporated early in the course to bring out the importance of the reading to mastery of the course material. Others suggested that metacognitive questions could be used to supplement prepackaged online course materials. Another offered that he sometimes “translates” historical texts into more accessible English, but he is not always certain whether this is good for students. In response, someone pointed out that metacognitive prompts could help the faculty member more explicitly formulate the learning goals for the class and then consider whether the “translated” texts align with those goals.

In both business and academic contexts, I stressed that there is nothing “magical” about metacognition. It is not a quick fix or a cure-all. However, it does prompt us to ask difficult and often uncomfortable questions about our own efficacy. For example, participants in both workshops reported a tendency we all share: wanting to do things “our own way” even when that is not the most effective approach. Metacognition puts us on the road toward better planning, better monitoring, better acting, and better alignment with our overall goals.

 


References

Tanner, K. D. (2012). Promoting student metacognition. CBE-Life Sciences Education, 11(2), 113-120.


Quantifying Metacognition — Some Numeracy behind Self-Assessment Measures

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

Early this year, Lauren Scharff directed us to what might be one of the most influential reports on the quantification of metacognition: Kruger and Dunning’s 1999 “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.” In the 16 years that have since elapsed, a popular belief sprang from that paper and became known as the “Dunning-Kruger effect.” Wikipedia describes the effect as a cognitive bias in which relatively unskilled individuals suffer from illusory superiority, mistakenly assessing their ability to be much higher than it really is. Wikipedia thus describes a true metacognitive handicap: a lack of ability to self-assess. I consider Kruger and Dunning (1999) seminal because it represents what may be the first attempt to establish a way to quantify metacognitive self-assessment. Yet, as time passes, we always learn ways to improve on any good idea.

At first, quantifying the ability to self-assess seems simple. It appears that comparing a direct measure of confidence to perform taken through one instrument with a direct measure of demonstrated competence taken through another instrument should do the job nicely. For people skillful in self-assessment, the scores on both self-assessment and performance measures should be about equal. Seriously large differences can indicate underconfidence on one hand or “illusory superiority” on the other.
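As a rough illustration of that simple comparison (a sketch only, not the scoring procedure of any particular published instrument), one could compute a per-person difference between mean self-assessed competence and mean demonstrated competence on a common 0–100 scale:

```python
# Illustrative sketch only: compare self-assessed and demonstrated
# competence for one person on a common 0-100 scale. This is not the
# scoring procedure of any particular published instrument.

def self_assessment_error(self_ratings, test_scores):
    """Both arguments are lists of 0-100 values for one person (e.g.,
    item-level self-ratings and the corresponding exam-item scores).
    Returns mean(self-assessed) - mean(demonstrated):
      near 0  -> well-calibrated self-assessment
      large + -> overconfidence ("illusory superiority")
      large - -> underconfidence."""
    mean_self = sum(self_ratings) / len(self_ratings)
    mean_demo = sum(test_scores) / len(test_scores)
    return mean_self - mean_demo

# Example: self-ratings around 80 but scores around 60 give roughly +20.
print(self_assessment_error([80, 85, 75], [60, 65, 55]))  # 20.0
```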

The Signal and the Noise

In practice, measuring self-assessment accuracy is not nearly so simple. The instruments of social science yield data consisting of the signal, which expresses the relationship between our actual competency and our self-assessed feelings of competency, plus significant noise generated by human error and inconsistency.

By analogy, consider the signal as your favorite music on a radio station, the measuring instrument as your radio receiver, the noise as the static that intrudes on your favorite music, and the data as the actual mix of signal and noise that you hear. The radio signal may truly exist, but unless we construct suitable instruments to detect it, we will not be able to generate convincing evidence that it even exists. Such failures can lead to the conclusion that metacognitive self-assessment is no better than random guessing.

Your personal metacognitive skill is analogous to an ability to tune to the clearest signal possible. In this case, you are “tuning in” to yourself—to your “internal radio station”—rather than tuning the instruments that measure this signal externally. In developing self-assessment skill, you are working to attune your personal feelings of competence to the clearest and most accurate picture of your actual competence. Feedback from the instruments has value because it helps us see how well we have achieved the ability to self-assess accurately.

Instruments and the Data They Yield

General, global questions such as “How would you rate your ability in math?” “How well can you express your ideas in writing?” or “How well do you understand science?” may prove to be crude, blunt self-assessment instruments. Instead of single general questions, more granular instruments, such as knowledge surveys that elicit multiple measures of specific information, seem to be needed.

Because the true signal is harder to detect than often supposed, researchers need a critical mass of data to confirm it. Pressure to publish in academia can lead researchers to rush out results from small databases obtainable in a brief time, rather than spending the time, sometimes years, needed to build a database large enough to provide reproducible results.
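A purely illustrative simulation (arbitrary parameters, not drawn from any real dataset) shows why sample size matters: even when the underlying relationship between actual and self-assessed competency is held fixed, correlation estimates from small samples swing widely, while larger samples settle down.

```python
# Purely illustrative simulation with arbitrary parameters: how the
# estimated correlation between actual and self-assessed competency
# fluctuates with sample size even when the true relationship is fixed.
import numpy as np

rng = np.random.default_rng(0)
TRUE_R = 0.6  # assumed underlying "signal" strength for this toy example

def correlation_range(n, trials=200):
    """Smallest and largest correlation observed across repeated samples of size n."""
    estimates = []
    for _ in range(trials):
        actual = rng.normal(size=n)
        noise = rng.normal(size=n)
        self_assessed = TRUE_R * actual + np.sqrt(1 - TRUE_R**2) * noise
        estimates.append(np.corrcoef(actual, self_assessed)[0, 1])
    return min(estimates), max(estimates)

for n in (25, 100, 1000):
    low, high = correlation_range(n)
    print(f"n = {n:4d}: observed r ranges from about {low:.2f} to {high:.2f}")
```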

Understanding Graphical Depictions of Data

Some graphical conventions that have become almost standard in the self-assessment literature depict ordered patterns even when applied to random noise. These patterns invite researchers to interpret the order as produced by a self-assessment signal. Graphing nonsense data generated from random numbers in the same graphical formats reveals what pure randomness looks like when depicted in any given convention. Knowing the patterns of randomness provides the numeracy needed to understand self-assessment measurements.
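As a toy demonstration of that point (my own sketch, not a reproduction of the analysis in the Numeracy article mentioned below), one can generate completely unrelated random “performance” and “self-assessment” scores and then bin them by performance quartile, as many published graphs do; an ordered-looking pattern appears even though no signal exists.

```python
# Toy demonstration: purely random, unrelated scores, binned by
# performance quartile as many published graphs are, still produce an
# ordered-looking pattern with no real signal behind it.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
performance = rng.uniform(0, 100, n)    # random "test scores"
self_assessed = rng.uniform(0, 100, n)  # random, unrelated "confidence"

# Sort people into performance quartiles and average each group.
order = np.argsort(performance)
for i, quartile in enumerate(np.array_split(order, 4), start=1):
    print(f"Quartile {i}: mean performance = {performance[quartile].mean():5.1f}, "
          f"mean self-assessment = {self_assessed[quartile].mean():5.1f}")

# Mean self-assessment hovers near 50 in every quartile, so the lowest
# performers appear to "overestimate" and the highest to "underestimate"
# -- an artifact of binning random noise, not a metacognitive signal.
```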

Some obvious questions I anticipate follow: (1) How do I know whether my instruments are capturing mainly noise or mainly signal? (2) How can I tell when a database (either my own or one described in a peer-reviewed publication) is of sufficient size to be reproducible? (3) What are some alternatives to single global questions? (4) What kinds of graphs portray random noise as a legitimate self-assessment signal? (5) When I see a graph in a publication, how can I tell whether it is mainly noise or mainly signal? (6) What kinds of correlations are reasonable to expect between self-assessed competency and actual competency?

Are There Any Answers?

Getting answers to these meaty questions requires more than a short blog post, but some help is just a click or two away. This post directs readers to “Random Number Simulations Reveal How Random Noise Affects the Measurements and Graphical Portrayals of Self-Assessed Competency” (Numeracy, January 2016), with acknowledgments to my co-authors Christopher Cogan, Steven Fleisher, Eric Gaze, and Karl Wirth for their infinite patience with me on this project. Numeracy is an open-access journal, and you can download the paper for free. Readers will likely see the self-assessment literature in different ways after reading the article.