Why Experts Can't Explain Their Expertise
By Heather Morton
Senior Editor, MindEdge Learning
A few days ago, my student and I were considering an SAT problem that asked us to decide where to put the word “however”—at the end of the second sentence in a passage or at the beginning of the third sentence. My student thought it should go at the beginning of the third sentence. I was confident it should go at the end of the second, but I didn’t know why.
Evidently, I'm not alone. Research on people with significant expertise in areas arguably more important than grammar shows that experts often can't articulate what they're doing. According to a study by George Velmahos et al., surgeons who taught medical students tended to describe only what was visible in their actions and to omit key aspects of their decision-making process. It was as if they were watching themselves perform surgery and could explain only what they saw, not what they thought. And a study by Richard E. Clark et al. argues even more broadly that "experts" can't tell you 70% of what they do.
Generally speaking, researchers have identified two reasons for experts' inability to explain themselves. The first is "the curse of knowledge," which isn't really about expertise at all. Rather, it's about how we fail to appreciate the advantage our knowledge gives us over people who don't share it.
In 1990, Elizabeth Newton conducted a now-famous study in which she divided pairs of participants into "tappers" and "listeners." The tappers were asked to tap out the rhythm of well-known songs, such as "Happy Birthday"; the listeners had to guess the song. While the tappers predicted that the listeners would guess the correct song 50% of the time, the listeners in fact guessed correctly only 2.5% of the time. Newton's conclusion: We underestimate how much of an advantage our knowledge gives us over others.
The "curse of knowledge" has important implications in economics. Researchers have noted that people with better information often fail to take full advantage of it—overbidding for an item, say, even when they know it is worth less than the seller thinks. In learning science, the curse of knowledge means that teachers can lose sight of how much more they know than their students do.
But there’s a second reason for the inability of experts to explain their expertise, and it is specific to the idea of expertise itself. Because of the way we learn, much of what we know really well, like walking or driving, is inaccessible to our conscious thought.
Learning consciously, by studying procedures, is a laborious way to acquire knowledge. Once that knowledge is acquired, though, it's stored as chunks that can be performed automatically, without effort. The difference between conscious knowledge and automated knowledge is the difference between learning to drive and driving, or between learning to chop an onion and cooking.
If you've been cooking for some time, you're conscious of only a small part of what you're doing. You might add seasoning, chop, dice, add more water, remove the lid, and adjust the temperature without conscious awareness. Experts rely on these automated chunks of knowledge even more than intermediate practitioners or novices do—because having so many of those chunks is what makes them experts. I imagine we would all prefer a surgeon who has performed an operation so often that most of it is automatic, freeing their attention for whatever is new or requires judgment, over one who is consciously tracking every step.
Given that experts are relatively bad at explaining their own actions, instructional designers draw on a set of psychological techniques called cognitive task analysis (CTA) to uncover what experts are thinking as they perform a complex task. The techniques vary, but the basis of almost any CTA is walking through the task repeatedly, at increasing levels of detail, with subject matter experts (ideally more than one).
The first time through, the instructional designer gets the basic outline of the task from the subject matter expert (SME). After reviewing the SME's initial account, they identify decision points, cues, goals, and other needed information and, in a second interview, ask follow-up questions such as "What could you have done at this decision point?" or "Why did you choose this action?" The designers then go through the task with the SME as many times as needed, each time eliciting more detail and verifying that their understanding is accurate.
Gathering information is only the beginning of passing on the SME’s expertise to others. The designers then sort through all the data they’ve acquired to determine the best way to teach the process to learners.
Research shows that CTA makes a big difference in students' ability to acquire experts' knowledge. In the Velmahos study, surgical residents were divided into two groups. One was taught traditionally by expert surgeons. The other was taught using information elicited through CTA from the same expert surgeons who were teaching the control group. The residents who received the CTA-based instruction made 60% to 70% better decisions with patients.
Getting back to the grammar discussion I had with my SAT student: in that interaction, I was both the instructional designer and the SME. Nonetheless, I floundered for a reason why "however" should go at the end of the second sentence. I noticed that, logically, it made sense as a transition between the first and second sentences, so I went with that as the explanation. Of course, that's not how I knew the answer—but at least it sounded right.