Learners Need to Focus on Errors

Dec 9, 2016

Let’s move on to the excellent article It’s Not How Much; It’s How: Characteristics of Practice Behavior and Retention of Performance Skills, by Duke et al. (2009), which is another dive into analyzing what leads to good retention of learning in music education.  Just to be different, I’ll start with the conclusion, and then circle around to the study construction.

“The results showed that the strategies employed during practice were more determinative of performance quality at retention than was how much or how long the pianists practiced.”

Seventeen advanced piano performance students were given a 3-measure passage from a difficult concerto to learn. Each student followed the same practice protocol:

  • 2-min warmup of their choice
  • As much time as they needed with the passage, a metronome, a pencil, and a piano to learn the passage well and play it confidently at the target tempo.
  • A 24-hour break away from the music, with a promise not to practice it at all

The next day, the students had the same 2-minute warmup period (they were asked not to warm up with the practiced passage). Then they played straight through the 3 measures at the target tempo 15 times without stopping (these are the retention trials).

After recording numerical data about the practice sessions and retention trials, the researchers watched the recorded practice sessions to make detailed observations about the techniques each participant used to practice. Then they ranked the retention trials of the 17 participants, examining tone, character, and expressiveness of musical performance.

Here are the practice variables that were NOT related significantly to the retention trial rankings:

  • total time practiced
  • total number of times through the passage during practice
  • total number of correct and near-correct times through the passage

Here are the practice variables that WERE significantly related to the retention trial rankings (a lower rank number means a better performance, so these correlations say that more errors and a smaller share of correct run-throughs during practice went with worse retention):

  • number of complete incorrect performance trials (positive correlation)
  • percentage of all complete trials that were correct (negative correlation)
  • percentage of complete trials that were correct and near-correct (negative correlation)

Three of the participants were ranked as clearly superior to the other 14 students. Examining these three students’ practice sessions yielded a set of 8 strategies (see p. 317 of the article for the full list), a few of which are summarized here:

  • early in practicing, the playing was hands-together
  • when errors appeared, they were addressed immediately
  • the precise location and source of each error was identified accurately, rehearsed, and corrected
  • the tempo was varied
  • target passages were repeated until stabilized (no more errors)

While the other 14 players each employed some of the 8 strategies, only the top 3 used all of them. In other words, the top-performing practicers had very good metacognitive skills. They could accurately identify their mistakes, sometimes anticipating them in advance, and knew what actions to take to correct those mistakes and avoid them in the future.

One of my key takeaways from this article was a quote from the literature review, which I think is much more widely applicable to many subjects (not just music):

“making practice assignments in terms of time practiced instead of goals accomplished remains one of the most curious and stubbornly persistent traditions in music pedagogy”

In mathematics, we assign lists of problems, and students are expected to do all of them regardless of how quickly they reach mastery. I wonder whether we should instead give students a suggested list to practice from, along with goals for what they should learn, and tell them to practice until they are reasonably sure they have accomplished the goals. I might go so far as to ask them to do the assignment in pen, crossing out and correcting mistakes as they go, so they can see their progress towards mastery of the topic.

I’m afraid that our relentless focus on time or quantity practiced might be handicapping students’ metacognitive abilities. Do our students actually know when they have mastered a topic? Do they even remember the mistakes they made after erasing and overwriting them? I’m reminded of the Trevor Ragan video about Blocked and Random Practice that says that practice is going to be ugly. I think practice that produces learning IS going to be ugly, and somehow we need to help students be okay with this.

Challenge: Think about how to structure student out-of-class practice so that the focus is not on time or quantity, but on error analysis and thoughtful correction. Try to make accurately identifying mistakes and working towards mastery of the material the whole purpose of the assignment, rather than “checking off” the to-do items on a list.

Note: A weekly bite of learning and challenge goes out every week. If you’d like to have it delivered to your inbox, sign up at Weekly Teaching Challenge.

Reference:

Duke, R. A., Simmons, A. L., & Cash, C. D. (2009). It’s not how much; it’s how: Characteristics of practice behavior and retention of performance skills. Journal of Research in Music Education, 56(4), 310–321.


Can we Teach Students to Understand Math Tests?

Nov 22, 2009

A few weeks ago, I gave a test where the grades were less than stellar. Whenever this happens, I try to sit down and reflect on whether the poor test grades were a result of something I did differently in class, a poorly written test, or a result of poor studying habits. After careful reflection and analysis of my own, I was pretty sure that this was the result of lack of studying (a theory which was verified … later in this blog post).

I was dreading the task of passing back these tests and prepared myself for an onslaught of questions aimed at trying to discredit the test (or the teaching). Then I got one of those great last-minute ideas that come to you right before you walk in to face the students: maybe I should let them “pick apart” the test BEFORE they saw their own tests. The class in question is Math for Elementary Teachers (MathET), and I figured that a detailed test analysis would not be an inappropriate topic for us to spend class time on.

First, I made some blank copies of the test (enough for each group to have one). I also created a handout with every learning objective and assignment that I had given the students for each of the sections on the test (these are all available in their Blackboard shell, but I compiled them in a paper-based handout that was 3 pages single-spaced). Here is what one section looked like:

5.1 Integers
• Describe operations on signed numbers using number line models.
• Demonstrate operations on signed numbers using colored counter models.
• Explain why a negative times a negative is a positive.
• Add, subtract, multiply, and divide signed numbers (integers).
• Know the mathematical properties of integers (closure, identity, inverse, etc.)
• Complete #1, 3, 7, 9, 15, 17, 19, 21, 23, 25, 37
• Be able to model addition, subtraction, and multiplication of signed numbers using a number-line model
• Be able to model addition, multiplication, and division of signed numbers using a colored counter model (why not subtraction? because subtraction is really the addition of a negative – treat it so)
• Read the blog posts about why a negative times a negative is a positive and be able to paraphrase at least two arguments in your own words.
• Study for your Gateway on Signed Numbers (be able to add, subtract, multiply, or divide signed numbers)
• Play with Manipulative: NLVM Color Chips Addition
• Play with Manipulative: NLVM Circle Game

When we met in class, I counted the class into groups of 3 students each. Each group received a copy of the learning objectives & assignments (by section) and a blank copy of the test. All of these were un-stapled so that the group could share and divide up the pages as they wanted.

Task #1: Look at all the objectives and assigned tasks/problems. Determine where these objectives, tasks, and problems showed up on the exam. (20-30 min)

Objective: Make the connection between what I tell them they need to learn/do and what shows up on the test.

Task #2: What didn’t show up on the exam? We discussed why these objectives might have been left off (for example, maybe it was not something that I emphasized in class) (5 min)

Objective: Make the connection between what is likely to show up on an exam and what is not likely to show up (with the caveat that any of the learning objectives are really fair game).

Task #3: Make a chart that shows how the points for each test question were distributed among the sections covered on the exam. We then compared each group’s results with one another and with my own analysis of the test’s point distribution. (5-10 min)

Objective: Clearly see that it is necessary to study ALL sections, not just a couple of them.
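
Task #3’s tally is simple enough to sketch in code. Here is a minimal illustration using an invented test blueprint; these section names and point values are hypothetical, not the actual exam from this post:

```python
# Hypothetical blueprint (section, points per question) for a 50-point test.
questions = [
    ("5.1 Integers", 8), ("5.1 Integers", 6),
    ("5.2 Fractions", 10), ("5.2 Fractions", 6), ("5.2 Fractions", 4),
    ("5.3 Decimals", 8), ("5.3 Decimals", 8),
]

# Tally the points each section contributes to the whole exam.
points_by_section = {}
for section, pts in questions:
    points_by_section[section] = points_by_section.get(section, 0) + pts

total = sum(points_by_section.values())
for section, pts in sorted(points_by_section.items()):
    print(f"{section}: {pts} pts ({100 * pts / total:.0f}%)")
```

Seeing that every section carries a substantial share of the points is exactly what drives home the “study ALL sections” objective.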

Task #4: I passed back the individual tests. Each student was given three questions to answer as they looked through their tests:

  1. Where were the gaps in your knowledge?
  2. What mistakes should you have caught before turning in your test? (read directions more carefully, do all the problems, etc.)
  3. What can you do to better prepare for the NEXT test?

I collected these, made copies, and passed them back to the students the next class. (10 minutes)

Objective: Take responsibility for your own studying.

It was here that students surprised me by being honest on Question #3. Most of them confessed that they had not studied at all, but now realized that they needed to start studying. I cannot help but wonder whether I would have gotten the same result if I had simply passed back the tests with no analysis.

Task #5: At the same time the students were looking over their own exams, I passed around one more blank copy of the test and asked students to write their score for each problem (no names) on that problem so that we could see what the score distributions looked like. When this was done, I placed the pages on the document camera one by one so that they could see the scores problem by problem.

Objective: To show the students that for many questions, students either get the question almost completely right, or completely wrong (you know it or you don’t).
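
The “almost completely right or completely wrong” pattern is easy to see in a quick histogram. Here is a sketch using invented scores for one 10-point problem (made-up numbers, not real class data):

```python
from collections import Counter

# Hypothetical scores on one 10-point problem for a class of 15 students;
# illustrates the "you know it or you don't" bimodal pattern from Task #5.
scores = [10, 10, 9, 10, 0, 1, 10, 0, 10, 9, 2, 0, 10, 1, 10]

histogram = Counter(scores)
for score in sorted(histogram, reverse=True):
    print(f"{score:>2}: {'#' * histogram[score]}")  # text-mode bar chart
```

With numbers like these, the bars cluster at the top and bottom of the scale with almost nothing in the middle, which is the pattern the document-camera exercise makes visible.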

Task #6: We had already covered the first section of the next unit, so I had the students begin a set of “How to start the problem” flashcards. On the front of the flash card, they wrote a “test question” for the new unit. On the back of the card, they wrote some tips for starting the problem and details they might otherwise forget.

Objective: To begin to see tests from the perspective of a test-writer instead of a test-taker.

Take-home assignment: I told each student that they must come to the next test with at least 5 flash cards per section (35+ flash cards). I suggested that a great way to study would be to swap cards with each other and practice with someone else’s questions. I checked to see if they carried through (although I assigned no consequences if they didn’t). All but two carried through. Anyone want to guess how those two fared?

Results: I just finished grading the latest test (signed numbers, fractions, and decimals – not easy topics), and the test results were almost all A’s and B’s (instead of C’s and D’s on the previous test). Most students left the test with a smile on their face, and several finally got the “A” they had been trying for all semester.

Three questions for you, my busy and wise readers:
1. Will the students persist in better study habits now? For example, will they study like this for the next exam?
2. Would this detailed test analysis have had the same effect if the class hadn’t just crashed & burned on a test?  (i.e. is it necessary to “fail” first)
3. Was this a good use of class time? Would it be good use of class time in a different math course, like Calculus?
