Learning at Scale Slides from ICTCM

Mar 11, 2017 by

Learning at Scale: Using Research To Improve Learning Practices and Technology for Teaching Math

In the last 5 years, there has been a rise in what we might call “large-scale digital learning experiments.”  These take the form of centralized courses, vendor-created courseware, online homework systems, MOOCs, and free-range learning platforms. If we mine the research, successes, and failures coming out of these experiments, what can we discover about designing better digital learning experiences and technology for the learning of mathematics?

 


Clickety Click Click: Awful Measures for Learning

Dec 19, 2016 by

I feel a little inspired by Sam Ford’s post The Year We Talk About Our Awful Metrics. Ford writes about the need for change in metrics of online media publications, but we could just as easily be discussing the metrics of learning management systems, ed-tech vendor platforms, and institutional analytics.

Ford argues that we need to “get serious” about better forms of measurement in 2017. As long as we track metrics that carry little meaning, we aren’t really improving learning.

Let me give you a few examples to illustrate the similar problems in education.

Page Clicks

As in, how many pages of the ebook has the student accessed? Because the student must read every page they access, right? And they don’t just scroll through pages to see roughly what the book has in it? Realistically, we all know that students skim and scroll, but that doesn’t stop us from creating blingy dashboards to display our metric wares.

Consider the following scenarios.

Scenario 1: Student A has clicked on 55 pages whereas student B has only clicked on 10 pages. This means:

a. Student A has read more than Student B. Student A is a more engaged student.

b. Student B was reading deeply and Student A was skimming.

c. Student A reads faster than student B.

d. Student A read more online. Student B borrowed a book from a friend and read more on paper.

e. None of the above. Who knows what it really means.

Scenario 2: Student A has clicked on 55 pages whereas student B has only clicked on 10 pages. Both students spent 2 hours in the eReader platform.

a. Student A has read more than Student B. Student A is a more engaged student.

b. Student B was reading deeply and Student A was skimming.

c. Student A reads faster than student B.

d. Student A read more online. Student B borrowed a book from a friend and read more on paper.

e. None of the above. Who knows what it really means.

In either case, how much do we really know about how much Students A and B have learned? Nothing. We know absolutely nothing. These metrics tell us nothing about what either student is capable of recalling or retrieving from memory. There is nothing to help us see whether the student can make sensible decisions related to the topics, and nothing to show whether concepts can be transferred to new situations. Page clicks are a bad metric. All they tell me is that students log in more every Sunday night than on any other night (and that metric has been the same for a decade now).

But wait … there are more metrics …

Attendance

We can measure attendance – whether it be logging in to the LMS or physically showing up in the classroom. Surely this is a valuable measure of learning?

Again, no, it’s not a measure of learning. At best, showing up is a necessary-but-not-sufficient condition for learning. Yes, we do need students to show up in some way in order to learn. In very active face-to-face classrooms that engage all students in learning activities, I might go so far as to say that showing up is a good proxy for learning, but this is still the exception rather than the norm. And even if the classroom is active, learning is more effective with certain kinds of activities: those involving interaction, those involving varied practice, and those where students have to learn to recognize and remedy their own errors.

Attendance, by itself, does not measure learning.

Time to Complete

At organizations where learning is assessed directly (competency-based education programs and MOOCs, for example), there is often a metric around the “time to complete” a course. This is a particularly dangerous metric because of its extreme variability. Again, let’s look at two scenarios.

Scenario 1: Course 1 is a 4-credit course that takes (on average) 45 days to complete. Course 2 is a 4-credit course that takes (on average) 30 days to complete.

a. Course 1 is poorly designed and Course 2 is well-designed.

b. Course 1 is harder than Course 2.

c. Course 1 and Course 2 seem about equal in terms of difficulty and design.

d. None of the above.

Scenario 2: Course 1 is a 4-credit course that takes (on average) 45 days to complete and requires students to turn in 4 papers. Course 2 is a 4-credit course that takes (on average) 30 days to complete and requires students to pass 2 exams.

a. Course 1 is poorly designed and Course 2 is well-designed.

b. Course 1 is harder than Course 2.

c. Course 1 and Course 2 seem about equal in terms of difficulty and design.

d. Students procrastinate more on writing papers than on taking exams.

e. None of the above.

In either case, what does the “time to complete” actually tell us about the quality of learning in the courses? If we were comparing two Calculus I courses, and they were taught with different platforms, equivalent assessment, and the same teacher, I might start to believe that time-to-complete was correlated with design, learning quality, or difficulty. But in most cases, comparing courses via this metric is like comparing apples to monkeys. It’s even worse if that data doesn’t have any kind of context around it.

Number of Clicks per Page

This is one of my favorites. I think you’ll see the problem as soon as you read the scenario.

Scenario 1: Page A got 400 clicks during the semester. Page B got only 29 clicks.

a. Page A has more valuable resources than Page B.

b. Students are accidentally wandering to Page A.

c. Page A is confusing so students visit it to reread it a lot.

d. Page B was only necessary for those students who did not understand a prerequisite concept.

e. Page A is more central in the structure of the course. Students click through it a lot on their way to somewhere else.

Scenario 2: Page A contains a video on finding the derivative using the Chain Rule and got 400 clicks during the semester. Page B contains a narrative on finding the derivative using the Power Rule and got only 29 clicks during the semester.

a. Page A has more valuable resources than Page B.

b. Page A is a more difficult topic than Page B, so students revisit it a lot.

c. The video on Page A is confusing so students watch it on multiple occasions trying to figure it out.

d. Page B was only necessary for those students who did not understand a prerequisite concept.

e. Page A is more central in the structure of the course. Students click through it a lot on their way to somewhere else.

Number of clicks per page is meaningless unless there is a meaningful relationship between the pages being compared. For example, if we are looking at 5 pages that each contain one resource for learning how to find the derivative using the Chain Rule, the comparison of data might be interesting. But even in this case, I would want to know the order in which the links appear to students. And just because a student clicks on a page, it doesn’t mean they learned anything from the page. They might visit the page, decide they dislike the resource, and go find a better one.

Completion of Online Assignments

Surely we can use completion of assignments as a meaningful metric of learning? Surely?

Well, that depends. What do students access when they are working on assignments? Can they use any resource available online? Do they answer questions immediately after reading the corresponding section of the book? Are they really demonstrating learning? Or are they demonstrating the ability to find an answer? Maybe we are just measuring good finding abilities.

Many online homework platforms (no need to name names, it’s like all of them) pride themselves on delivering just-in-time help to students as they struggle (watch this video, look at this slide deck, try another problem just like this one). I think this is a questionable practice. It is important to target the moment of impasse, but too much help means the learning might not stick. Impasse is important because it produces struggle and a bit of frustration, both of which can improve learning outcomes. Perfect delivery of answers at just the right moment might not have strong learning impact because the struggle stops at that moment. I don’t think we know enough about this yet to say one way or another (correct me if you think I’m missing some important research).

Regardless, even completion of assignments is a questionable measure of learning. It’s just a measure of the student’s ability to meet a deadline and complete a task given an infinite number of resources.

Where do we go from here?

Ford hopes that the ramifications of 2016 will foster better journalism in 2017, journalism that people read, watch, or listen to more intentionally, maybe even (shock!) remembering a story, and the publisher it came from, the next day.

I hope that education can focus more on (shock!) finding meaningful ways to measure whether a student actually learned, not just whether they clicked or checked off tasks. Reflecting on my own online learning experiences in the last year, I am worried. I’m worried we have fallen so deep down the “data-driven decisions” rabbit hole that we are no longer paying attention to the qualitative data that orbits the metrics. Good instructors keep their finger on the pulse of the learners, ever adjusting for those qualitative factors. But as the data is rolled up to departments, institutions, and vendors, where does that qualitative picture go?

I will close with a few goals for institutions, instructors, and vendors for 2017:

  1. Demand better learning metrics from ed-tech vendors. What that measure should be depends on the platform. Start asking for what you really want.
  2. Build more integrations that pass quality learning data from the ed-tech vendor to the institution. Sometimes the platform does have better metrics, but the institution can’t access them.
  3. Create metrics that measure learning mastery over time in your own courses. This means choosing a few crucial concepts and probing them repeatedly throughout the learning experience to ensure the concept is sticking.

These are all concepts I hope to continue exploring with more research and more detail over the next year. If you want to join me on that journey, consider subscribing here.



The Importance of Findability for Learners

Dec 16, 2016 by

How do you feel when you go to find information on a website and you just can’t find it? This happens to me all the time when I want to find out what some new ed-tech wonder product does: I visit the website and can’t find any screenshots, descriptions, or videos of the product in action. I find it incredibly frustrating, and the story generally ends with me giving up on even signing up for a trial. The same thing happens to students when they go to find information and it is buried in a nonsensical place.

As everyone finishes a semester and prepares documents and course shells for the next, it seems a good time to share this article, The Impact of Findability on Student Motivation, Self-Efficacy, and Perceptions of Online Course Quality. While the research targeted online courses, many face-to-face courses are now accompanied by a myriad of resources that live in an LMS course shell, and I think there are implications for findability in course packets and syllabi as well.

For this article, one of the researchers, Dr. David Robins, User Experience Design professor at Kent State University, has presented the study in a webinar format available on YouTube. Their research question: What happens when students have trouble finding components of a course?


 

The researchers took two courses that were well-designed and passed Quality Matters standards, and then “broke” them in terms of findability. The broken courses still technically passed QM standards, but the components were harder to find. Students were asked to perform scenario-based tasks in the online courses.

Sidenote: If you’ve never seen a standard software usability test, here’s a nice “findability fail reel” for a mobile website with questionable usability.

I don’t think anyone will be surprised to find that poor findability correlated with decreased self-efficacy and decreased motivation. However, there was an interesting set of actionable findings regarding navigation and visual design that came from researchers watching participants attempt to navigate the courses. Consider looking for these types of things in your course or syllabus and then improving them:

  • navigation items that are not grouped into logical categories
  • poor labeling (e.g. using the file name instead of a true description)
  • poor categorization (e.g. placing an exam review under “Course Documents” instead of in the section labeled “Prepare for the Exam”)
  • deeply buried content (e.g. syllabus is buried four levels deep)

This article also got me thinking about whether the most important items of a syllabus might be presented in a more 21st-century-friendly manner. There is a whole rabbit hole of syllabi created as infographics on the Interwebs.

Probably your university is still going to want an old-fashioned text version, but maybe students could use more visual infographics for what I would consider the top-5 syllabus items of interest to students:

  • How is this course graded?
  • What are tests like?
  • Are there any projects or papers?
  • Do I have to attend class?
  • Is there group work?

As well as the additional syllabus items that instructors want students to know:

  • What are you going to learn?
  • Why should you care about what you are going to learn in this class?
  • How strict is this instructor on deadlines?
  • What is considered good/bad behavior in this class?
  • What are the instructor’s pet peeves? (come on, that’s a real thing and whole chapters of your syllabi get devoted to these issues)

Challenge: Take a fresh look at your syllabus and/or course shell. Assume that you do have findability issues and look for them. If you don’t think you have any, hand the questions above to a friend or family member and see how long it takes them to find the key components. Revise and improve the findability of important components to lower student frustration for the next semester.

Note: A bite of learning design and a teaching challenge go out every week. If you’d like to have it delivered to your inbox, sign up at Weekly Teaching Challenge.

Reference:

Simunich, B., Robins, D. B., & Kelly, V. (2015). The Impact of Findability on Student Motivation, Self-Efficacy, and Perceptions of Online Course Quality. American Journal of Distance Education, 29(3), 174-185.



Why Random Practice is Important

Nov 28, 2016 by

As educators, we often find ourselves in the uncomfortable position of trying to explain why students don’t seem to have learned what we know we’ve taught them. Economics instructors ask math instructors, “How come these students who have taken College Algebra still don’t understand slope?” Science teachers ask English instructors, “How come students still don’t understand basic grammar rules when they write in my science class?” The key is to understand that students aren’t learning skills in a way that helps them transfer those skills to new situations – the learners have compartmentalized each skill to a particular domain, and the skill never gains enough escape velocity because of a lack of random or varied practice.

In sports, there is some eloquent research showing that random practice leads to more transferable and longer-lasting skills than blocked practice. It’s worth taking a short dive into this research area.

[Figure: Shea & Morgan (1979) results comparing blocked and random practice]

The gains shown in blocked practice erode when we look at longer timelines. Random practice provides short-term gains AND holds up in the long-term.

Watch the 16-min video “Motor Learning: Blocked vs Random Practice” by Trevor Ragan. He does a lovely job of walking through some of the motor learning research that very eloquently shows that “random practice” is more effective for transference and long-term retention than “blocked practice.” This is basically the same concept as massed vs varied practice discussed in cognitive science.

If you’re interested in reading the research that Ragan touches on in the video, you can find some of it in these papers:

Shea, J. B., & Morgan, R. L. (1979). Contextual interference effects on the acquisition, retention, and transfer of a motor skill. Journal of Experimental Psychology: Human Learning and Memory, 5(2), 179.

Hall, K. G., Domingues, D. A., & Cavazos, R. (1994). Contextual interference effects with skilled baseball players. Perceptual and Motor Skills, 78(3), 835-841.

In education we are really good at having students practice the “Do” of the “Read, Play, Do” process that Ragan describes in the video. “Do” skills are orderly and easy to monitor and assess. How can we shift to the messier strategy of having students practice all three parts of the process? For students you teach, what is the equivalent to practicing basketball shots from a variety of distances with different blockers around them?

Weekly Teaching Challenge: Consider all the topics you teach next week and design one new activity that focuses on “random” practice instead of “blocked” practice.

If you’d like the weekly teaching challenge delivered to your inbox each Friday, sign up to receive the Challenge here.


AMATYC Keynote Notes: Durable Learning

Nov 22, 2016 by

In the 2016 AMATYC keynote, I covered three main themes:

  1. Interaction & Impasse
  2. Challenge & Curiosity
  3. Durable Learning (this post)

[Figure: three triangles surrounding a central triangle labeled with the letters C, I, and D]

Here are references and resources for Durable Learning:

What is durable learning? The learning design practices that make learning “stick” over the long term. These include (but are not limited to) spaced repetition, knowledge retrieval, interleaving, and varied practice.

A really good book on the subject of durable learning is “Make It Stick” by Brown, Roediger, and McDaniel.

We also took a dive into some cognitive science, and again there is a fantastic, easy-to-read book I recommend: “Cognitive Development and Learning in Instructional Contexts” by James Byrnes.

We explored the idea of a schema – a mental representation of what all instances of something have in common (the plural is schemata). In particular, schemata help you categorize your experiences, remember what you are experiencing, and comprehend what you are experiencing, and they are important in developing the ability to problem solve.

[Figure: visual representation of distribution (with no numbers), shown as a set of arcs]

A schema for distribution

When confronted with a new situation, learners try to run a schema they already have. This leads to all sorts of interesting misconceptions.

[Figure: the distribution schema misapplied in a situation where it does not belong]

By engaging the learner in varied practice, we hope to modify the existing schema.

[Figure: representation of distribution (with no numbers), shown as arcs with plus-minus signs holding the spaces]

A better mental schema for distribution because the spaces are now held by plus-minus signs
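Since the slide graphics aren’t reproduced here, a small worked example of my own (an illustration, not the exact example from the keynote) may help. The first line shows the distribution schema with the spaces held by plus-minus signs; the second line shows the kind of misconception that appears when learners run that schema in places where it does not apply:

\[ a(b \pm c) = ab \pm ac \]

\[ (a + b)^2 \neq a^2 + b^2 \qquad\qquad \sqrt{a + b} \neq \sqrt{a} + \sqrt{b} \]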

To help learners refine their schemata, we can abandon massed practice for varied practice. In massed practice, the learner does nothing but activate the exact same schema over and over. In varied practice, the learner has to distinguish between different schemata in order to successfully complete the practice.

[Figure: massed practice compared with varied practice]
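As an illustration (my own example, not one taken from the slides), a massed practice set activates the same schema on every problem, while a varied practice set over the same material forces the learner to first decide which schema applies:

\[ \text{Massed:}\quad \frac{d}{dx}\sin(2x), \quad \frac{d}{dx}\sin(5x), \quad \frac{d}{dx}\cos(3x), \quad \frac{d}{dx}\tan(4x) \]

\[ \text{Varied:}\quad \frac{d}{dx}\sin(2x), \quad \frac{d}{dx}\left(x^{2}\sin x\right), \quad \frac{d}{dx}\,x^{5}, \quad \frac{d}{dx}\,\frac{\sin x}{x^{2}} \]

Every problem in the massed set is a Chain Rule problem; in the varied set, the learner has to recognize whether the Chain Rule, Product Rule, Power Rule, or Quotient Rule applies before doing anything else.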

There is a lengthier talk I gave on cognitive science in the context of algebra called “Algebra is Weightlifting for the Brain” (not the world’s best recording, but you’ll hear more about the ideas of Information Processing Theory and see plenty of math examples).

We didn’t quite get to interleaving in the talk, but we will cover that during the teaching challenge.

What is the Teaching Challenge?

For the next year, I will send you a teaching challenge every week to help us, together, change the way students learn and engage. The challenge will be delivered each week by email and will include:

  1. Something to learn or ponder
  2. Best practices shared by participants in previous challenges
  3. A new challenge

Sign up for the teaching challenge here. All are welcome.


AMATYC Keynote Notes: Interaction and Impasse

Nov 19, 2016 by

Thursday I had the honor of providing the opening keynote for the AMATYC Conference in Denver, “Learning Math is Not a Spectator Sport.” I expect the video of the talk will be available to share next week, and rather than provide the slides (124 mostly stick-figure drawings), I’ll point you to some resources that will likely give you the information you’re looking for between now and when the full presentation becomes available.

[Photo: selfie with a room full of participants in the background]

Keynote Selfie

We covered three main themes:

  1. Interaction & Impasse (this post)
  2. Challenge & Curiosity
  3. Durable Learning

I’ll provide resources for each of these categories, starting with Interaction and Impasse, in this post.

Interaction and Impasse

 
