Thursday, February 19, 2015

Constant Assessment: Making Data Useful

Assessment isn't about assigning a number that records how a kid performs on one particular day. Assessment is about figuring out what a kid does and doesn't understand (so far).

Actually, lots of assessment is about that first thing. Educators spend lots of time, energy, and money assigning numbers to kids. Here's a thought: we shouldn't spend most of our assessment energy on the pedagogical equivalent of autopsies. We should spend more of it developing, performing, and using formative assessments that help teachers teach and students learn.

In an earlier post, I said that we should assess more and test less. Here are a couple of folks with ideas along those lines.

Bernard Bull (on Twitter @bdean1000) has Educational Publishers & Content Providers: The Future Is About Analytics, Feedback & Assessment, which puts the idea of a constant stream of assessment data into the proper context. It isn't about testing. It isn't about bypassing teachers. It's about using data to help students learn and teachers teach more effectively.
"... each action individually and collectively becomes a new data point that can be mined and analyzed for important insights."
His diagram showing how interactive content could be an effective part of a learning system is a call to action.

There's also Kristen DiCerbo's (on Twitter @kristendicerbo) work, including these articles: How More Data Helps Us; Why an Assessment Renaissance Means Fewer Tests; All Fun & Games? Understanding Learner Outcomes Through Educational Games; and many others. I found Dr. DiCerbo's research when I searched Google for "invisible assessment," which is the same as what I have called "sneaky assessment." Whether it's "sneaky" or "invisible," this sort of assessment is where we should be headed.

eLearning content developers (such as myself) need to do a better job of making this happen, and teachers need to a) demand data-producing interactive content, and b) commit to really using it to help students learn more effectively.
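To make that concrete, here's a minimal sketch of what "data-producing interactive content" could look like under the hood: every learner action gets recorded as a structured event that can be stored and mined later. The field names and the record_event function are my own invention (loosely inspired by xAPI-style actor/verb/object statements), not any particular platform's API.

```python
import json
import time

def record_event(stream, student_id, verb, activity, detail=None):
    """Append one learner action to an event stream (a plain list, for this demo)."""
    stream.append({
        "timestamp": time.time(),
        "student": student_id,
        "verb": verb,            # e.g., "attempted", "answered", "requested-hint"
        "activity": activity,    # e.g., "fractions-item-3"
        "detail": detail or {},  # e.g., {"correct": False, "response": "3/5"}
    })

events = []
record_event(events, "s-001", "attempted", "fractions-item-3")
record_event(events, "s-001", "requested-hint", "fractions-item-3")
record_event(events, "s-001", "answered", "fractions-item-3",
             {"correct": True, "seconds": 42})

# Each action, individually and collectively, becomes a data point
# that can be mined and analyzed for insight.
print(json.dumps(events, indent=2))
```

The point isn't this particular schema; it's that the interaction itself produces the assessment data, with no separate test required.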

Monday, February 16, 2015

Assess More; Test Less

Think about driving from Paris through pastoral northern France to Arromanches in Normandy.

If someone asked you how the drive was, would you simply report your successful, on-time arrival? Probably not. You'd mention the lovely scenery, how the trip made you feel, or some other salient detail.

While you're on the drive, would you care only about your car's speed? Probably not. You'd also keep an eye on your car's health (e.g., fuel level, engine temperature, and RPM). Ignore those gauges, and the car could leave you stranded by the side of the road.

For years, online curriculum developers have focused on objective-driven courses. We measure how much content students have mastered, and we keep them aimed at consuming more content so they can master still more. We've been focused on the destination.

We need to spend more time thinking about the journey, and to do that, we need to make better use of modern technology and data. We need to stop testing so much, and start assessing more. Much more. Actually, we need to assess all the time. Let's group the kinds of things we should assess into two general categories:

Type 1 States: Content Knowledge and Readiness
These assessments are tied to specific objectives with a goal of mastery. We do a decent job of assessing content knowledge, but I don't think many people do a robust enough job of assessing readiness. With tools like knowledge spaces, we should be able to figure out exactly what skills and knowledge a student needs in order to be ready to learn any topic. Why not figure that out ahead of time?
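To illustrate the idea (not any particular product), here's a toy sketch: model prerequisites as a graph, and call a topic "ready to learn" once all of its prerequisites are mastered. The topics and prerequisite map are invented for illustration; a real knowledge space is far richer than a simple prerequisite graph.

```python
# Toy readiness check in the spirit of knowledge space theory:
# a student is ready to learn any topic whose prerequisites are all mastered.
PREREQS = {
    "counting": set(),
    "addition": {"counting"},
    "subtraction": {"counting"},
    "multiplication": {"addition"},
    "division": {"multiplication", "subtraction"},
}

def ready_to_learn(mastered):
    """Topics not yet mastered whose prerequisites are all mastered
    (roughly, the 'outer fringe' of the student's knowledge state)."""
    return {topic for topic, reqs in PREREQS.items()
            if topic not in mastered and reqs <= mastered}

student_state = {"counting", "addition"}
print(ready_to_learn(student_state))  # {'subtraction', 'multiplication'}
```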

Type 2 States: Metacognition, Motivation, and Perseverance
As a student works through a lesson, we should find ways to measure how they are reacting to it. These three aspects rise and fall over a lesson, a unit, and a whole course. You don't master them any more than you master the fuel in your car's tank, but they are critical to engaging with content and truly mastering any field. Games and problem-based learning can be powerful tools for helping students learn skills, but perhaps they can be even more helpful for developing and measuring students' motivation and perseverance.
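Measuring these states is an open problem, but here's one hedged sketch of what a continuous signal might look like: treat "retrying after a failed attempt" as a crude perseverance proxy, computed from the same kind of event stream sketched in the previous post. Both the event format and the assumption that retry rate reflects perseverance are mine, not established measurement practice.

```python
# Crude perseverance proxy: of all failed attempts, what fraction
# were followed by another attempt at the same activity?
def retry_after_failure_rate(events):
    failures = retries = 0
    for i, ev in enumerate(events):
        if ev["verb"] == "answered" and not ev["detail"].get("correct", False):
            failures += 1
            if any(later["activity"] == ev["activity"] and
                   later["verb"] in ("attempted", "answered")
                   for later in events[i + 1:]):
                retries += 1
    return retries / failures if failures else None

# Events trimmed to only the fields this function reads.
sample = [
    {"verb": "answered",  "activity": "item-1", "detail": {"correct": False}},
    {"verb": "attempted", "activity": "item-1", "detail": {}},
    {"verb": "answered",  "activity": "item-2", "detail": {"correct": False}},
    {"verb": "attempted", "activity": "item-3", "detail": {}},
]
print(retry_after_failure_rate(sample))  # 0.5: retried item-1, gave up on item-2
```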

We need to measure Type 1 States frequently via truly formative assessment.

We need to measure Type 2 States continuously through every activity of a lesson.

Once we can measure both types of states well, we'll be able to help students learn and teachers teach more effectively.

Thursday, February 12, 2015

Training: Measuring Effectiveness with Kirkpatrick's Levels

I have been a devotee of Kirkpatrick's four levels (he called them steps, but they are often called levels) of assessing the effectiveness of training since I read about them in the late '90s. I've used Kirkpatrick's levels as a framework for training evaluation systems on several projects at various companies.

Don Clark's Performance Juxtaposition site has Kirkpatrick's Four Level Evaluation Model, which describes the model quite well. Kirkpatrick's original article can be read here, and Kirkpatrick Partners (the official site) summarizes the levels this way (my comments about how each level is assessed in parens):
  • Level 1: Reaction - To what degree participants react favorably to the training (smile sheets)
  • Level 2: Learning - To what degree participants acquire the intended knowledge, skills, attitudes, confidence and commitment based on their participation in a training event (end-of-course test/project)
  • Level 3: Behavior - To what degree participants apply what they learned during training when they are back on the job (follow-up survey/interviews about how trainee transferred training to the job)
  • Level 4: Results - To what degree targeted outcomes occur as a result of the training event and subsequent reinforcement (follow-up analysis of how the trainee's organization benefited from the training)
We trainers like to see all top marks on our smile sheets (the surveys that assess Level 1), but most of us realize that only means the attendees liked us enough to not hurt our feelings. High scores on smile sheets can indicate good food, long breaks, or a great sense of humor.

Trainers also like to see students actually learn something. I used to train systems integrators and programmers, so I really liked to see them create a working web app. That let me assess what they learned (Level 2). High scores on an end-of-training assessment can indicate either an easy test or real learning, but even in the latter case they don't guarantee that the learning helped trainees in their jobs.

What we really need to assess is how well trainees transfer knowledge from training to their jobs (Level 3), and ultimately how that transfer translates into better results for their organization (Level 4). Following up with trainees and their supervisors isn't easy, but every organization should commit to it for critical training efforts. This sort of analysis benefits the organization receiving the training (did we get what we needed?) as well as the training organization (are we really being effective?).
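As a sketch of what committing to all four levels might look like in practice, here's a minimal record structure that refuses to call an evaluation complete until the Level 3 and Level 4 follow-up has actually happened. The field names and scales are hypothetical, not part of Kirkpatrick's model itself.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TrainingEvaluation:
    course: str
    l1_reaction: Optional[float] = None   # average smile-sheet score, 1-5
    l2_learning: Optional[float] = None   # end-of-course assessment, 0-100
    l3_behavior: list = field(default_factory=list)  # follow-up interview notes
    l4_results: list = field(default_factory=list)   # org-level outcome notes

    def complete(self) -> bool:
        """True only once the hard part, Level 3/4 follow-up, has happened."""
        return bool(self.l3_behavior and self.l4_results)

ev = TrainingEvaluation("Intro to Web Apps", l1_reaction=4.6, l2_learning=88.0)
print(ev.complete())  # False: high marks alone don't prove the training worked
ev.l3_behavior.append("Supervisor: trainee now builds deployment scripts solo")
ev.l4_results.append("Team release cycle shortened by two days")
print(ev.complete())  # True
```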

BTW: Clark's Big Dog, Little Dog blog and his Twitter feed @iOPT are worthwhile reads for anyone in the training field.

Monday, February 9, 2015

What an Instructional Designer Can Learn from StudyBass

My wife bought me an electric bass for xmas. I have never played a musical instrument, so this represents a big new challenge.

When I wanted to figure out some of the basics of how to play, I wandered around checking out YouTube videos and sites with music to play, but eventually I found StudyBass.com. It works for me, and I think that's because it reflects pedagogy I have used and valued for years.

To be honest, many teachers and content developers could learn from StudyBass. Here is the StudyBass instructional model in a nutshell:
  1. Theory is presented, but it is connected to skills and to well-known songs from various genres, so you can do what the theory says and hear how it applies to familiar songs. Hearing the roots-and-fifths pattern in Under Pressure by Queen is a pretty powerful demonstration of how theory and practice can lead to something great.
  2. Each video of how to perform a skill is an animation that shows the finger positions very clearly. This is more helpful than watching someone do it in a live-action video, because human instructors usually go too quickly and their hands block the view of what their fingers are doing.
  3. Each practice exercise is supported by various modes. You can a) hear it, b) read it in standard sheet-music notation, c) read it in bass tab (which is very simple), and d) see it in alpha tab (which is like a bridge between the over-simplified bass tab and pure sheet music). All these modes allow for multiple learning styles, but they also let a student gradually decrease the scaffolding: you can start with the alpha tab (using bass tab as a fall-back when you're stuck), then move to only the sheet music or only the audio.
  4. Exercises progress in a logical order for each lesson. Just about every lesson has exercises, and they are great.
The only thing I would do differently is to start each lesson with a YouTube video of a great song that makes use of the upcoming concept. Motivating with the goal can be powerful.

What StudyBass has helped me appreciate even more than before:
  • Connecting theory, practice, and real-world applications is a great way to motivate and build deep understanding.
  • Students benefit from guided practice that includes scaffolding they can gradually reduce.
  • When it comes to visuals, sometimes less is more.
  • Small chunks of instruction can lead to larger chunks of guided practice and then unguided practice.
Now to get back to learning chord roots and how they connect to A Hard Day's Night by The Beatles.

Thursday, February 5, 2015

Angela Lee Duckworth: It's All About Grit

Reading articles about the misuse of the SAT reminded me of Angela Lee Duckworth's TED talk, The Key to Success? Grit:
"Some of my strongest performers did not have stratospheric I.Q. scores. Some of my smartest kids weren't doing so well."
Dr. Duckworth studied grit at West Point, in K-12 schools, at spelling bees, and at private companies. The main predictor of success in these diverse contexts was grit. There are articles all over the place about Dr. Duckworth (a MacArthur Fellow); here is a link to an article at National Geographic: Grit Trumps Talent and IQ. One of her key points is that we need to figure out how to measure and instill grit. We understand how to measure IQ, but we don't understand how to measure grit, which is almost certainly more important.

I hope Dr. Duckworth and others like her figure out some good ways to measure and develop grit. We need to learn how to help every student be more gritty.

Monday, February 2, 2015

The Ivy Testocracy and the Unfortunate Misuse of the SAT

Salon brings us an article by Lani Guinier, Ivy League’s meritocracy lie: How Harvard and Yale cook the books for the 1 percent, which is an excerpt from her book The Tyranny of the Meritocracy: Democratizing Higher Education in America.

First of all, let me be clear: I have nothing against elite schools, but ....

Ahh, for the days when elite schools were filled with people who inherited their privilege. Back then, most students at elite schools knew they were fortunate to have been born into "good" families, while those who didn't get in could rest easy that they were simply less fortunate.

Now, those at elite schools believe they deserve their good fortune while those who don't get in think they are less deserving. It's really too bad on both sides. I'm not saying that people at elite schools aren't smart. I'm simply agreeing with Dr. Guinier that a student's socioeconomic status is an excellent predictor of their aptitude as measured by the SAT.

Guinier's article reminded me of something I read in the Atlantic Monthly back in the mid-'90s, Nicholas Lemann's The Great Sorting. One quote from Lemann's article stands out to me:
"Broad-scale testing in America was intended to be two things at once: a system for selecting an elite and a way of providing universal opportunity.... An irony of the American meritocracy, now that it has been in operation long enough to produce not just future leaders but present ones, is that the leaders chosen by a mechanism designed to be perfectly open and fair are widely regarded as a pampered, out-of-touch, undemocratic in-group...."
Also, SAT scores are not a particularly good predictor of later success. As Guinier says,
"... college admissions officers at elite universities today ... when asked what predicts life success—[say] that, above a minimum level of competence, “initiative” or “hunger” are the best predictors."
Organizations such as schools and businesses are not looking for people with high IQs or great SAT scores. What they want are people who are driven. People with grit. The problem is that we don't know how to measure "grittiness" well. We're great at measuring IQ and "aptitude," but those traits are much less helpful.

Much of this criticism of the SAT is mirrored by Todd Balf's article in the New York Times Magazine, The Story Behind the SAT Overhaul, which uses much of the same information to explain why David Coleman is overhauling the SAT. I hope Coleman's work is effective at changing the dynamic between the SAT, colleges, and students of every stripe. We need to move away from this testocracy.

Merit and Diversity in College Admissions

The recent Supreme Court ruling against race-conscious university admissions has everyone thinking about racism, privilege, equity, merit, ...