Let's Stop Trying to Measure Learning

by Julie Winkle Giulioni | Posted | Learning

Welcome to the Learning Lab! Perhaps in a mythical laboratory with probes attached to learners' brains, researchers could at long last measure knowledge or insights. But in the lab of life, despite our continued efforts, learning remains nearly impossible to quantify in any meaningful way.

Sure... you can administer "level 2" or knowledge gain tests and people can bubble in the forms, but how does that translate to an ambiguous workplace where the actual work is considerably more dynamic and complex? In most cases, it simply doesn't.

So, perhaps it's time to once and for all accept this reality. We can let go of trying to assess learning itself, and instead turn our attention squarely toward performance, behavior and action. The good news is that there are plenty of common-sense alternatives available to learning and development professionals interested in collecting, measuring and evaluating results from investments in learning. Also good news? No brain probes.

Here are three strategies to measure results in learning:

Self-reporting

Participants are in the best possible position to share how a learning experience has informed and changed their performance. A simple questionnaire, administered after enough time has passed for new behaviors to take hold, can offer profound insights, focusing on what participants have done differently and the effect that has had on their work and on those around them. Because employees are not as invested in the outcome as the L&D department might be, this information tends to be unbiased, accurate and truthful. And while these results may not always be numerically measurable, the data can certainly serve to verify the efficacy of a learning intervention.

Critical incident interviews

Whereas self-reporting puts the onus on learners and requires a small investment of their time, critical incident interviews shift responsibility toward learning and development professionals. This approach uses one-on-one interviews with a sample of participants. During the interviews, the L&D pro focuses on specific examples of what was learned and how it was put into practice. Typical critical incident interview prompts include:

  • Tell me about a time when you used what you learned in training.
  • Specifically, what did you do?
  • What happened as a result?
  • Specifically, how did the information/skills/models/tools help you?
  • What was the benefit of the approach you took?
  • What could have happened/gone wrong if you'd not taken that approach?

Consolidated notes from multiple critical incident interviews paint a vivid picture of how learning is being translated into practice by participants - and how it's helping them, their departments and the broader organization. These notes also tend to highlight themes and trends that can inform program improvements and future training efforts.

Business outcomes

Perhaps the most direct approach to evaluating the impact of learning is by way of business outcomes. While it would be handy (and compelling) to point to the bottom line and let training take credit for what is found there, too many other factors and influences concurrently play into that. But, taking a step or two back from the bottom line is frequently possible - and instructive.

1. Start by identifying a business metric that's already in place and that aligns with your learning outcomes. (Note: Focus on metrics that currently exist versus creating new ones. It not only saves you effort, but it also lets you focus on what's already been established as critical measures to track.)

If the learning focuses on one of the areas below, consider an existing business metric such as the one paired with it:

  • Innovation → new product ideas
  • New hire orientation/integration → turnover within one year
  • Sales prospecting → volume at the top of the sales funnel
  • Career development → talent pipeline

2. Estimate the lag time required for the learning to take hold and to begin to contribute to the identified metric. Depending upon the nature of the content or skill, this could be weeks, months or in some cases, years.

3. Track your identified numbers over time. Whenever possible, isolate the cohort that participated in the learning experience so you can compare its results to those of employees who did not. And stay abreast of other factors affecting the organization or industry that might introduce non-training influences on your metrics.
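As a minimal illustration of step 3, the sketch below compares a trained cohort's monthly metric against a control group over the same period. All names and figures are hypothetical; in practice the numbers would come from whatever existing business metric you identified in step 1.

```python
# Hypothetical monthly values of an existing business metric (say, new
# product ideas submitted) for employees who completed the training and
# for a comparable control group. All figures are invented for illustration.
trained = {"2024-01": 14, "2024-02": 18, "2024-03": 21}
control = {"2024-01": 13, "2024-02": 14, "2024-03": 15}

def cohort_lift(trained_by_month, control_by_month):
    """Average per-month difference between the trained cohort and the
    control group, across the months both groups were tracked."""
    months = sorted(set(trained_by_month) & set(control_by_month))
    diffs = [trained_by_month[m] - control_by_month[m] for m in months]
    return sum(diffs) / len(diffs)

# A positive lift suggests the trained cohort is outperforming the control
# group on this metric; it does not by itself prove the training caused it.
print(round(cohort_lift(trained, control), 2))
```

Even a simple comparison like this is more persuasive than a knowledge-gain test score, though (as the text notes) other organizational factors can still influence the gap between the two groups.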

Those in the learning and development function owe it not just to their organizations but to themselves to identify and report on the significant contributions they make to the business. And they can do that in the most meaningful way by taking their product - learning - to its logical and trackable conclusion - real results founded in performance, behavior and action.

Talent Management Strategy Template

Get step-by-step guidance for creating an effective talent management strategy!


Free Download