
How to Measure Learning Impact: The Kirkpatrick Model and Beyond

Gabriella Eriksson

Training effectiveness measurement starts with asking the right questions. The Kirkpatrick Model gives L&D teams a proven framework for evaluating learning at four levels — from learner reaction to measurable business results. This guide covers the Kirkpatrick Model in depth, compares it with other leading evaluation frameworks, and gives you practical steps to build a measurement approach that works in your organization.

What is learning impact — and why measuring it matters

Learning impact refers to the measurable effect a training initiative has on individuals and the organization. This includes changes in knowledge, behavior, on-the-job performance, and business outcomes.

Most organizations are comfortable measuring completion rates and satisfaction scores. Fewer manage to connect learning to behavioral change or business results — and that gap is where real L&D credibility is won or lost.

For learners, knowing their development is tracked and valued drives engagement. For L&D teams, measurement is what turns learning from a cost into an investment. For organizational leadership, it answers the question they always eventually ask: is this actually working?

Common misconceptions worth addressing

  • "It's obviously valuable — I don't need data to prove it." Without evidence, learning programs are vulnerable to budget cuts the moment leadership asks for results.
  • "Measuring takes too much time and resources." Measurement doesn't have to be exhaustive. Even lightweight follow-ups reveal patterns that improve future programs.
  • "Bad results reflect poorly on us." On the contrary — identifying what doesn't work is how programs improve.

The Kirkpatrick Model: the most widely used framework

Developed in the 1950s by Donald Kirkpatrick and refined over decades, the Kirkpatrick Model is the most recognized framework for evaluating training effectiveness. It organizes measurement into four levels, each building on the previous one.

Level 1: Reaction

How did participants respond to the training? Did they find it relevant, engaging, and worthwhile?

This is the most commonly collected level — typically via post-course surveys. While easy to gather, reaction data alone tells you little about whether learning actually occurred. Its real value is in identifying friction: content that felt irrelevant, sessions that felt too long, or formats that didn't work for the audience.

Level 2: Learning

What knowledge, skills, or attitudes changed as a result of the training? This level is measured through assessments, pre/post-tests, practical exercises, or skill demonstrations.

Level 2 answers a critical question: did participants actually learn anything? Without this step, a positive reaction score might simply reflect a well-liked presenter rather than meaningful knowledge transfer.

Level 3: Behavior

Are participants applying what they learned back on the job? This is where learning transfer becomes measurable — through manager observations, 360-degree feedback, performance reviews, or follow-up assessments several weeks after training.

Level 3 is often the hardest to measure, but also the most meaningful. A training program that scores well on Levels 1 and 2 but shows no behavioral change hasn't yet delivered real impact. Common barriers include lack of opportunity to apply skills, insufficient manager support, or training that wasn't designed with transfer in mind.

Level 4: Results

What organizational outcomes can be linked to the training? Examples include reduced error rates, higher sales conversion, improved customer satisfaction scores, faster onboarding, or lower employee turnover.

Level 4 is the hardest to isolate — many variables influence business results simultaneously. But even directional evidence (a team that completed the training performs better than one that didn't) is valuable. The key is establishing baseline metrics before training begins so you have something to compare against afterward.
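The baseline logic above can be sketched in a few lines of Python. This is a minimal illustration, not a Learnifier feature: the team names and error rates are hypothetical numbers chosen to show how directional evidence emerges from a pre/post comparison.

```python
def metric_change(baseline: float, post_training: float) -> float:
    """Percentage change in a metric relative to its pre-training baseline."""
    return (post_training - baseline) / baseline * 100

# Hypothetical error rates per 1,000 transactions (illustrative numbers only)
trained_team = metric_change(baseline=42.0, post_training=31.0)    # ~ -26%
untrained_team = metric_change(baseline=40.0, post_training=38.0)  # -5%
```

Here the trained team's error rate fell far more than the comparison team's, which is the kind of directional evidence Level 4 relies on when strict causal isolation isn't feasible.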

The Phillips ROI Model: adding a fifth level

The Phillips ROI Model extends the Kirkpatrick framework by adding a fifth level that converts Level 4 business results into a financial return on investment:

ROI (%) = [(Program Benefits – Program Costs) / Program Costs] × 100

This level is particularly useful when making the business case for major learning investments. It forces L&D teams to define both the monetary benefits of training (productivity gains, reduced recruiting costs, avoided compliance fines) and its full costs (development, delivery, learner time).
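As a minimal sketch, the Phillips formula above translates directly into code. The benefit and cost figures below are illustrative placeholders, not real program data.

```python
def training_roi(program_benefits: float, program_costs: float) -> float:
    """Phillips Level 5: ROI (%) = [(benefits - costs) / costs] * 100."""
    return (program_benefits - program_costs) / program_costs * 100

# Illustrative figures: benefits might include productivity gains and avoided
# recruiting costs; costs include development, delivery, and learner time.
roi = training_roi(program_benefits=180_000, program_costs=120_000)
print(f"{roi:.0f}%")  # → 50%
```

A 50% ROI means the program returned 1.5x its cost; a result below 0% means costs exceeded the benefits you could monetize.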

For more on how training ROI is calculated and communicated, see our glossary entry on ROI of training.

Other frameworks worth knowing

The Learning Transfer Evaluation Model (LTEM)

Developed by researcher Will Thalheimer, LTEM addresses a core weakness in the Kirkpatrick Model: that most organizations stop at Level 1 or 2 and never measure actual transfer. LTEM breaks evaluation into eight tiers, with an explicit focus on whether learning leads to changed performance in real work contexts.

Its primary contribution is distinguishing between knowing (declarative knowledge), being able to do (demonstrated competence), and actually doing on the job — distinctions that Kirkpatrick's four levels can blur.

The 70-20-10 model as evaluation context

The 70-20-10 model is not an evaluation framework, but it provides useful context for measurement. If 70% of development happens through experience and 20% through social learning, measuring only the 10% that is formal training gives an incomplete picture of how skills develop. Organizations using this lens tend to invest in learning analytics that capture a broader range of learning signals — not just course completions.

Practical metrics to track

Regardless of which framework you use, these learning metrics are worth building into your measurement approach:

  • Learner satisfaction: post-training surveys, Net Promoter Score for courses
  • Knowledge retention: quizzes, spaced repetition assessments, follow-up tests 4–6 weeks after training
  • Behavioral application: manager observations, self-assessment, 360-degree feedback
  • Business outcomes: productivity indicators, error rates, sales performance, compliance audit results, retention data
  • Completion and engagement: rates that reveal whether content is being accessed and finished
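Several of these metrics reduce to simple arithmetic. As one example, course NPS is the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6); the survey responses below are hypothetical.

```python
def course_nps(scores: list[int]) -> int:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round((promoters - detractors) / len(scores) * 100)

sample = [10, 9, 8, 7, 9, 6, 10, 4, 9, 8]  # hypothetical survey responses
print(course_nps(sample))  # → 30
```

Scores of 7–8 count as passives: they dilute the percentage base without moving the score in either direction.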

Common challenges — and how to address them

  • No baseline data: Define what you're measuring before training begins. Without pre-training benchmarks, post-training comparisons are meaningless.
  • Measuring at the wrong time: Reaction data collected immediately after training often reflects mood rather than learning. For behavioral data, follow up 4–6 weeks later.
  • Stakeholders who don't ask for data: Proactively share impact evidence. Don't wait to be asked — build reporting into your L&D workflow so results are visible by default.
  • Attribution is hard: You rarely prove causation between training and results. But you can demonstrate correlation and direction of effect, which is enough to inform decisions.

How Learnifier supports measurement

Learnifier's built-in reporting and analytics give you visibility across all the metrics that matter: completion rates, quiz results, engagement patterns, and certificate status. You can filter by learner, team, course, or period — without needing to export data into separate spreadsheets.

For L&D teams working toward Level 3 and 4 measurement, Learnifier's data can be connected to HR systems via API, giving you a fuller picture of how training connects to performance over time.

Want to see how measurement works in practice? Book a demo or explore how Learnifier helps organizations like Alligo build measurable learning cultures and Hector Rail demonstrate concrete training ROI.

Where to go next

Measurement doesn't stand alone; it's part of a broader L&D strategy, and works best when read alongside your planning, analytics, and program-design resources.
