Measuring Impact

1st March 2024 · 6 min read

Why are some Learning and Development practitioners so shy of working with data, and why do they gravitate towards unhelpful metrics? Let’s measure what matters instead.

The clue is in the job title: People Manager, Chief People Officer, People Development Advisor. Learning and Development professionals are all about people. But data? Not so much.

The whole raison d’être of a learning practitioner is to focus on the human beings who work in their organisations. So, it’s not surprising that there’s often a reluctance to get to grips with learning data. Numbers and data analysis are the Achilles’ heel of the Learning Department.

But we know that data is important. We need to demonstrate the value of our learning programmes and we need to provide evidence of performance improvement. So we do what’s easiest for us: we record the number of people attending a workshop; we measure completion rates on e-learning modules; we add up Continuous Professional Development (‘CPD’) minutes; we monitor usage rates in our digital learning libraries and, if we’re lucky, we capture engagement rates in our internal communications and promoted links to learning resources.

But I’d argue that these really aren’t the best things to measure. Of course, we want people to be engaged, we want them to be using our resources and participating in our programmes. We want to know which people, and how many, successfully complete our programmes. But none of these pieces of data tell you anything about observable differences, or the actual impact of your learning offering. Attending a workshop or watching a video is one thing. Going off and doing it for real, in your actual working life, is much more important. (Incidentally, that’s why our resources library is called ‘WATCH & GO®’ with the emphasis very much on the ‘go’.) If you don’t go off and do it, you don’t learn from the experience, nor can you reflect on and review that experience – all part of the well-understood learning cycle.

Unreliable data

What’s worse than unhelpful statistics is unreliable data. And that’s another concern I have. In our desire to prove usage, we cling to things like ‘completion rates’ which can, at times, be rather dubious. How well does your Learning Management System guard against people ‘fiddling’ the system? I saw a third-party platform the other day where people self-certified that they’d completed an activity, the argument being that if they cheat, they’re only cheating themselves. That’s true, of course. But it means the data can be wrong. I saw another example on a completely different system recently where the person uploading the programmes wasn’t doing it correctly, so the LMS was unable to record the usage stats properly. Completion rates are a minefield, and yet people continue to focus on them as if they were the Holy Grail of learning. Even if your own system is reasonably robust in this area, how confident are you that all the data reports you receive are a genuine reflection of what’s actually happening? As we know, rubbish in equals rubbish out.

Focus on what matters

What matters above all else? I’d argue it’s the observable difference we make to the performance of the people in our organisation. This, after all, is why L&D people are employed. And it’s why the data reports we provide must show evidence of how we are achieving it.

‘Soft’ people skills in leadership, management, communication and team working are notoriously difficult to measure. But what if we focus on observable outcomes instead of completion and attendance rates?

A few years ago (when people still used telephones!), I conducted a telephone survey of some of our customers’ learners. It was a hugely fascinating and rewarding experience. People told me exactly what they thought about the learning provision and support in their organisation – they went far beyond my original questions about our digital resources and insisted on telling me about their experience of ‘learning’ more generally. On a particularly memorable occasion, one manager told me how our videos had ‘completely changed the way’ she managed her people, describing the benefits to her, her people and her organisation. That type of evidence is worth many thousands of ‘bums on seats’ measures. When you show a genuine interest in their views, people want to share their opinions, particularly when they know that their views feed into a review to improve the offering. And the value of the data is immense. It’s a case of quality over quantity.

Even in the domain of people skills, it’s possible to identify tangible evidence of behavioural change. If people take away and implement just one key idea from a learning resource, and learn from the experience of doing it, the value of that resource is repaid many, many times over. And that’s where we need to focus our measurement. What did they do differently? What was the impact? And what are the lasting benefits? Herein lies the value and the payback of your learning offering and, when related to corporate-wide objectives, it’s how you demonstrate your worth to your organisation.

Learning matters not merely because it’s a ‘good thing’, but because it enables growth, understanding, effective team working and, above all, an unequivocal contribution to organisational performance.

But you have to measure it.

The sort of evidence you are looking for falls into a few key areas.

Look for topics that lend themselves to distinct measurement, like staff turnover or absence levels. Link these to results from your employee surveys, and actively go looking for examples of impact – that means getting out there, talking to people and finding out what they have done and what the results have been.

I’d like to conclude with a reference to last month’s blog on being Purpose Driven. You need to measure outcomes purposefully. And you need to let your colleagues in other parts of your organisation know what you are doing.

If people - and their managers - have a clear sense of purpose about what they are learning and why, they are already tuned in to the importance of outcomes. The benefit of this is that they perceive learning as a means through which they achieve their purpose, rather than something unrelated they need to ‘make time for’ in a busy day.

Providing evidence of learning impact means measuring the right things and focusing on observable differences that contribute to organisational goals. Ask yourself: what really matters when you’re measuring the impact of learning in your organisation?

Catherine de Salvo
1 March 2024

You can contact Catherine at catherine@scottbradbury.co.uk or via LinkedIn.