If traditional learning evaluation is dead, long live…what?

Corporate Learning

Posted by Josh Haims on July 16, 2014

When I talk with business leaders about learning evaluation and how to measure the impact of investments made in development, everyone seems to agree: How learning effectiveness is evaluated and communicated today is not working and should be completely rethought in light of the transformation going on in corporate learning environments. Let’s talk about what’s happening — I’d like to get your thoughts as well.

The corporate learning landscape is dramatically different from the way it was even 5 years ago. Classroom-based, instructor-led training has given way to new forms of collaborative, on-demand learning driven by:

    • The arrival of mobile learning. After years as a developing modality, I think it’s safe to say mobile learning is here and here to stay. As of January 2014, 90% of American adults have a cell phone, 58% have a smartphone, 32% have an e-reader, and 42% own a tablet computer.1 The workplace BYOD (bring your own device) movement is also growing, with nearly 60% of employees worldwide participating.2 With those numbers, it’s not surprising that nearly one-third (31%) of organizations reported delivering some kind of mobile learning in 2013 — more than four times the share (7%) reported in 2007.3
    • Fast growth in social learning. The exponential rise of social media is well documented. As of late 2013, 73% of online adults were using social networking sites.4 Social learning is a natural extension of that, allowing people to share what they know and to tap into and learn from the knowledge of others, as we’ve discussed both here on HR Times and in greater detail here. While social learning is not yet as prevalent as mobile learning, it’s growing fast. More than a third (34%) of companies we surveyed are investing in social learning tools, and that number climbs to more than half (52%) in companies with more than 10,000 employees.5
    • The rise of intelligent learning. On its way up is the idea of using data and analytics to make better decisions about what, when, how, and to whom learning is delivered. The goal is to make it faster and easier for learners to find what matters to them, and to target learning to their needs. This is important because so many workers routinely spend a lot of time on activities other than their actual jobs. The typical knowledge worker spends only 39% of the work week on job-specific tasks, with the remainder filled with things like reading and answering email (28%), searching for and gathering information (19%), and communicating and collaborating internally (14%).6 Learners often complain that current Learning Management Systems (LMSs) are hard to use, particularly for users now accustomed to the speed and ease of navigating the Web and social media.

LMS developers are getting the message, and we’ve been seeing a lot of progress in improving both the learner experience and learning administration. We’re also getting better and more sophisticated about using data and analytics to generate suggestions and recommendations for users (just as many commercial websites do) based on a host of factors ranging from their profile (knowledge, skills, certifications) to what communities they belong to and their behaviors online (searches, interests, ratings).
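
To make that idea more concrete, here is a minimal sketch in Python of how a learning platform might combine those kinds of signals (profile, community membership, and recent behavior) into a simple recommendation score. The data structures, field names, and weights are illustrative assumptions on my part, not a description of any particular LMS or vendor API.

```python
# Illustrative sketch only: ranks learning items for a learner by combining
# profile match, community overlap, and recent-behavior signals. All names,
# weights, and sample data are hypothetical, not any vendor's actual API.
from dataclasses import dataclass


@dataclass
class Learner:
    skills: set           # skills/certifications already on the profile
    communities: set      # communities the learner belongs to
    recent_topics: set    # topics recently searched for or rated highly


@dataclass
class LearningItem:
    title: str
    topics: set           # what the item covers
    target_skills: set    # skills the item is meant to build
    communities: set      # communities in which the item is popular


def score(item: LearningItem, learner: Learner) -> float:
    """Weighted sum of three simple overlap signals (weights are assumptions)."""
    skill_gap = len(item.target_skills - learner.skills)     # teaches something new
    community = len(item.communities & learner.communities)  # popular with peers
    behavior = len(item.topics & learner.recent_topics)      # matches recent interest
    return 1.0 * skill_gap + 0.5 * community + 2.0 * behavior


def recommend(catalog, learner, top_n=3):
    """Return the top-N catalog items for this learner, highest score first."""
    return sorted(catalog, key=lambda item: score(item, learner), reverse=True)[:top_n]


if __name__ == "__main__":
    learner = Learner(skills={"excel"}, communities={"finance"},
                      recent_topics={"data visualization"})
    catalog = [
        LearningItem("Intro to Excel", {"excel"}, {"excel"}, {"finance"}),
        LearningItem("Data Visualization Basics", {"data visualization"},
                     {"tableau"}, {"finance", "analytics"}),
        LearningItem("Negotiation Skills", {"negotiation"}, {"negotiation"}, {"sales"}),
    ]
    for item in recommend(catalog, learner):
        print(f"{item.title}: {score(item, learner):.1f}")
```

A real system would learn the weights from usage data rather than hard-coding them, but even this toy version shows why richer learner data makes it easier to target content to individual needs.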

With these changes to the learning environment as a backdrop, we arrive at our dilemma: Tried and true learning evaluation methodology, including the longtime gold standard, the Kirkpatrick method, isn’t designed for how learning is happening today. These more linear, classroom-focused models are outdated in today’s much more fluid environment, and we’re rethinking how we measure learning effectiveness as a result.

A few new models have emerged to try to fill the gap, including Bersin by Deloitte’s High-Impact Learning Measurement and the Center for Talent Reporting’s Talent Development Reporting Principles. But as yet, there’s no definitive, widely used replacement for the Kirkpatrick model.

So where does that leave L&D professionals? How do you measure the effectiveness of your L&D investments? How do you measure the development ecosystem that is breaking beyond the boundaries of the corporate intranet — and the boundaries of what can be tracked and reported? I’d like to hear your thoughts and learn more about how your organization is thinking about and acting on this challenge.

Some conversation starters:

  • What is your organization’s position on the Kirkpatrick method?
  • What alternatives are you seeing or perhaps trying? Do you think they will stand up over time?
  • Do we really need to worry so much about measuring learning impact and effectiveness and ROI? Is it time to recognize that learning is a key lever an organization can pull and stop the debate about whether it adds value? (We know it does.)

We’re not suggesting formal learning is dead or that we do away with measuring the impact of some learning courses. But given the current learning environment, there has to be something else, too. Let’s talk about it. I look forward to reading your comments and insights and keeping the discussion going.


Josh Haims is a principal in the Human Capital practice of Deloitte Consulting LLP, with more than 14 years of human capital consulting experience. He leads Deloitte’s Learning Solutions practice and is the co-lead of the global Learning Services team.

1 http://www.pewinternet.org/fact-sheets/mobile-technology-fact-sheet/
2 http://blog.magicsoftware.com/2013/01/the-state-of-byod-2013-devices.html
3 Bersin by Deloitte, Mobile Learning is Finally Going Mainstream, 3/2011; ASTD, Going Mobile, 6/2013
4 http://www.pewinternet.org/fact-sheets/social-networking-fact-sheet/
5 Bersin by Deloitte, Corporate Learning Factbook 2013, 1/2013
6 McKinsey Global Institute/International Data Corporation, The Social Economy, 7/2012

As used in this document, “Deloitte” means Deloitte Consulting LLP, a subsidiary of Deloitte LLP. Please see www.deloitte.com/us/about for a detailed description of the legal structure of Deloitte LLP and its subsidiaries. Certain services may not be available to attest clients under the rules and regulations of public accounting.

1 Comment

  1. Dan Klein

     July 22, 2014

    This is a huge issue. I’m both teaching and working in this area, and I would love to discuss my ideas (which are a little unorthodox) with other colleagues. Here are some comments…
    1. The fact that we deal with informal training shouldn’t affect the way we conduct evaluation. Personally, I don’t think that there is a difference between formal and informal training. In order to do both, one should make the same basic decisions, which follow three basic questions: a) Why? (why invest resources in the training event), b) What? (what is the training content), c) How? (how to teach). This is true also for the different instructional models (ADDIE, Dick & Carey, Agile, SAM, etc.), which all follow the same basic decisions.
    2. Kirkpatrick made a lot of sense 50 years ago, yet it was never the right evaluation model for training. It is best used in the educational realm, not in business. It sees effectiveness mainly as a change in the learner (levels 1, 2, 3), while in business there is only one relevant level: results (level 4). Its assumptions about training are also wrong, especially the assumption that training = course and that on-the-job learning (after the course) is not part of the training process. I don’t think there is a need for a special evaluation model for training. We can evaluate exactly as other units in the organization measure themselves: effectiveness, efficiency, and quality. Thinking about evaluation in this framework made a lot of sense to me and gave me insights into how we can show ROI; the system I developed really works! It shows management how much we contribute to the organization. The system differs from the usual evaluation in three major ways: a) It doesn’t focus on performance (which I acknowledge is important, yet, as everyone knows, the training’s contribution to it is hard to pinpoint); b) It concentrates on cutting damages instead of increasing benefits (there is a lot of money in cutting damages, and it is relatively easy to prove contribution); c) It defines the basic goal of the training department differently (managing competencies).

    Dan Klein
    (I know that to some extent people react to what you write based on what you do, so: I have almost 30 years of experience in training and learning. I worked in the air force’s training R&D unit (at the rank of major) and then for 11 years in the training department of a major global high-tech company. For the last 10 years I have been teaching in academia.)
