Webinar - the three elephants


Paul Matthews webinar on the 3 elephants in the L&D room.

This webinar was run internally for the L&D team in a global company in 2019.

Transcript

Elephants. Why elephants? My third book was on learning transfer, and I wrote that book because there was obviously a huge gap, I felt anyway. I was talking to so many organizations as an L&D consultant and L&D strategist, and somehow it just wasn’t happening. So many of them weren’t doing any sort of learning transfer out of their training courses. And by learning transfer, I mean what you do to make sure that whatever you do in a training course actually ends up being implemented and activated in the workplace, and makes a difference. And I started calling it the elephant in the room, the thing that people are ignoring. And then I thought about the previous two books I’d written, and realized that they were about elephants as well. So, all those years I’d been writing about elephants, and never knew it. Hence, the picture of the three elephants.

So anyway, let’s get into this. We’ll click here. This is an interesting map because there’s a pin there in Illinois. Some of you are from the US, no doubt, and so you may be aware of this town called Bloomington. In the town of Bloomington, there’s a suburb called Normal, which I find really quite amusing. As far as I’m aware, this is the only place in the world that’s actually — normal. As you leave there, as you drive away from the suburb of Normal, you see this sign. For me, this is a nice metaphor of the fact that learning and development over so long has been focusing on learning events.

They’ve been focusing on doing the training, on adding the e-learning course. It’s been a very content-delivery, event-delivery, event-management-focused, process-focused role. And what I see needs to happen is that learning and development needs to get much more focused on programs, on extending much beyond the event. They need to start thinking about a performance program, as opposed to a learning event, and get focused on performance as an outcome, as opposed to learning as an outcome.

And some companies are doing this much better than others. Some are hardly doing it at all. So the level of what I would call learning maturity varies immensely across all the different organizations I speak to, right from the public sector and government through to the private sector and different areas like pharmaceuticals or retail. I also do work with the military. So it’s fascinating how all of these approach this a bit differently, but there’s still that fundamental thing: you need to start focusing on performance programs, as opposed to learning events. And then that needs to be baked into your strategy of how you move forward. So we’re going to touch on a few of those bits and pieces, but that’s the overarching theme of what we’re looking at here. And of course, as part of a program, you are thinking about learning transfer, and I’ll have a little challenge for you later about that.

So, this is Mike, he’s a friend of mine. We’ll talk about Mike later. I’ve got a story about Mike, so just keep your ears open for that story, but first what we’re going to do is talk about where it all starts. And this is typically what happens out in the organization and operations in the business, where a manager says, “Performance isn’t happening. I’m not getting the KPIs I want. I’m not getting the performance I need from my team. It’s either not good enough now, or I anticipate it not being good enough in the future because of the introduction of a new system.” So, even if performance is fine right now, there might be this anticipation of poor performance given pending changes.

So, that’s where it starts, and then what do they ask for? What do they want? Whenever I do this at a conference with a show of hands, thousands of people over the years, the answer is pretty much inevitable: what they’re actually asking for is the big myth. It’s what I call the big myth, and typically that’s training. What I mean by that is they’re saying, “These people aren’t working. They’re not performing well. In some sense, I think they’re broken. Please, L&D, learning and development, can you fix them?”

And what must be going on in their head to even ask that question is this little equation here, where you expose them to some content, that’s what training is, and then that means learning happens, and so on and so forth. But if you look at all of those equivalences, all of those equal signs, not a single one of them is actually totally true all the time, and that’s really interesting. So effectively, their request for training, for the big myth, which is on this screen, is founded on a whole bunch of false assumptions, and yet they never examine those assumptions. So fundamentally, we have to help them examine them, and also help them learn how to do that, and ideally, of course, get them doing it for themselves before they ever come to L&D. We’ll come on to that.

Now, the only way this can actually work, really, is if you sprinkle it with pixie dust or magic fairy dust. And I’m told that this is what it looks like, that this is some pixie dust. Now I don’t know, because I’ve never seen any, so if any of you still have a pixie dust dealer that you know of, can you send me a private message, because I’m looking for a pixie dust dealer somewhere. Anyway, so that’s that. But that pixie dust doesn’t exist, so effectively what happens is this: we put a bunch of training in place, the magic doesn’t happen because we don’t have the pixie dust, and then by and large we won’t get a lot of the results that we were hoping or wanting to get from that training intervention, from that e-learning or from that formal learning program.

So, I’m basically saying stop wasting your money. There’s so much training and learning that happens out in the world, hundreds of billions of dollars of it each year, that really is wasted and should never have been started. So, that’s what we’re going to talk about here. By the way, I’m not saying training is bad, I’m just saying it’s an overused tool in many cases. So let’s start stepping back a little bit, and looking at this whole system. Let’s talk about the performance system. This is more like a systems approach. So we’ve got some outputs that we want, that the managers want, that operations wants.

We’ve got some inputs into a system where there are people and processes and gears and levers and all sorts of things happening. What we tend to think of is this as almost a black box, this thing happening. But what we have to do is dig into that, look at it and say, “If we’re not getting the outputs that we want, what’s going on inside the box, or perhaps what’s wrong with the inputs?” So what we have to do is start thinking about this from a systems point of view. How do we dig into that box, and how do we diagnose what might be going wrong in there? So that’s what we’re going to look at now.

If we want performance coming out from the people in the box, there’s got to be capability at the point of work. And there are two primary inputs into that capability at the point of work. This phrase, at the point of work, is really quite important. It’s a phrase that I first heard used by Gary Wise when he was head of learning at Xerox in the States, and Gary and I swap emails on a fairly regular basis. But it’s a very descriptive term. It means work in context: the context of work at that point in time, when someone is being tasked to do a particular job. We want capability at that point in time, at that point of work. But in order to get that, we’ve got to start thinking about, well, what does that actually mean? And I find that there’s a lot of confusion over that in the workplace, and particularly within L&D. This is where Mike’s going to help us out with a little story. Are you sitting comfortably? Here’s a story.

Once upon a time, you were taking a seven-year-old boy to football practice. Maybe it was your son, maybe your neighbour’s son. Anyway, you ended up with taxi duties, and you’re taking this seven-year-old, little Johnny, off to football practice. And you’re driving out of your driveway, and you hear a noise under the hood of your car. And you’re worried about that because you’ve not heard it before, so you take it around the corner to the garage, to the service workshop, and here’s Mike. He’s listening to see what that noise actually is. He knows you, you take the car there all the time, it’s your regular workshop. And he turns around, and he says, “Listen, it’s not a big deal. There’s a small plastic part, and it’s got a crack through it. That’s what you can hear rattling. I’ve just got to replace that part, and you’ll be on your way in a few minutes. No big deal, and it’s okay, Johnny, you’ll get to football practice in time.”

So he goes away, and then he comes back a couple of minutes later and says, “I’m really sorry, but we don’t have that spare part in stock, so I can’t fix your car right now. But what I would recommend highly is that you do not drive it across town to the football pitches. You need to take your car straight home, back around the corner, and park it up. And I’ll come and fix it first thing in the morning, and we’ll do it for free because it’s our fault. We should have had this little part. I’ll come and fix it on your driveway in the morning for free, but just take it straight home because the danger of doing more damage is too high. Don’t risk driving it anywhere.” So, you are now driving back around the block. Little Johnny’s pretty sad, perhaps even crying. My question to you is, was Mike the mechanic capable of fixing your car? It’s an interesting question. Just take a moment. Was he capable? Yes or no? What would be your answer?

When I do this at conferences, it’s really interesting how the answers split. Some people say yes, some people say no. If you said yes, I’ve got a second question for you: what would little Johnny say if we asked him? “Little Johnny, stop crying. Was Mike the mechanic capable of fixing our car?” What would little Johnny say? The answer is usually pretty obviously “No.” Because he’s not going to football practice. He doesn’t really care why Mike couldn’t fix the car. He just knows he couldn’t, and you’re still driving a broken car, and you’re going home. Now, I’m guessing what you’ll do is take a taxi, so little Johnny will get to football practice, and everything ends happily ever after. But it’s an interesting question.

And those of you who answered yes, Mike was capable, I think you’re actually answering a slightly different question to the one I asked. You’re probably answering the question, ‘Was he competent?’ Because he’s clearly a competent mechanic. You can tell that by that picture. That looks like a competent mechanic, doesn’t it? I mean, that’s what competent mechanics look like. So, he’s competent, but in the moment when asked to do a job, at the point of work, he was rendered incapable by the lack of a spare part. So he was not able to do the job. He didn’t have the ability at that point in time to do the job, so he was competent but not capable. Really interesting.

So let’s, at least for our purposes here, define this word capability as: can the worker do the job in front of them? Yes or no? The reason I make this important is that some people switch those words around and use them almost interchangeably. I remember being in the boardroom of a very large retail chain once, and HR and L&D, the training people, the people-development side of the boardroom for want of a better term, were saying, “Yes, we trained them. They’re capable. They can do the job.” And operations was saying, “Well, no they’re not, because they come back after training and they can’t do the job.” There was this whole argument going on around capability, and yet they were talking about different things, because in the classroom you can get someone to be competent, but you can’t necessarily guarantee capability until you’ve put them into the point of work. We’ll describe why that is in a moment.

Here’s another little diagram or a picture that will help you understand this difference. This is Dennis, he’s a friend of Mike’s. You can see Dennis is a very competent donkey. Competent donkeys can pull their cart really, really well. This is Dennis, he’s got several degrees in pulling carts, but right now he can’t do it. He’s been rendered incapable of pulling that cart by a manager who’s done a rather stupid job of loading up the cart in the wrong way. So, we have a competent donkey rendered incapable by a stupid manager. Interesting isn’t it? And of course, we could send Dennis off to donkey school and do yet another degree on pulling a cart, but that isn’t going to help because the problem lies elsewhere. So, get clarity around this difference between capability and competence, and if somebody starts to use those words or those concepts in a conversation, get clarity from them what they mean by that. It’s really important.

Anyway, let’s go back to our little diagram here. Now we’ve got a definition around that word capability. One of those arrows that leads into it, one of the inputs, is the performer themselves. In other words, is the performer competent? Are they bringing at least a threshold level of competence to the point of work? Are they ready? That means knowledge and facts, fairly obviously skills, and insight and understanding of how to use that knowledge and those skills. Then there’s mental state, which is things like motivation, which by the way is not engagement. They’re two different things, although they’re highly correlated. Motivation is quite a transient thing that comes and goes on a day-to-day, hour-by-hour, almost minute-by-minute basis during the day, depending on what phone call you just had or what email you just received, whereas engagement is a much slower-changing thing that underpins all of that. But they are highly correlated. So, motivation and engagement are different, and that’s another confusion I often see in organizations. And the physical state is just: are you physically capable of doing the job? You might need a certain level of strength or height or manual dexterity or intellectual horsepower or whatever.

So, that’s one side of that equation. The other side is the stage on which that performer is performing. In other words, the environment that they are in at the point of work. Does that environment support them, or does it hinder them in doing their job? So in the case of Mike the mechanic, clearly his environment wasn’t supporting him. He didn’t have the spare part, so he was rendered incapable because the environment was the problem. He was competent, but the environment wasn’t. So that’s interesting: we do lots of things on the performer side, like competency frameworks and all sorts, but we never really seem to look at the environment that the performer is operating within, and do a competency analysis of that. It’s really interesting.

Now, in my research and the work I’ve done with people around this, in performance diagnostics and consultancy work, I’ve realized, and research I’ve seen backs this up, that something like 70 to 80 per cent of your performance problems will arise from the right-hand side of that diagram. But what’s interesting is that most of the time, when a manager has poor performance out there in the business, they come looking for training, and we end up delivering something from the top two bullet points on the left-hand side, which are a very small part of the possible reasons that performance might be poor in the first place. So what we have to do is get real clarity about why the performance is not what we want it to be before we start delivering training or, indeed, anything. So that’s this performance consultancy process.

I mean, just think back over the last month. You’ve had some tasks you’ve delegated to yourself, or someone’s delegated to you. How many of those tasks did you manage to do effectively, and how many did you fail to do on time, or on budget, or at the quality you wanted? In other words, how many times did you not perform during that last month? And of those times you did not perform adequately, how many of the reasons were to do with you as the performer, and how many were to do with the environment you’re operating within? Most people I ask say, “Well, most of the time I couldn’t do what I was trying to do, it was something outside of me. My laptop broke down, Skype didn’t work, like the little problem we had this morning, spare parts weren’t available, or my manager was loading my cart all wrong and I couldn’t pull it properly.” So a lot of the time, it comes from that right-hand side.

But the problem is they keep asking for training. So actually, the core of my second book, on capability, was this whole process of how you take a manager asking for training through that diagnostics process, to help differentiate what they want from what they need. Because what they want is that training. They just want a silver bullet. They want you to come and fix their people. They want you to have pixie dust, sprinkle it all over their people, and everything is magically okay. Whereas there are probably a lot of other things that need to happen as part of that process. So, really what we’re talking about here is one of the elephants, and this is performance diagnostics.

Now, we have to get some clarity here as L&D people, because I find a lot of L&D people get confused over this. They say, “Oh, I’ve been doing that. I’ve been aligning my training to the business. I’ve been having all these discussions with the business to make sure the training I’m doing is right and fits with their strategy.” The problem there is that what they’ve been doing is what I call learning consultancy, not performance consultancy. Performance consultancy starts from the premise that, “I have no idea what might be going wrong. I just know that we’re not getting the performance we want, so let’s dig into that and find out why.” If out of that process you end up with some output that says, “There are some skills and some knowledge missing,” then we can go into the need for some training or learning-type interventions, and then we start doing learning consultancy, because learning consultancy starts from the premise that there is a learning need.

And too often, L&D people automatically think they’re going to solve the problem with a learning tool. It’s a bit like that old Abraham Maslow quote: if all you have is a hammer, everything looks like a nail. L&D people tend to have this hammer, and then they don’t do this performance consultancy. And what that tends to mean is that people come to L&D asking for training, asking for interventions, and maybe even saying, “What can you do for us that will help us with this performance problem?”

There’s all this pixie dust myth floating about in people’s minds. And what performance consultancy will do is add a filter around L&D, which means those requests for training, those requests for learning interventions, will have to go through quite a rigorous process of proving that there is actually a learning need there in the first place. And so what happens is a lot of requests for training will bounce off that filter; they won’t get through it. And therefore, you end up saving a lot of money and resources in learning and development, which you can then start using to be a proactive L&D, rather than reactive, which is what most L&D departments are.

The other part of this is on that bottom line there. If you went into a doctor’s surgery, and they just looked up and said, “Right, I’m going to prescribe you some penicillin,” without asking you questions or doing any diagnostics, that would be malpractice. And yet we do that in learning and development all the time, where we just say, “Yeah, we’ll do that training,” because somebody asked for it. So, the first elephant is performance diagnostics, and we’ll come on to how this is important to learning transfer in a little bit.

Now, we’re on to the second elephant. Some of you will obviously be aware of the 70:20:10 model. It’s a lovely model. I think it’s brilliant in many, many ways. Its primary use, to my way of thinking, is to help a senior management team understand that so much learning happens outside of the formal learning and training space. And they resonate with that because they’ll say, “Oh, yes, now I get it. A lot of what I know to do my job, I learned outside of formal training.” And so it is, in many ways, the best way to help a senior management team, who are not learning and development trained, understand that learning and development must expand their remit beyond the classroom.

And I think that’s really important, that the senior team gets to know how to utilize L&D effectively, because most senior teams don’t. They have this L&D department, but they have no idea how to use that department as a tool for meeting the company’s vision and executing the company’s strategy. That’s really sad in a way, because if they don’t know how to use the tool, L&D can’t perform well. There are all sorts of things around that. We might come on to some of it later if we have time.

So here are some examples of informal learning: you copy what people are doing, you have a talk about it afterwards. But I also don’t like the model very much at all. The reason I don’t like it is not so much that the model has a fault; it’s that people take it too seriously. They start using it like a recipe. In fact, it’s a bit like the Pareto principle, the 80:20 rule that we talk about: 80% of your results come from 20%, or whatever. Nobody really believes the 80:20 figures are exactly true, and yet somehow people invest magic and accuracy into these 70:20:10 figures. They’re not accurate. They’re just rough ideas.

By the way, when these figures were first put out, I hear there were about five or six factors to start with, and they decided that would be too difficult, so they reduced it to three somewhat arbitrarily, and that’s why they ended up with three different areas. And then they redid the research a year later. The first study was 192 people, and I think only three of them were women, and they were all successful executives. They reproduced it later mostly with women, and the figures changed a bit. It was still predominantly experiential, but a lot more social and even less formal, down to 5%. So, those figures are in no way magic or anything. They’re just a tool, a way to help people understand that the majority of learning that people do, and the majority of what they know in order to do their job, they learn on the job, experientially or whatever. So that’s the informal learning bit.

One of the metaphors I use for informal learning is this engine running in the basement. Every organization, from the moment it starts, has this massive informal learning engine running automatically in the basement, and you can’t switch it off. You can’t stop informal learning. We’re a learning species. If we were not a learning species, we’d have been consigned to the evolutionary dustbin a long time ago. So, this engine is just running, and we can’t turn it off, which is a good thing, because if you did manage to turn the engine off, the organization would be dead in the water within a couple of months, because so much of what we do to respond to how the world around us is moving and changing involves learning on the job. People are learning all the time. You cannot stop them learning.

So what you can do, though, is go down and find the engine. You can polish it up. You can make it all shiny like this one. You can give it better fuel. You can do all sorts of things to help it run more effectively, and to help informal learning flow and operate better within the organization. Because people are going to learn informally anyway, so why don’t we make sure they’re learning the right things from the right sorts of resources? There’s a lot of stuff around how you can support informal learning effectively and get it running in the right direction. So, that’s the informal learning thing.

And we know this about doing things. If you ask most people in the street how they learn stuff, they’ll say, “I learn by doing things.” Some of them might say, “I read a book, then I go and do it.” It’s all about doing it, and that’s a really important part of learning. People said this a long time ago, Aristotle among them, but that experiential thing is also something that we do ourselves. So here’s a little guy who’s just having an experience with a garden hose. See, we could have put him on a health and safety course for garden hoses, but until he actually tries it, it’s unlikely he’s really going to believe it. I hope no babies were harmed in the making of this photograph, but this whole concept of learning by doing is a vital part of who we are and what we do.

So, that’s informal learning, which is the second elephant, the little one in the middle. It’s typically more visible than the others in most organizations because of 70:20:10 and the work that Charles Jennings has been doing, but what you’ve got to realize is that you need to do a lot more than ‘10 plus’. ‘10 plus’ is the phrase Charles Jennings uses for formal learning that is then modified a bit in a blended way, so that some of it’s delivered experientially, some of it’s delivered collaboratively, and so on. People are starting to use that as a recipe, but all they’re really doing is expanding the range of their formal learning. There’s so much that you learn informally which you can never learn in a formal intervention. So it’s not about taking the existing curriculum and just rebuilding it to span a greater number of modalities or delivery methods. All you’re doing there is an expanded or modern view of blended learning. Nothing fancy about that. Designing for the whole lot, the whole 100 per cent, is a different story.

Now, what’s really interesting is that most learning transfer happens outside of the classroom, in the workflow. And that’s the third elephant. So, you do have to get a handle on informal learning, how it works and what’s going on, before you can really start effectively handling the third elephant, which is learning transfer. Now we’re back to pixie dust again because, of course, so much of current thinking around learning transfer relies on pixie dust. As I said, unless one of you knows a pixie dust dealer, I’m out of it. So we’re back to that same problem, and this is exemplified in this little cartoon, where you have a training course, then people somehow think that magic happens, and then performance improves. What we have to do is dig into that ‘magic happens’ step and say, “Well, we don’t have pixie dust to sprinkle on that, so what can we do? Is there a way that we can manufacture the magic effectively, and repeatedly, knowing that we don’t have any pixie dust?” And there is, and that’s really interesting.

There are 12 levers in there, and that’s really good news, because it means we can go and pull those levers and make a difference. Now, these 12 levers of transfer effectiveness come from Dr Ina Weinbauer-Heidel. She’s based in Vienna, in Austria. I’ve been working with Ina for a couple of years now, because we were both writing on learning transfer. She’s got a book out about the 12 levers, so I would highly recommend you get a copy. We’ve spoken together at conferences, and we’re speaking together again at a conference in England later this year. It’s really cool having someone else in the space working on this.

But her story is that she was working as a designer, and doing some delivery, at the big Austrian leadership school. Very expensive, very exclusive. People pay lots of money to go there. Then, for a number of reasons, she was looking for a PhD thesis topic and started wondering, well, how can I prove that what I’m doing really works? And she discovered that she couldn’t, which must have been a bit of a blow. So, she went back through all of the learning transfer research and, believe it or not, it goes back over 110 years.

So learning transfer is not a new idea. It’s not something that’s only recently been researched. It’s a long, long-standing research topic. When she had unpicked all the literature, and there are huge amounts of it, I’ve seen the reference list, she came up with over a hundred different determinants of learning transfer and said, “Well, that’s probably one of the reasons not much of this transfer research is being used. The output is just too complex and too overwhelming.” So, her PhD changed into: how can I reduce this massive output down to something that’s usable by practitioners?

And that’s what she’s done, and she’s done a brilliant job of it. She’s come up with these 12 levers. She’s taken all of these different determinants. She’s looked at the ones that were very similar and joined them together. She’s looked at the ones that had a low determinant factor and said, “Okay, that’s not actually contributing that much, so let’s get rid of it.” She’s also taken out some factors that had a high correlation with learning transfer, but weren’t that usable in practice.

One of the examples she gives there is IQ. You can’t really line people up at the entrance to a classroom and say, “What’s your IQ? Oh, you’re too stupid to train. We won’t train you today.” So she got rid of that as a factor. And she’s ended up with these 12 levers, and here they are. What I’ll do is very briefly describe them, so that you get a sense of what’s there, but you can get her book to dig into them, and I also cover a bunch of this in my book as well. Although my book tends to focus more on the strategic aspects of learning transfer, whereas hers is much more about the work that a facilitator and practitioner can do within their area of control.

So for the trainee or delegate themselves, there are three areas. It’s all about the mindset of the person going into the training. Number one is transfer motivation: do I want to do this? Do I want to transfer? Do I want to use what I’m learning? Self-efficacy is: can I do it? Do I have enough resources? Do I have enough support? Do I feel I can do it, even though I haven’t done it before? And transfer volition is about: I will keep doing it, even though I don’t really feel like doing it anymore. That’s like: I will go to the gym today, even though I don’t feel like it, because I know it’s a good idea. I should do it, so I will do it. It’s that whole volition thing, and that’s interesting. In the old days, numbers one and three used to be mushed together; motivation and volition were treated as one thing. The researchers in that field have now realized they’re two very separate things, and you need to separate them out.

Then the second section is about the design of the training itself. One of the things we’ve already been talking about today is content relevance, number five there, which is where performance diagnostics comes in. How can we make sure, before we put someone in a training room, that what we’re going to train them in is absolutely relevant to what they need? In other words, have we done our performance diagnostics properly, before we ever get into learning consultancy and then decide on our instructional design, which is the learning consultancy piece? So, content relevance is really … just think about it. If you were in a program, and the content wasn’t relevant, why would you bother? Why would you even think about it after you left the room? So, the relevance is hugely important, which is why that first elephant is such an important part of learning transfer.

And then there’s active practice: you want, as far as possible, to enable people to practice what they will actually do. And this falls into the whole region of near and far transfer. As far as possible in the classroom, can we have circumstances or scenarios that match as closely as possible what they’re going to do outside? A classic example is handling a bit of machinery like a hand drill. You have the same hand drill in the classroom as they will then have to use outside, for health and safety training, for example. That’s called near transfer, because what they do in the classroom is so close to what they’re going to do outside that they don’t have to translate or think about it, whereas far transfer is something quite different. Again, there’s a lot of that in my book.

Number seven is interesting: transfer planning. This is about making sure that there is a whole program in place, a whole journey in place. It’s a bit like when you turn your Sat Nav on in the car, and you get this blue or coloured line that shows you on the map where you are and where you’re going to get to, with a long line in between. You kind of know your journey, and you feel comforted by the fact that you know your journey.

So you’ve got to think about the whole program that will lead to performance, which is what we talked about right early on, remember, about performance programs. You’ve got to be thinking about that whole program, and designing it with all the steps involved. One or two of those steps might well be a live training or an e-learning course or something much more formal, but you’ve got that whole program. The participants need the comfort of that blue line on the Sat Nav: if I keep following this, I will get to the destination, even if there are a few little deviations along the way. So that’s really important, that you have that whole thing stitched together.

And it also brings up another point that’s interesting. When you’re talking about a training event, never ever, by the way, say, “It’s a one-day training course.” You should say, “It’s a four-day program, one day of which happens to be in the classroom.” What you need to be doing is setting expectations of the level of involvement that people will have to have in order for it to be successful. As soon as you say, “It’s a one-day training,” they will put aside, and their manager will put aside, one day in their mind to do the training.

And they’ll do nothing afterwards because they think, “Well, I’ve been to the training. I’ve done my one day. It’s all done.” So you have to set expectations and say, “No, it’s a four-day program. One day of it’s in the classroom, and then there’s three other days of work to do over the next six months.” Spread it into tasks. In other words: here’s our entire plan, our entire journey, this is what we’ve got to do, this is where we’re going to get to, and this is what success looks like. So, that’s that whole planning piece.

And then at an organization level, there are those things up there. This relates often to the peers and supervisors that surround them, and then the way the supervisor might get involved with enabling them to do certain tasks by giving them the space and the time to do it, but also delegating tasks to them that mean they’ll have to use their newfound knowledge. So, the opportunity to apply what they’ve recently learned.

And then number 12 there, which we’ll talk about a little later too, is transfer expectation. What does the organization expect of a delegate when they come back from a training course? Because if you ask a lot of delegates coming out of a training room, “What would happen if you didn’t do anything with what you’ve just learned in the classroom?”, a lot of them would say, “Well, nothing really.” And that’s not acceptable. There’s got to be accountability and consequences if they go into a classroom and then just do nothing. I don’t see that as being acceptable. So, how can you set up a culture within the organization that automatically expects people to apply what they learn? Anyway, that’s just a brief overview of that.

So what I want to do now is dig into a couple of those, but we’ll come back to our performance system. And what we’re doing is saying, “We have a system. We’ve done our performance consultancy, we’ve done our diagnostics, and we have realized that we need to apply some training into that system.” But of course, in order to apply the training into the system, we have to make abundantly clear what we’re doing with that, and how the system needs to behave in response to that training. Because that’s effectively what you’re doing. You’re putting this extra input into a system in order to change the system, and thereby change the outputs. So, that gets us on to learning transfer. Clearly, if we’re going to put the training into the system, we want to transfer it effectively, so it does actually change the outputs.

There’s a chap called Robert Brinkerhoff, a professor at Western Michigan University. Averaged out over many years of his research, about one out of six people will fully implement, or satisfactorily implement might be a better term, what they did on a training course. In other words, we would consider that delegate a success. They’ve embedded and learned, and they’re operationalizing what we taught them sufficiently. Another two or three out of those six will practice, play a little bit, and probably slip mostly back to their old behaviours. Then a couple of people out of those six won’t even try.

There’s lots of other research, lots of other figures, and the figures vary a bit depending on the parameters and how the research was done, but they’re all low. I mean, look at them: five to 30%. These are appalling figures. My background is engineering, I’m a mechanical engineer originally, and if I was building machinery with this kind of failure rate, I’d be killing people on a regular basis. I don’t find that acceptable personally. So, that’s part of my mission: to get out there and say, “How can we make training and learning things actually produce better results?” by focusing on performance and output.

Brinkerhoff also created the Success Case Method, which is a way of looking at how to measure learning, or how to look at the results of it. That sits up there alongside the Kirkpatrick model, and other iterations of that. So let’s stick with our African theme. We’ve got some delegates, here they are leaving the training course, and there’s always one clown in every group, isn’t there? So there’s the delegates leaving. Now what has to happen next? On their blue line, their Sat Nav, their plan of their journey, what happens next? They have to start using and practicing what they’ve just learned on the training course. Because if they don’t, it’s not going to stick, it’s not going to be used, and it will just fade away, and then they might as well not have been on the course at all. So, the thing they have to start doing is practicing what they learned.

Just imagine for a moment that these little guys here were on a training course on how to bottle feed a tiger. So, they’ve got to go and practice bottle feeding tigers. You see, you can teach someone on a training course the temperature of the milk and the bottle, how to fill the bottle, even how to take a little tiger cub out of a cage and put it back again. But until you hold a little tiger cub in one hand and a bottle of milk in the other, and then join the two together, it’s pretty much impossible to really understand what you’re doing with that. Kind of cool. I’d like to do that, wouldn’t you?

Anyway, there’s so much of what we have to do, we have to experiment. We have to try it out. We have to be given the opportunity to do it. And the only person that’s really going to be able to do that for us out in the workflow is our manager, which is why the involvement of the line manager is so incredibly critical for that whole process. If the manager doesn’t get involved, it’s almost impossible for that learner to really do much with what they’ve learned. And yet so many managers don’t, and so many organizations, their learning culture is not mature enough for their managers to understand their role in that whole process.

So there are some simple things you can get managers doing, even just as a first step, but they have to be quite relentless in following up with people: what did you do? There are a couple of questions you can encourage them to keep asking the delegate: “What’s the best thing you’ve done since that training course with what you learned on the course?” And then, “What’s the next thing you’re going to do as a result of what you learned on the training course?” What’s the best thing? What’s the next thing? Just two very simple questions. If all managers did was that, and then followed up a little bit, it would make a vast difference, but there’s so much more managers can do as well. We don’t really have time to go into it here. I would get Ina’s book on that, because she’s got a lot of very practical examples, and ways to feed all that together into the system.

So let’s look at something else, because we’re going to scoot around this a little bit. The learning stack. This is a model I developed many years ago, based on some work that other people did. I was asked to do some work about reflection and learning, and how the two are joined together, and I wrote a bunch of articles for magazines and so on. And it occurred to me that learning really doesn’t happen without reflection. Reflection is always a component somewhere within the learning mix. I’ve said this on conference stages all around the world for many, many years, and no one’s ever challenged me on it, so I actually think it might even be true. So, if reflection is a fundamental, critical success factor within any kind of learning process, can we modify or manipulate the level of reflection, and thereby manipulate the amount of learning we’re getting? I think the answer is yes, and this is the model I developed as a result.

So if you look at the bottom stone there, mostly submerged, I call this subconscious reflection. This is level one, subconscious reflection, which sounds like a bit of an oxymoron, but what I mean is that practice makes perfect. If I keep doing something regularly, I’ll get better at it. I’m not even thinking about what I’m doing, I’m just practicing it, and slowly my internal unconscious targeting mechanism will lead me towards better output in what I’m doing.

Level two is where I do bring it up to conscious awareness, and I’m actually thinking to myself, what happened? What went wrong? What went right? How did I succeed? How did I fail? Who can do it better? How do they do it? It’s all that questioning approach internally.

Level three is where I externalize that to a journal or a diary, or maybe to the dog I’m taking for a walk, or a colleague at the next desk. And the reason that’s another, higher level of reflection is that, in order to put it out to the world in my own language, it has to engage a whole bunch of different neural networks and language centres in the brain to get it out there. That’s a much larger neural-network focus and activity, so that’s why, to me, creating understandable language sits on a higher level of reflection.

At the fourth level we’re again externalizing, but now we’re thinking: what I externalize, someone’s going to judge. In other words, it might be my boss. It might be a coach. It might be something I’m going to put on a blog. And so, I’m going to think twice before I put it out there, because I have an awareness that what I’m putting out there is going to be judged in some way.

And at the fifth level, you know that old adage, the best way to learn something is to teach it? I don’t think that’s completely true. I tend to work on the basis that the best way to learn something is to prepare the lesson plan to teach it, because that’s where I have to think about it really carefully, in a very different way to the way I normally would. The delivery of that lesson plan probably doesn’t add much to my learning. It might well, obviously, for the people I’m delivering it to. Having to think about something differently, in a way that I wouldn’t normally, helps me learn it in much more depth. Those of you who are doing instructional design will find that very familiar.

So, that’s the learning stack. And this comes about, if I am going to feed that tiger cub, after I’ve done it, how do I reflect on it? Who do I talk to about it? And can I teach someone else to do it? These are all things that you might like to feed into that learning transfer journey, as a way of increasing the quality and quantity of reflection. The other thing you can do with this, by the way, is for any learning intervention you’re doing, whatever it is, you can ask, “Where is this getting us to on this chart?” If you just give people a PDF to read, what you’re really doing is getting them up to level two, which is really why people reading PDFs don’t do very much, or even e-learning if it’s relatively straightforward and boring. Anyway, that’s another thing.
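For those who like to formalize things, the five levels can be written down as a simple lookup for auditing interventions against the stack. The level names follow the talk; the example interventions and the levels assigned to them are my own guesses, purely for illustration:

```python
# A minimal sketch of the five-level "learning stack" as a lookup table,
# used to ask "where does this intervention get us on the chart?".
# Level names follow the talk; the example mappings below are hypothetical.

LEARNING_STACK = {
    1: "subconscious reflection (practice makes perfect)",
    2: "conscious reflection (what went right, what went wrong?)",
    3: "externalised reflection (journal, colleague, the dog)",
    4: "judged externalisation (boss, coach, public blog)",
    5: "preparing to teach (building the lesson plan)",
}

# Hypothetical audit of some interventions against the stack.
interventions = {
    "read a PDF": 2,
    "write a learning journal": 3,
    "present back to your team": 4,
    "prepare a session to teach peers": 5,
}

# List the interventions from lowest to highest reflection level.
for name, level in sorted(interventions.items(), key=lambda kv: kv[1]):
    print(f"level {level}: {name} -> {LEARNING_STACK[level]}")
```

The point of a table like this is simply to make the audit question concrete: if everything in your program sits at level two, the stack suggests you are leaving most of the available reflection on the table.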

So we learn by doing, and by thinking about the doing, and that tends to start focusing on the idea of delivering activities to people, rather than delivering lots of content. We’ll have a look at a right way and a wrong way to deliver activities. Because with this activity delivery, I hear it so much from people: “Well, we’re giving them all these things to do after the training, but they’re not doing them.” I’m sure that’s resonating with some people. So in this case, I fall back on a behaviour model by a chap called B.J. Fogg, who’s a professor at Stanford. Amongst the other things he does, he’s a great guy, fascinating work, he talks about a model in which, for a behaviour to occur, three things must happen at the same moment in time. There must be a prompt, which is the P, which is the call to action. There must be a sense of an ability to do it, and some motivation.

So just imagine for a moment that I gave you a prompt, which was, “Please fetch me a glass of water.” So that’s some kind of prompt. Your immediate internal reaction is to say, “Well, can I do that? Is that hard to do? Easy to do? Is that going to be frustrating?” And so on. In other words, “What’s my ability to do it?” Then, hard on the heels of that comes the next bit of thinking, which is, “What is my motivation to do it? Is it low or is it high?” So what you’ll get is the idea that prompts that are focused up in the top right hand corner of this graph will succeed, and prompts which sit down in the lower left corner will fail. So we’ve got an action line.

And so often, what I see when people have activities that follow a training and say, “We’re doing learning transfer, we have all these activities,” is that the prompts are all below the action line, and people just aren’t doing them. Usually it’s because those activities and tasks are too big. You’ve got to deliver lots of little activities, often, in order for this to really work. Then each one looks individually easy to do, and you don’t need massive motivation in the moment to go and do it. There’s a whole lot of stuff around this: how to design activities, how to deliver the prompt in a certain way to increase motivation at the point of prompt delivery, and so on. We just don’t have time here today. Sorry about that, but if there are any questions, I’m happy to fill that in.
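As an aside for the more technically minded, the action-line idea can be sketched as a toy calculation. The threshold value and the multiplicative way of combining motivation and ability here are my own illustrative assumptions, not Fogg’s published formula; the sketch just shows why breaking a big task into small, easy ones moves it above the line:

```python
# Toy sketch of B.J. Fogg's B = MAP idea: a behaviour happens when a
# prompt arrives while motivation and ability together sit above the
# "action line". The threshold and the multiplicative combination are
# illustrative assumptions, not Fogg's actual model.

ACTION_LINE = 0.25  # hypothetical threshold representing the action line

def behaviour_occurs(motivation: float, ability: float, prompted: bool) -> bool:
    """motivation and ability are in 0..1; with no prompt, no behaviour."""
    if not prompted:
        return False
    return motivation * ability > ACTION_LINE

# A big post-course task: even with a prompt, it feels hard (low ability),
# so it lands below the action line and gets skipped.
print(behaviour_occurs(motivation=0.4, ability=0.3, prompted=True))   # False

# The same work broken into one small, easy activity: the same modest
# motivation is now enough, because ability is high.
print(behaviour_occurs(motivation=0.4, ability=0.9, prompted=True))   # True
```

This is exactly the design lesson above: you rarely control a delegate’s motivation weeks after a course, but you do control how big each prompted activity is.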

This is another aspect. We talked about cultural expectations earlier. And the reason there’s a magnet picture in there is that you’ve probably done that thing where you’ve got iron filings on top of a piece of paper with magnets underneath. I often use this as a metaphor for culture or values in an organization. You can’t see culture, you can’t see values, but what you can see is the way that people are behaving. I can’t see the magnet, but I can see the way all the iron filings are lining up in response to it. And of course, the obvious question beyond that is, what if there were two or three magnets under that paper? You’d end up with a mess. So this is a great way to talk about congruence and uni-direction of culture in an organization.

But at this point, what we’re looking at culture for is this: what are the transfer expectations in the culture? And this was lever number 12 of the 12 levers, by the way: the cultural expectations. So following this webinar, what are the expectations on you? Will anybody notice whether you do something or not following this webinar? I often find this. I talk to organizations about learning transfer and I say, “You know what your biggest challenge is going to be? It’s going to be learning transfer, because I can come and teach this stuff, I can come and help you with it, but unless you go and do it, you’re going to get nothing from it.”

So you need to go and do something. Right now, this is my challenge to you: go and do something with what you’ve got from the webinar. Buddy up, get together with someone else who’s on the webinar and say, “I will notice what you do and don’t do, and we’ll talk about it.” So how are you going to hold yourself accountable? I’d suggest you get someone else who is on the webinar, and hold each other accountable. And are there forces that are stopping you from doing anything with this stuff? If so, I suggest you talk to Joe about it and say, “Hey, listen, I’d love to do this but I can’t because of X, Y; here are the barriers.”

Okay, so that’s just a few highlights. There are our three elephants: performance diagnostics on the left, informal learning in the middle, and the big guy, learning transfer, on the right. And I say the big guy because he’s the one that typically costs the most money, and actually he’s one of the easiest ones to solve, quite frankly. But what’s interesting is that beyond these three, there’s a fourth one. There’s a sneaky elephant hiding in the bushes, and he’s huge, and to me this is the brand of L&D, the reputation of L&D. Because if you don’t get those first three elephants right, and embed them into your strategy, and make sure your learning and development strategy includes them, then this fourth sneaky elephant will come bouncing out of the bushes and cause big problems.

Because you see, if the brand of L&D is such that it’s not taken seriously by the rest of the organization, then you’re going to find it very difficult to deliver an effective service. They will have a very limited view of what you can do, and I’m sure you can do so much more than what they currently ask you for. So this is the brand of L&D; it’s a whole other branch of work. I think all four of those elephants need to be line items in your L&D strategy, so I would recommend you go back to your strategy and ask, “Can I find all four elephants in there? Are we doing something about them all? If not, why not?” So there’s my challenge to you.