Training Evaluation – are you getting it right?

Most of us devote significant effort to collecting feedback on and assessment of our training. We seek to improve our delivery, our materials and the end-user experience, while measuring business impact to support the case for future training resources. For now, let’s think about the latter.

Of course, any business wants a clear picture of the effect of all of its initiatives, and training is no exception. But this raises the question: what do managers actually want us to tell them?

What is really interesting is that the very things executives report their organisations measure now are the things they are least interested in recording (see Figure 1: What CEOs Really Think). Conversely, their top three priorities are things they report are not currently measured.

Measure | We Currently Measure This | We Should Measure This in the Future | My Ranking of the Importance of This Measure
Inputs: Last year, 78,000 employees received formal learning | 94% | 85% | 6
Efficiency: Formal training costs $2.15 per hour of learning consumed | 78% | 82% | 7
Reaction: Employees rated our training very high, averaging 4.2 out of 5 | 53% | 22% | 8
Learning: 92% of participants increased knowledge and skills | 32% | 28% | 5
Application: At least 78% of employees are using the skills on the job | 11% | 61% | 4
Impact: Our programs are driving our top five business measures in the organisation | 8% | 96% | 1
ROI: Five ROI studies were conducted on major programs, yielding an average ROI of 68% | 4% | 74% | 2
Awards: Our Learning and Development program won an award from the ATD | 40% | 44% | 3

Figure 1: What CEOs Really Think, Jack J. Phillips, chairman, ROI Institute

 

It’s no secret that effective and holistic measurement can be tricky. For instance, as trainers we’re all familiar with the Kirkpatrick model of evaluation, and we’re also acutely aware of just how difficult it is to achieve more than level 1 or 2. As a result, even well-known approaches such as this can fail to achieve real outcomes in practice.

Sometimes measurement is just too difficult, or we’re worried about what the evaluation might uncover about our design. Perhaps the process of measuring is too expensive; few of us have the resources to measure what we would like, the way we’d like to measure it.

The moment we select our metrics, the compromises begin. We may want to conduct behavioural change analyses, but most often the only practical option is post-training feedback surveys. Yet if we accept that evaluation of training is a fundamental part of good learning design, it must be a priority.

We have to pick the right metrics to measure. Easier said than done, I know. After all, few of us are data analysts. We might know what we want to see, but working out which data will provide that information is not so simple. And even after we have selected our metrics, will they support our arguments, given that our superiors might have chosen a different set?

So, what to do?

My advice – think like an Instructional Designer and do your action mapping!

We begin with the end in mind. Ask questions such as:

  • What does success look like?
  • How will things look if the learning has been a total success?
  • What would constitute an outstanding ROI, or what KPI do we want to see to establish success?

Isolate the intended impact of the learning and map backwards from there: what will success look like, how will you measure it, and what behaviour changes lead to it? From this you can determine a baseline against which to measure growth and development, and a way to measure change relative to that baseline.

If we work back from our end goals to determine our metrics, our choices become clearer. For example:

  • If a course is intended to reduce the time taken for people to perform a task, rather than measure their score on the quiz, why not ask participants to come to class with a measure of how long on average that task took in the preceding week, then ask them again a week after the course?
  • If the course is focussed on improving quality, can we collect data from the complaints desk, the support department or returns processing department before and after training?

What’s interesting is that these kinds of data may well be collected already in your organisation. A little lateral thinking might allow you to access metrics that allow you to report “This course cost us $1,500 to deliver and returned $15,000 to the business the following month”. Imagine what that does for a training budget application!
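To make the arithmetic behind a claim like that concrete, here is a minimal sketch, assuming you have collected average task times before and after the course (every figure below is invented for illustration):

```python
# A hedged illustration of a before/after time-saved ROI calculation.
# All inputs are invented; substitute your organisation's real figures.
minutes_before = 45        # average task time in the week before training
minutes_after = 30         # average task time a week after training
tasks_per_month = 20       # how often each participant performs the task
participants = 25
hourly_cost = 60.0         # fully loaded cost per staff hour ($)
course_cost = 1500.0       # total delivery cost ($)

hours_saved = (minutes_before - minutes_after) / 60 * tasks_per_month * participants
monthly_return = hours_saved * hourly_cost
roi_percent = (monthly_return - course_cost) / course_cost * 100

print(f"Hours saved per month: {hours_saved:.0f}")            # 125 hours
print(f"Return in the first month: ${monthly_return:,.0f}")   # $7,500
print(f"First-month ROI: {roi_percent:.0f}%")                 # 400%
```

The point is not the code but the shape of the argument: a baseline, a post-training measurement, and a conversion into dollars that a manager can compare against the cost of delivery.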

Most business leaders want success stories and to know the value of those successes. While a quiz on completion asking whether trainees had fun is more straightforward, the measurement is of little value. Learning is a means to an end, and training is inevitably prioritised using the results you collect on the metrics you select.

Approaching the task like this allows us not only to build a stronger case for resources but also to improve and implement successful training programs. Every part of your organisation is tasked with measuring its level of success and improving its contribution over time. Learning and development is no exception – quality evaluation aligned to clear goals is a core part of our job.

Scenario-based learning – a quick guide

As a former lecturer, I say this with a degree of sheepishness, but I hate being lectured to! I want my learning to be individual and to fill me with curiosity. It should allow me to explore, collaborate and discuss.

When I design eLearning and professional training, I always try to remember that – to treat others as I’d wish to be treated. This is where scenario-based learning is so crucial. Through scenario-based learning, participants can confront challenging, real-world decisions. They can be provided an opportunity to explore in an environment where mistakes can be made safely. They can replicate errors they may be making on the job and can see the consequences of their mistakes.

That’s the theory – but why is it so difficult to make scenarios that don’t make us cringe?

My view is that there are a number of reasons.

Apply Instructional Design Principles

Don’t regard scenarios as filler – treat them as you would any other core content, and ensure there is a well-defined problem. What is the issue, how does it manifest, and what are the behaviours you need to change? All very standard ID – yet too often overlooked when we make scenarios.

After identifying and analysing the problem in need of a design solution, if you can’t explain why a scenario is the ideal approach, question why you’re using one.

To help you answer that, remember that scenario-based learning is generally used to address observed or anticipated problematic behaviour rather than knowledge gaps. The latter tend to lend themselves to job aids and other tools.

If you build a scenario to address a knowledge gap, your characters will likely carry too much explanation in their dialogue – their words will sound contrived as you make them explain concepts to your audience in the guise of a fictitious discussion with one another.

On the other hand, if your issue involves modifying behaviour and relates to complex decision making, read on – you have a candidate for scenario-based learning!

Our first step is to decide which path to take, remembering that there are two broad categories of eLearning scenario.

Path 1: Mini scenarios
Mini scenarios consist of just one question that presents a realistic, challenging workplace decision. The feedback reveals the realistic consequence of each response, whether that consequence is immediate or far in the future. Mini scenarios work well when you would like learners to practice the same scenario or task with different variables.

For example, if you want people to practice the correct response to emergencies, you could repeat a similar scenario with varied circumstances and the same or similar response options.

… “okay now let’s imagine the polar bear is armed with an AK47 assault rifle – do you still climb the tree to escape?”

On the plus side, mini scenarios are significantly easier to write than branching scenarios.

The disadvantage is that they can be limiting, in that the same basic situation is repeated with one change added. They are best used for less convoluted behavioural problems.
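To make the shape concrete, here is a minimal sketch of a mini scenario expressed as data: one decision, with each option mapped to the realistic consequence the feedback would reveal. The situation and wording are invented:

```python
# One question, one set of options, one realistic consequence per option.
# All content is invented for illustration.
mini_scenario = {
    "situation": "A customer calls, angry that their order arrived damaged.",
    "prompt": "What do you do first?",
    "options": {
        "Apologise and ask them to describe the damage":
            "They calm down, and you gather everything needed for a claim.",
        "Transfer them straight to the returns department":
            "They repeat their story to a second person and grow angrier.",
        "Offer an immediate refund":
            "The call ends quickly, but the fault goes unreported and recurs.",
    },
}
```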

 

Path 2: Branching scenarios
For some reason this gets all the press. I’m often asked to include it in work I prepare for clients, but I wonder if those clients always ask themselves why they want it. Certainly “choose your own adventure” seems fun, but I don’t think that’s a reason to use it. Too many “fun” things in training come across as cringeworthy when they are more elaborate than they need to be. It’s easy to forget just how fixated on purpose learners are!

Branching scenarios, otherwise known as choose your own adventure stories, contain multiple storylines. They present a series of opportunities for participants to make decisions and are built such that the responses determine what happens next. Typically, these scenarios provide multiple layers of backstory and involved narratives to inform the questions.

It follows that branching scenarios are most appropriate for practicing decision making in ambiguous situations, gathering and applying information, and recognising and recovering from mistakes. They create space for participants to explore multiple realities and see a problem from multiple angles.

The downside is that the format tends to be dense with backstory, which can be difficult to plan and navigate. It can become tedious if learners start to ask why all that detail is needed.

Branching scenarios are designed using a map; we work backwards from the ends we have in mind through the various pathways that lead to them. The master of the branching scenario is Geoffrey Robertson. His hypotheticals, based on an educational technique used to train university law students, became a popular TV series. Every branch is carefully planned. The host takes us through a story that makes sense and allows the panellists to be the experts. They surprise him with responses, yet he is ready. Along the way we get humour and word pictures. What we don’t get is explanation from the narrator: his job is to pose the questions and know the consequences. The facts come from the participants.

If you aren’t prepared to invest that level of planning, don’t expect your branching scenario to work well.

Many scribble the pathways out on paper or a whiteboard. There are also software options – a quick search will reveal a number of online storyboarding tools, including Twine and XMind.
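If you prefer to sketch the map as data before opening a tool, a branching scenario reduces to nodes and choices. Here is a minimal, hypothetical sketch (every node name and line of story text is invented):

```python
# A branching scenario map as plain data: each node shows some story text,
# and each choice names the node it leads to. An empty choice dict marks
# an ending. Everything here is invented for illustration.
story = {
    "start": {
        "text": "A client calls: their shipment is a week late.",
        "choices": {
            "Promise a delivery date on the spot": "overpromise",
            "Check with logistics before answering": "check_first",
        },
    },
    "overpromise": {
        "text": "Logistics can't meet your date. The client is angrier than before.",
        "choices": {"Apologise and reset expectations": "recover"},
    },
    "check_first": {
        "text": "You call back with a reliable date. The client is reassured.",
        "choices": {},
    },
    "recover": {
        "text": "Trust is partly restored, but the account is flagged as at risk.",
        "choices": {},
    },
}

def play(node="start"):
    """Walk the map from a starting node, asking the learner to choose."""
    while True:
        step = story[node]
        print(step["text"])
        if not step["choices"]:
            break
        for i, choice in enumerate(step["choices"], 1):
            print(f"  {i}. {choice}")
        node = list(step["choices"].values())[int(input("Choose: ")) - 1]
```

Mapping the structure first, in whatever medium, is the discipline that matters; the tooling is secondary.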

Don’t forget that this will be a complex piece. Be kind to your learner and allow them to navigate back and forth to refresh their memory of the story to date when they need to.

Writing the scenarios
The English essayist Percy Lubbock’s work The Craft of Fiction offers us many insights on storytelling. One of the most important for scenarios is the maxim ‘show, don’t tell’. 

Be descriptive. Use word-pictures to engage the learner. Inevitably this will require more text, but it will allow readers to draw their own conclusions. More importantly, it engages the imagination. Whenever you find yourself making a factual statement, ask: can I reword this as an exchange between my characters and replace it with dialogue?

While we’re on the subject of dialogue, don’t forget to keep it real. Write the dialogue as you would say it yourself – never be afraid of using contractions (isn’t, don’t, can’t) or informal words. You might also vary the sentence lengths to switch up the rhythm.

That said, the other extreme is cringeworthy too. Always consider what people would be likely to say in the real world. Having characters articulate something just to build the plot feels artificial. Dialogue’s not for narrative – its function is to assist in characterisation and to keep the reader’s interest.

Quick tips 

  • Provide responses that include mistakes concealed as reasonable choices.
  • Add time and interpersonal pressures to mimic realistic challenges in decision making.
  • Consider how best to run these scenarios in the given context. For instance, would it be most effective for learners to engage with them online or in person, in isolation or as a small group? It may be more fitting to have learners act out a scenario, or even design their own scenarios.
  • Consider whether the learners will require a debrief or further discussion.
  • The best scenarios have no ‘correct’ or ‘incorrect’ decisions. Each decision has a realistic positive or negative consequence. Showing these consequences encourages the learner to draw conclusions about the effect of their decision, rather than dictating explicitly ‘right’ or ‘wrong’.

Insider advice: eLearning authoring tools

I’m often asked about authoring tools – either for recommendations or whether we can build courses using them (the answer is yes – we use all of them, by the way). Since we have some experience with them all, I thought I’d take the opportunity to provide a quick update on where the world’s at with authoring tools.

A word of warning
One piece of crucial advice: if you buy an authoring tool, you’re locking yourself into that tool. Permanently. There is no way to take a course built in one of them and import it into another. We are occasionally asked for “the Storyline” by a client who elected to buy Articulate years after we developed their course. Sorry – no can do. We built the course in the tool you used at the time! If you now have another tool, you can only use it for new courses.

The players
Firstly, the market status. Perhaps not surprisingly, the big two – Adobe Captivate and Articulate Storyline – continue to dominate the market, with Trivantis Lectora a solid third. However, a range of alternatives such as DominKnow, Axonify and TechSmith are carving out solid niches.

Alternatives
For the tech-savvy there is a really good, free option in Adapt. It has no xAPI support at all and (unless you’re able to write your own plugins to extend the tool itself) will not create sophisticated interactions. On the other hand, if you can program and you want courses with a beautiful interface that’s highly responsive on mobile devices, Adapt may be an option.

Some LMSes have built-in authoring tools. Avoid them. They are add-ons to a system designed for something else, and they create courses that are very limited in functionality and suffer from technical limitations. The big issue is that they, like other authoring tools, lock you in – but in the case of “built-in authoring”, you’re locking yourself into a tool that is limited and will not grow with you. Not only that, you are also locking yourself into retaining that LMS forever!

The review
Articulate has taken an interesting direction recently. As Storyline cannot produce responsive content, it has proven all but impossible to create good mobile courses with it. The company’s response has been to sidestep the issue with a separate, mobile-compatible product: Rise. The bad news is that courses currently cannot be moved from Storyline to Rise, and Rise is somewhat more limited in functionality than Storyline (at the moment at least). Both Storyline and Rise have online cousins – a subscription to Articulate 360 provides access to cloud-based versions of both. Be aware, though, that the installed versions and the cloud versions are not the same. In fact there are rumours that Rise 360 may lag Rise in functionality in the future. Like other cloud tools, Rise 360 stores your content on a server owned by Articulate.

Both Captivate and Lectora have a reputation for being more difficult to operate than Articulate but have the capability to create more complex modules such as by including VR elements or complex interactions. Don’t buy these tools unless you’re ready for a steep learning curve!

All three of these tools claim to be xAPI ready – a claim that irritates me.

xAPI is not supposed to be a way to do exactly what you do with SCORM. To be truly xAPI aware, I want to see a course measuring how people interact with its interface, collecting statistics on the way they play, rewind and fast forward AV elements, providing granular details on people’s learning.
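To give a flavour of what that granularity looks like, below is a sketch of a single xAPI statement recording that a learner skipped ahead in a video. The verb and extension IDs follow the community xAPI video profile; the learner and activity details are invented:

```python
# One illustrative xAPI statement: "Jane skipped from 0:30 to 1:35 in the
# welcome video". The IDs come from the xAPI video profile; the actor and
# the activity are invented.
statement = {
    "actor": {"name": "Jane", "mbox": "mailto:jane@example.com"},
    "verb": {
        "id": "https://w3id.org/xapi/video/verbs/seeked",
        "display": {"en-US": "seeked"},
    },
    "object": {
        "id": "https://example.com/courses/induction/welcome-video",
        "definition": {"name": {"en-US": "Welcome video"}},
    },
    "result": {
        "extensions": {
            "https://w3id.org/xapi/video/extensions/time-from": 30.0,
            "https://w3id.org/xapi/video/extensions/time-to": 95.0,
        }
    },
}
```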

SCORM is able to capture scores, pass/fail and responses to questions. The easiest way to pretend to be xAPI compatible is to collect all that SCORM data and transmit it twice – once in SCORM and once in xAPI. That’s what Lectora, Articulate and Captivate currently do. It brings no value other than marketing value.

I hope this changes over time, but until then anyone wanting the true value of xAPI needs another tool such as DominKnow, which was developed in the wake of the xAPI standard. DominKnow is easier to use than the others here, and it produces beautiful courses that are truly responsive. The downside is that it doesn’t offer some of the more complex features you find in Lectora, Captivate or Articulate – for example, its ability to let programmers set variables and add code is limited. You would probably struggle to create a complex educational game with DominKnow.

Note – Global Vision Media is a DominKnow partner

 

SCORM 2004 flopped. xAPI won’t.

eLearning courses have multiple standards to choose from for sending results.

  • The latest is xAPI, released in 2012.
  • Prior to that we had four editions of SCORM 2004, starting in 2004.
  • Before that was SCORM 1.2 which dates from 2001.

And despite that, the oldest of the standards above (SCORM 1.2) still dominates. A full 75% of course developers don’t bother with SCORM 2004 at all, and of those that do, 97% use SCORM 2004 only on a technicality – using it to deliver nothing more than SCORM 1.2 results.

In summary, of all the courses created last year, fewer than 1% used SCORM 2004 in the way it was intended (25% × 3% ≈ 0.75%). Everyone else needed only what SCORM 1.2 was able to achieve.

In short, SCORM course development today continues to use thinking that dates from the year George W Bush was sworn in, Apple opened its first store and the latest PCs came with Windows XP.

You might think we’d have moved on! We certainly had the chance to. SCORM has had one major version and four releases since then, but it was hamstrung by the fact that the first release (SCORM 1.2) was actually pretty good at what it did. The things SCORM 2004 allowed us to do were:

  1. Ask the course to look at the learner’s results from earlier questions and adjust the course based on their responses.
  2. Tell the course to re-order its content based on a range of (quite complex) options.
  3. Remember whether a person actually completed, in addition to recording their result.

It allowed us to determine:

  • This person completed – did they also pass?
  • Does the learner already know the next section and should we skip it?
  • Will we permit the learner to choose their own adventure as they traverse the course?

Nice, but hardly transformative.
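For the technically minded, the completed-versus-passed distinction shows up directly in the run-time data model. A rough sketch of the contrast – the field names come from the SCORM specifications, the values are invented:

```python
# SCORM 1.2 folds completion and success into a single field; SCORM 2004
# separates them and adds a scaled score. Values are invented.
scorm_12_report = {
    "cmi.core.lesson_status": "passed",    # one field carries both ideas
    "cmi.core.score.raw": 82,
}

scorm_2004_report = {
    "cmi.completion_status": "completed",  # did they finish?
    "cmi.success_status": "passed",        # did they also pass?
    "cmi.score.scaled": 0.82,
}
```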

Not surprisingly it was met with a collective “meh” and simply never took off. So if people were not tempted to move from SCORM 1.2 to 2004, what makes xAPI any different? Why am I so confident that it will be adopted?

The answer lies in the fact that the step from SCORM 1.2 to 2004 was so subtle, while the step from SCORM to xAPI is a game changer.

In 2001, it was impossible to imagine the way we use technology today. “CBT” (Computer Based Training – now termed eLearning) was mostly used to deliver automated tests. What little teaching there was was rudimentary, but it was a convenient way to complement our 20th-century learning. The very thought that a computer might simulate things, that we might accredit anyone using results collected automatically, or that a computer would play a central role in a person’s formal training was frankly absurd.

And at one level, that’s still the case. Whether we’re measuring a pilot’s ability to fly an aircraft, whether a paramedic can resuscitate a mannequin, or how frequently loaves of bread emerge burned from the oven, vast sectors of the workforce can only be adequately assessed and trained in the real world. What’s changed is that we now want computers to play a role in all of these. xAPI is permitting us to use IT to administer training and assessment that is mission critical and could not be handled digitally before.

It is easy to imagine xAPI becoming mandatory for professions; but given how few professions can be taught in a purely online way, mandating SCORM is all but unimaginable.

Indeed, the US Department of Defense has mandated xAPI for its distance learning, and it is why all nurses across the US are now trained using xAPI. Last year 2.5 million xAPI statements were sent to SCORM Cloud – a doubling in five years. ADL (the creators of SCORM and the group who commissioned xAPI) have confirmed that SCORM will have no further releases; their support is behind xAPI.

Compare the list of questions SCORM 2004 enabled (above) with those that xAPI introduces:

  • When the learner watched that video, did they actually watch it, or did they scrub or fast-forward through it? If they did watch it, where did they get the most value?
  • How safely did the person drive today?
  • When people encounter my interface, how do they typically interact with my UI?
  • After they did my training, were they better at their jobs?
  • Looking at their use of the organisation’s systems, does this person require training and, if so, which course(s)?
  • What training delivers the best ROI to the organisation? How much is that ROI?
  • How did the learner go on a course on some other LMS we can’t access?

xAPI is not setting about to improve on what we have now. It’s rethinking what we do entirely. That’s why its adoption will be explosive and not at all like its predecessors.


Trust me, I’m a trainer

A recent three-nation survey on Learning Transfer conducted by a consortium led by the University of Sussex made for some salutary reading.

The survey looked at the US, the UK and Australia and found that Australian employers were the least likely to insist on staff undertaking learning. Australians were much more likely to report “regulatory compliance” as the primary goal of training and conversely, while one in eight American respondents listed the primary goal as to “provide engaging, relevant and well received learning opportunities”, not a single Australian did.

As for outcomes, while most US respondents and almost half of UK respondents viewed training investments as “generally beneficial”, only 28% of Australians felt the same way. Most alarming of all, well over a third of Australians can’t say whether the training offered to them improves their job performance at all, and are left wondering why they are made to do it in the first place.

Australians also reported that training is delivered mainly to comply with external regulations rather than to improve their performance. Even management questions the value, to the point that Australian management is the least likely to back training through policy.

Given those findings, it’s little surprise that Australians are far less enamoured of their trainers. While L&D staff in the UK and US are trusted to deliver effective learning experiences by 72% and 60% respectively, a mere 38% of Australians have that view.

If that does not point to a trust problem, I don’t know what does…

The benefits of building that trust are manifest; on average, employers outside Australia were prepared to devote $1,930 per staff member to training last year, compared to $735 in Australia. With a third of the resources available to deliver outcomes, it’s small wonder Australian L&D can’t build the trust it can elsewhere. It’s a catch-22.


The good news is that the same survey tells us that this is much less the case elsewhere and, by implication, that it does not have to be that way. But the job is ours; no one else is going to break the cycle, so the task falls to us trainers.

We must work to change the perception of our work before we’ll get more resources with which to do it.

Impossible? – No. Just ask the British and Americans…

Regardless of where on the cycle you’re standing, the exit ramp is trust.

Fortunately, building trust doesn’t have to involve dollars. Time and effort, maybe, but not budget initially. It starts with introspection; for trainees and management to better trust L&D to deliver real outcomes for them personally, L&D needs to be prepared to question what it does now:

  • Does your training prioritise the needs of the trainee, or the needs of external regulators?
  • Are the courses you deliver crafted to maximise results or are they the least expensive/most convenient third party offerings?
  • How often do you measure the knowledge gaps in your organisation?
  • Does the pace of change to your courses match the pace of change in your industry?
  • Do you undertake effective measurement of outcomes and continually improve the material in response?
  • Is the delivery mode driven by knowledge of the most effective format for the situation or is it driven by what is possible?
  • In the shoes of your trainee, how eagerly would you participate and why?
  • When was the last time you asked trainees and management for their goals and in particular for their training priorities?

I do get it – you have always wished you could do more of this but it’s not possible with the resources you have. That’s not a reason not to ask these questions. If others are expected to undergo a needs analysis, why not L&D? Am I right?

Trust me, solutions will emerge; for example:

If your original objectives no longer align with management’s priorities but recreating everything is absurd, look through the material to find what can be adjusted and drop anything that no longer serves the organisation’s objectives. Use the time you’d otherwise spend delivering the rest to survey people at all levels for their goals, identify the single most important unmet need, and develop one item only that addresses it. Build trust by involving those who called for it to the extent you possibly can, and in so doing demonstrate your commitment to fully meeting their needs.

If your material’s badly dated but you don’t have the resources to update it properly, why not learn to make video on a shoestring so you can interview some of your awesome in-house experts and turn the results into microlearning? Whenever you deliver it, the recognition of those experts on which it’s built will build trust.

If you are delivering primarily compliance training, it’s natural for people to see your work as an extension of a regulator rather than as aimed at maximising their potential. No matter how vital that compliance training is, it addresses problems you hope will never arise. Inevitably it is perceived (wrongly) as pointless unless it’s carefully contextualised by building more practical material alongside it. If you don’t have the resources to offer more structured training in areas your team does value, offer unstructured options instead.

If your training is dry and boring but you don’t have the ability to redevelop it the way you’d like, read this article on crowd sourcing your engagement. Build on that over time until one day the case for it is so solid, the business case for a full Learning Experience Platform is self evident.

Over to you….

 

Learning Cultures – an accountant’s view

Deloitte Global’s 2019 survey of Human Capital Trends was pretty stark – 86% of respondents believe that in the face of rapid change in the workplace, they must reinvent their ability to learn.

However, only 50% of those respondents reported that their L&D department was evolving rapidly, and 14% said it was too slow. Just one in nine respondents rated their learning culture as excellent, while 43% said it was “good”. Almost half were of the view that their workplace training culture was inadequate. Add the finding from an earlier edition that almost half of millennials were planning to resign due to inadequate training, and a clear concern for employers emerges.

Deloitte’s view is that organisations must create a learning culture that is broad in scope:

In a competitive external talent market, learning is vital to an organisation’s ability to obtain needed skills. But to achieve the goal of lifelong learning, it must be embedded into not only the flow of work but the flow of life.

That certainly sounds like an ambitious goal, but in fact, as I’ve written previously, it’s often surprising just how much impact you can get with relatively minor alterations, or merely by elevating practices you already undertake.

Beyond that, there are emerging technological solutions such as LXPs that are designed to create learning cultures (rather than merely deliver training). Meanwhile, xAPI is helping organisations coordinate multiple forms of training experience throughout their workplaces – another key to creating a learning culture (and often free to implement).

Deloitte coined the term “DevWork” to highlight this trend of combining learning with work. The term derives from DevOps, which refers to the way IT teams now extend their work beyond development and into operations.

Another clue comes from Deloitte’s finding that there is a rapidly growing consensus that people want to be trained not only by L&D but also by the business itself, suggesting a desire for a more holistic approach to learning.

So, what might an organisation do to achieve this DevWork?

It’s actually not that hard – a learning culture is something that grows naturally if you nurture it. The answer I’ll offer is not very different from what I suggested when discussing 70:20:10 (which, after all, is a closely related concept).

 

Train in context:

  • Use technology such as LXPs to encourage peer sharing
  • Look at options for social learning
  • Encourage knowledge sharing and ensure that ideas are captured, shared and, above all, recognised
  • Build mentoring relationships through a formalised mentoring program
  • Empower (with resources) and encourage supervisors to take a more proactive training role
  • Create a repository of microlearning (perhaps crowdsourced from your workplace SMEs) and use xAPI to connect it to the points of need in the workplace

 

Train often:

  • Augmented and virtual reality is actually not an expensive option these days – you can achieve more than you think with the tools you have to hand and a few tips (a topic for another edition).
  • If you have an xAPI system in place, connect it to the tools around your workplace to check for understanding on the job and trigger microlearning interventions when people use those tools (see the sketch after this list)
  • Develop a plan to ensure that training becomes ubiquitous – people receive training in small doses throughout the day until it becomes “the norm” to be learning.
  • Look at ways to better use smartphones for instantaneous training.
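As promised above, here is a hypothetical sketch of such a trigger: it polls an LRS for “failed” statements about a workplace tool and flags a matching micro-module for each struggling learner. The endpoint, credentials and module name are invented; the query parameters and version header come from the xAPI specification:

```python
# Poll an LRS for recent "failed" statements about a workplace tool and
# flag a microlearning module for each learner who struggled. The LRS
# endpoint, credentials and module name below are invented.
import requests

LRS = "https://lrs.example.com/xapi/statements"
AUTH = ("lrs_user", "lrs_password")  # assumed basic-auth credentials

def recent_failures(activity_id):
    """Fetch statements where someone failed an activity."""
    resp = requests.get(
        LRS,
        params={
            "verb": "http://adlnet.gov/expapi/verbs/failed",
            "activity": activity_id,
        },
        auth=AUTH,
        headers={"X-Experience-API-Version": "1.0.3"},
    )
    resp.raise_for_status()
    return resp.json()["statements"]

for stmt in recent_failures("https://example.com/tools/invoicing-system"):
    learner = stmt["actor"].get("mbox", "unknown learner")
    # In a real system this would call your LMS/LXP enrolment API.
    print(f"Assign 'Invoicing basics' micro-module to {learner}")
```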

 

But there had to be a catch, right?

If you really want to nurture a learning culture, show you mean it. As an employer I always make training a core part of performance appraisals. Training and learning goals are regarded as KPIs. We measure their attainment and we regard them as no more or less important than any other set of agreed objectives. An employer with a genuine learning culture should be prepared to link performance incentives to training, for example.

As with many things, money is the real test of your level of resolve. If your organisation is genuine in its commitment it surely sees this as a core business driver that flows to its bottom line. It will incentivise people for their contributions and it will see well directed training expenditure as an investment rather than a cost.

I’ve dealt with organisations that use the language of “learning culture” on one hand and “training cost” on the other. Sorry, but these are incompatible, and unfortunately your staff know it. If they offer to devote their time to a worthwhile training event and all they hear is “budget approval process”, all your good work is lost.

Perhaps, then, a prudent next step is to seek an accountant’s assessment of these strong recommendations Deloitte makes…

Oh… wait…

Design Thinking (and a puzzle)

A lot has been said about design thinking recently, and a lot of it’s really, really technical. That strikes me as ironic, when design thinking is supposed to teach us to do the opposite of being geeks!

Like all fashionable terms, it’s nothing new. Like so many other exciting ideas, you will know you have got it when you stare at the screen and think something along the lines of ‘duhhh’.

It boils down to teaching in a way that starts and ends with the audience; we begin by empathising with them, we follow a process to determine how best to teach them, then we talk to them to find out how we can do better. Sure, there are things to think about and a process along the way, but much of that is best understood by looking at the tools available, such as The Learning Ecosystem Canvas, rather than wading through long (empathy-free) articles that essentially say the same thing.

Pause that thought…

I want to give you a puzzle. If you think you can solve it, great. Otherwise, feel free to skim over it.

According to Monty Python, it is part of King Arthur’s kingly duties to know off-hand the speed of a European swallow, both where it has to carry a coconut and where it is unladen. The speed has been much debated online, and a figure of 11 metres a second is now accepted.

So, our unladen swallow (let’s call her Jane) can cover 11 metres every second. If she flies a kilometre (one thousand metres), it will take her 1000/11 seconds or about 91 seconds.

Now, let’s suppose King Arthur releases Jane and starts walking towards Camelot. He is pretty fit from all those battles but has some pretty heavy armour on, so let’s say he can stride along at 5 km an hour. Camelot is a kilometre away, so Jane gets there in just 91 seconds. She turns around and flies back to meet Arthur, who is now part way towards Camelot, but as soon as she reaches him, she turns back and flies to Camelot again.

Without a pause, trusty Jane turns back and flies to Arthur to make sure he is okay, and finding that he is, that he is now much closer but still walking at 5km/h, she returns to Camelot.

Jane repeatedly flies between the ever approaching Arthur and Camelot without pausing, and flying directly between the two.

How far does Jane fly?

A spoiler will be presented below, so if you want to try it, I would encourage you to have a go at solving this before reading on. If you do, I would be interested to read your solution (before you get to the spoiler that is!)

What does this have to do with design thinking, I hear you ask?

The puzzle above is one that some of you may be able to solve instantly. Others will need to get out a pen and paper.

That’s because I used anti-design thinking™ … hey! I now have a catch-term of my own!

Let’s look at this same problem in a design thinker’s way… To do that, we must start with empathy.

Who are my learners?

  • Well, I know that readers of this blog come in many shapes and sizes.
  • Most are working in an area linked in some way to training.
  • Some will have done maths, many will not have done it for some time.
  • Some will get the Monty Python jokes, some will not.

It is safe to assume that the way I presented this puzzle will alienate people.

Fun as the Monty Python reference is for some, it is not relevant and simply adds complexity for others. Why add all that additional wording to a maths problem?

Don’t get me wrong, linking learning points to learner experience is often valuable, but not for its own sake!

With this case study, I can hardly claim to have been casting the situation as real-world practice for something learners might need to deal with. Adding humour can aid recollection, but the only humour here is for that portion of my audience who are in on the Monty Python joke. Worse, the humour for them is in reliving the experience of seeing that film – my story is not in the least funny. If anything, the humour here will cause people to think about the film, not my puzzle, and will actually be a distraction from effective learning.

So in summary, I ignored my audience persona and embellished a problem for no gain whatsoever; I showed no empathy to you, dear reader!

What do they bring to the lesson?

Our learners will always learn in a way that has context. What they have seen, heard or done in the run-up to the lesson makes a world of difference, and it’s our job to consider what that might be. Equally, it is our job to frame the material accordingly.

In this example, I created context that was guaranteed to get a lot of users onto the wrong track…

So, our unladen swallow (let’s call her Jane) can cover 11 metres every second. If she flies a thousand metres, it will take her 1000/11 seconds or about 91 seconds.

I tempted you to start thinking about a particular type of calculation… the time it takes for a bird to fly a particular distance, and relate it to the distance of one of the bird’s laps.

Not surprisingly, a number of you may have started with that first 1km flight and 91 seconds, then the time and distance back to meet Arthur, and gone on to work out more lap distances in ever more challenging calculations.

What if I had said this at the outset instead?

The bird can cover 11 metres every second. That means that if she flies for ten minutes (or 600 seconds) she will have flown 11 x 600 metres = 6600 metres.

I would have encouraged very different thinking. With this as our ‘previous experience’ we are led to think about how far the bird flies in a particular time. Immediately after that we hear about Arthur and his 1km walk at 5km an hour.

Perhaps some of you are now seeing a different approach to the problem?

More empathetic treatment 

Here is another puzzle…

A bird can fly 11 metres every second. That means that if she flies for ten minutes (or 600 seconds) she will have flown 11 x 600 metres = 6600 metres.

If her keeper releases her 1km from home and starts walking directly home at the same instant, and if the keeper walks at 5km an hour, the bird will fly back and forth between him and home for the time it takes the keeper to arrive home.

How far does the bird fly?

That’s a fair bit easier! Yet all I have done is change the wording a little to be more empathetic. I have eliminated some red herrings and chosen a frame-up sentence that will cause readers to approach the same problem differently.

In the classroom, that frame-up will have come from experiences over which we have no control, such as the work environment, the policies of the organisation, traditions, practices, culture, personalities and so on. Yet your choice of frame-up governs how your participants apply it. They bring existing lines of thinking to your classroom that might seem applicable to them but might not represent the line of reasoning you expect, so the challenge is to anticipate and work with, or adjust, those lines of thinking.

As we see from my example, one’s preparation might dramatically alter what people learn, cause simple ideas to become complex, and have people give up altogether (hands up, those who gave up on the first version of this problem…).

Nobody should design training unless they know their audience, understand their context, and have gone further to use empathy to anticipate the practised lines of thinking that might appear applicable to the challenges they pose.

Are we there yet?

Okay so I suspect more readers (but still not all) have a distance for the bird’s flight.

… but there has to be a ‘but’ right?

Earlier I said that design thinking does not only start with the audience; it must also end with it. Our job is to assess whether we have successfully taught people in the most effective way and then to iterate our design. (That’s why I would love your feedback.)

But for now, let me start by saying that the best solution to the bird problem is not to work out how long it takes to walk a kilometre at 5 km/h (12 minutes) and then ask how far a bird flies at 11 m/s over 12 minutes (720 seconds).

Sure, it gives the right answer (11 x 720 = 7920m), but there is a vastly better approach.
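For anyone who took the lap-by-lap route, a quick numerical check shows that the laps do converge to the same figure. This is a brute-force sketch of the overcomplicated approach, not the elegant one:

```python
# Simulate Jane's back-and-forth laps between Arthur and Camelot and sum
# the distances. A brute-force check, not the recommended solution.
bird_speed = 11.0          # metres per second
walk_speed = 5000 / 3600   # 5 km/h expressed in metres per second
gap = 1000.0               # Arthur starts one kilometre from Camelot
total = 0.0

while gap > 1e-6:
    # Fly from Arthur's position to Camelot while Arthur keeps walking...
    t = gap / bird_speed
    total += gap
    gap -= walk_speed * t
    # ...then fly back from Camelot to meet the approaching Arthur.
    t = gap / (bird_speed + walk_speed)
    total += bird_speed * t
    gap -= walk_speed * t

print(round(total))  # ≈ 7920 metres, matching 11 m/s × 720 s
```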

This puzzle is still framed so as to prepare you to think in a way that overcomplicates it.

So here is my puzzle to you. See if you can see a better line of reasoning for the bird problem and send me your solution. When you have found the best approach, look at my wording to decode how it leads you away from the simple solution to the less simple one. Knowing that, how should this puzzle be framed?

This is precisely what an instructional designer must do when creating learning empathetically.

Solution next week…

P.S. A really thoughtful quote on design thinkers from Don Norman:

“…the more I pondered the nature of design and reflected on my recent encounters with engineers, business people and others who blindly solved the problems they thought they were facing without question or further study, I realized that these people could benefit from a good dose of design thinking. Designers have developed a number of techniques to avoid being captured by too facile a solution. They take the original problem as a suggestion, not as a final statement, then think broadly about what the real issues underlying this might really be (for example by using the “Five Whys” approach to get at root causes). Most important of all, is that the process is iterative and expansive. Designers resist the temptation to jump immediately to a solution to the stated problem. Instead, they first spend time determining what the basic, fundamental (root) issue is that needs to be addressed. They don’t try to search for a solution until they have determined the real problem, and even then, instead of solving that problem, they stop to consider a wide range of potential solutions. Only then will they finally converge upon their proposal. This process is called “Design Thinking.”