Creating a Robust Approach to Gathering Feedback

Having frightened the life out of everyone with the ‘Pitfalls’ blog, I hope this one offers a few ideas that resonate and strengthen whatever feedback process you use.

Why do we ask for feedback? Reasonably, if you ask, the goal is to improve your offering or service.

In Lean, when we look to improve a process, we do it in a structured and consistent way:

  • mapping out the steps
  • identifying the wastes
  • proposing improvements

Lean theory also tells us that if you can measure something, you can improve it, because a baseline gives you something to spot deviation from.

Let’s apply this thinking to our feedback activity.

Map out the Steps

Let’s assume we want feedback on a workshop, a piece of training, etc. It is possible to identify standard steps or categories that apply to the delivery, irrespective of the subject, to drive consistency of feedback. Below are some non-exhaustive suggestions of common categories that, if measured and improved upon, drive a progressively more successful outcome.

Objectives explained

The first step in any provision is that it has a purpose, and that your audience is aware of that purpose. If you’ve done your pre-work diligently with the sponsor, and perhaps had them introduce the session, this should be rock solid. A poor score here will destabilise everything that comes after, so the pre-work is critical.

Environment

Any provision requires a location or situation for it to be provided in. Traditionally this might be a meeting room with interactive presentation screens. At the moment, our environment is more often virtual.

The fact that it’s virtual, or that we may be delivering on a client site, does not absolve the provider of responsibility for the environment.

I have a colleague who relates a story about specifying a 20-person-capacity room for a 2-day workshop. It was confirmed several times that the room would hold 20 people. Arriving laden down with brown paper, pre-prepared A1 activity templates, Sharpies, and Post-its for an interactive two days, the facilitator was presented with a 20-seat computer lab: no tables, no wall space, just computers and 20 stools. Be specific; lesson learned.

Pace

The speed of progression can radically affect engagement and understanding. It is also useful to ask whether the session was too fast or too slow, so you can make the correct diagnosis and drive improvement in the right direction.

Engagement

One of the challenges of delivering in the virtual world is recreating the engagement a traditional training setting generates through movement, expression, and the feel of the audience. This feedback speaks to the effectiveness of the variety of learning activities and the cycle of intensity, and can have implications for session length.

Material

Whether training, workshop, or service, we rarely deliver without some foundational material. Asking about it helps keep it contemporary, dynamic, and relevant. Well-designed material can be like a good referee in a football game: largely unobtrusive. Poor material can sabotage the best delivery by distracting from the message.

Delivery

Time to put the thick skin on, as this one asks about you. Serious but not too serious, witty but not too witty, strict but not too strict, friendly but not too friendly. This is as much about how well you’ve gauged the audience as it is about how confident, clear, and understandable you appear.

Understanding

A self-explanatory check on the audience’s understanding of the session content.

Objectives realised

We set out to achieve something and so it’s important to ask if we succeeded.

The above steps sequentially segment the session, usefully targeting specific feedback areas. This prevents one poor category from dragging down the whole feedback result, and it focuses improvement activity. It is often also desirable to aggregate the experience of the session into one measure. There are ways to do this using averages and the like; however, I like the Net Promoter Score (NPS).

NPS can seem a tough measure, and it is, so it’s important to understand what it tells you, as it’s not the whole solution. I’ll not detail its derivation here (Google NPS if you’re interested), but in simplest terms NPS identifies potential promoters of your session and rates the chance your session would be recommended.

Measurement

If you can measure it, you can improve it. NPS requires a specific measurement scale to operate: it measures success against a spectrum from 0 to 10, with 0 the minimum and 10 the maximum. A typical NPS question would be:

  • On a scale of 0 – 10, where 0 is not likely and 10 is extremely likely, how likely are you to recommend the course to colleagues?
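
For reference, the standard NPS calculation buckets the 0 – 10 answers: 9 – 10 are promoters, 0 – 6 are detractors, and 7 – 8 are passives; NPS is the percentage of promoters minus the percentage of detractors. Here is a minimal sketch in Python, assuming the answers have been collected into a simple list:

```python
# Standard NPS calculation from 0-10 scores:
# 9-10 count as promoters, 0-6 as detractors, 7-8 as passives.
def nps(scores):
    """Net Promoter Score (-100 to +100) for a non-empty list of 0-10 ratings."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Illustrative cohort of 10 responses to the recommendation question.
print(nps([10, 9, 9, 8, 8, 7, 10, 6, 9, 10]))  # 6 promoters, 1 detractor -> 50.0
```

Note that a cohort scoring a solid 7 – 8 across the board yields an NPS of 0, which is part of why the measure can seem tough.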

I’d suggest adopting the same scale for the other categories to drive consistency, for example:

  • On a scale of 0 – 10, where 0 is poor and 10 is excellent, how would you rate the delivery of the course?

Measuring each category as well as the whole allows for incremental, targeted improvement after each delivery, in addition to an understanding of the overall experience.
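
To make that concrete, here is a short sketch of what per-category tracking might look like, assuming each delivery’s scores are stored by category; the category names, numbers, and drift threshold are purely illustrative:

```python
# Illustrative per-category tracking: average each category's 0-10 scores
# and compare against a baseline from previous deliveries to spot deviation.
from statistics import mean

responses = {               # this delivery's scores (illustrative)
    "Objectives explained": [8, 9, 7, 8],
    "Environment":          [6, 7, 5, 6],
    "Pace":                 [8, 8, 9, 7],
    "Delivery":             [9, 8, 9, 9],
}
baseline = {                # running averages from earlier deliveries (illustrative)
    "Objectives explained": 8.0,
    "Environment":          7.5,
    "Pace":                 8.0,
    "Delivery":             8.5,
}

for category, scores in responses.items():
    avg = mean(scores)
    drift = avg - baseline[category]
    flag = "  <- investigate" if drift <= -1.0 else ""
    print(f"{category}: {avg:.1f} (baseline {baseline[category]:.1f}, drift {drift:+.1f}){flag}")
```

This is the ‘baseline to spot deviation from’ idea in practice: one flagged category after a delivery tells you exactly where to focus the next improvement.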

Identify waste / Propose Improvement

The above provides a fairly rigid process, giving feedback both holistically and against consistent key categories of the session. To the customer, however, it may feel a bit mechanical, and thus far we are not gathering context. To resolve this, ask what they think.

This is best done with two short open questions and a free text response.

  • Please state two areas of the course you found most useful, and why.
  • Please state two areas of the course you felt could be improved upon, and why.

Now we are receiving some context around the individual’s experience: the areas that worked, the areas that could be improved, and the reasons why. These two areas can be gold.

This completes our Lean approach to feedback:

  • Map the process steps
  • Measure
  • Identify Waste
  • Propose Improvement

Miscellaneous

Anonymity 

There are two schools of thought here. The most common is that anonymous surveys encourage honesty. I’m, for the most part, in the other camp, preferring accountable feedback, as I like to believe we all want to be part of the solution.

Anomaly

You may get the situation where, in a cohort of 20, the average score is 9/10 but one person scores 1/10. Don’t ignore it, but it should not get a disproportionate amount of airtime. Feeding back to the sponsor on the anomaly is provision enough.

When

Try to gather feedback there and then. Make it the final part of the session and provide the survey as a form, an online link, or a Zoom poll. This is the only reliable method of driving a high return rate and therefore more meaningful data. If you send it on afterwards by email, it gets lost in the pressures of the day job.

Feedback is an inexact science. The mechanism needs to be concise enough to engage with but detailed enough to provide understanding. The above endeavours to provide consistency, standards, and a platform for improvement, which is always the goal.