Thursday 12 May 2016

Lost in metaphorical translation

I like to use metaphors and similes as a friendly, relatable way to communicate ideas.
I recently learnt it's worth being careful with how you use these devices, as it's easy to mix your metaphors, lose the information, and, worse, lose your audience.

An example of this occurred after Michael Bolton gave a talk at a We Test meetup in Wellington called “Metrics and Measurements and Numbers, oh my!”. Michael gave an engaging talk (as always) with good stories of how metrics can unintentionally obscure rather than reveal information, and explored the importance of reporting relevant information in an appropriate format.

In the discussion that followed, the group explored ideas for alternatives to metrics and graphs. One suggestion was to use second-order measurement to quickly convey information to people about the state or health of a project. A thumbs up or a thumbs down - is it good? Or not good?

An idea was put forward (I think by Michael) that we could ask people to indicate whether something was “too hot”, “too cold” or “just right”.
Too hot - it’s going to burn us; there’s something dangerous here. Too cold - we’re not satisfied; we need to pay more attention to this. Just right - things are good; we’re satisfied with how much attention we've given it, and we don’t think we’ll get burned.
A 'Goldilocks reading'.


After the talk I spent hours thinking about this metaphor and what a simple concept it would be to introduce to our teams.

I first encountered the idea of second-order measurement through Michael Bolton’s 1997 StickyMinds article “Three types of measurement and two ways to use them”, where he talks about Gerald M. (Jerry) Weinberg’s classifications of measurement.
The article is on our recommended reading list for test analysts here at Trade Me. It’s an article that I've personally referred to and forwarded a number of times when working in and with iterative and agile teams. Usually, this has been in response to higher-ups wanting to see test metrics to determine whether a project will ship on time - but also to people within teams who give extensively technical and detailed reports when the audience doesn't have (or doesn't want to have) the level of technical understanding to ‘correctly’ interpret them.

The idea of ‘Goldilocks readings’ as an informing process sits well with me because I strongly believe in trusting the people who are working on a project, empowering them to use their knowledge, observations and gut to inform stakeholders and start discussions. Obviously, you have to support this with escalation and ‘help’ paths to make sure they’re not out of their depth, but both projects and teams benefit from informed people.
People who are informed make better decisions, so informing people early and often should lead to even better decision making.
Too often you hear about projects missing deadlines and the team saying “we were never going to hit that date”, to the surprise of some other stakeholders. Assuming those stakeholders weren't being arrogant or ignoring available information, why were they surprised? Where was the information they needed? Was it too late in the project to change things? Was it buried in metrics?

My theory is that a ‘Goldilocks reading’ taken early and often from the team - on anything from quality criteria, to deadlines, to team collaboration - would make sure that people are as informed as they need to be, and that the discussions we have about mitigation are more meaningful and timely.
Fewer surprises when the bears get home.
The reading comes directly from the people building, testing and validating the project.
Hearing that something is ‘too hot’ (might burn us) would start conversations about implementation, expectations and, hopefully, mitigation plans. Taking readings throughout a project would also let you track whether things are getting better or worse.

I wanted to test the theory out.

I'm the product owner for an agile team that implements, supports and maintains our automation frameworks. They set goals each sprint, but I don’t always get a chance to see how they’re tracking towards those goals until the sprint concludes.
So, on Monday I went to the team’s stand up and pitched the idea:
“I want us to try something out, so that I can get information on how we’re tracking against our goals. But - I don’t want to give you any reporting overhead.
I want you to try doing Goldilocks readings - at each stand-up you give a ‘too hot’, ‘too cold’ or ‘just right’ reading on the goal. ‘Too hot’ means it’s unlikely we’ll hit the goal, ‘just right’ means we’ll achieve it, and ‘too cold’ means we haven’t investigated enough to make a judgement.”


Unfortunately, while nodding their willingness to try out my idea, their blank looks told me something was wrong. After a decent pause, one of the team members asked "What is a Goldilocks?"

The team is made up of three outstanding test engineers - two from India and one from China.
I thought I was super clever introducing this measurement concept with an allusion to the ‘famous’ judgements in the story of Goldilocks. The metaphor of heat and satisfaction with a product (porridge) was meant to be relatable and friendly - but it meant nothing to the team, because they had no familiarity with the story of Goldilocks and the three bears. In their cultures, the story wasn't prevalent the way it was in my white New Zealander upbringing.

Unfortunately, when I explained the fairy tale it spawned more conversations about ‘breaking and entering’ than about the protagonist’s need for porridge at an ideal temperature.

But - I learnt a valuable lesson. Wrapping a simple concept in a metaphor damaged the delivery of that concept, because the audience didn't see the information I was trying to convey. It got lost in the messaging.

We’re still going to try ‘Goldilocks readings’ in the team soon, and I’ll let you know how it goes.
But I think we might settle on something more universally relatable, like ‘temperature readings’.
Going forward, I'm going to make sure my information isn't being obscured, both in my reporting on test activities and when I'm communicating new ideas.