Thursday 1 December 2016

Strong test communities; establishing castles vs growing gardens

This article was originally published in the October 2016 edition of Testing Trapeze
At Trade Me we’ve built a strong internal testing community. I have been reflecting recently on what makes it so great, and how it’s different from what I’ve seen and heard about the test communities in organisations elsewhere.

Often strong test communities within an organisation are established, reliable, robust and fortified groups housing test experts and best practices. The testers who belong to these communities take on the role of knights of quality within their project teams, carrying banners emblazoned with non-functional requirements, and shields depicting the values of their central castle. These castles work well in many organisations for a number of reasons.
The tester has a prominent structure to reference their testing against. The strength of that solid community means people can trust and understand that the test practice is refined and established.
Castles usually come with rulers and round tables who use their knowledge and influence to build the laws and values for their community to live by. In these castles, Test Managers and Leads use their knowledge of testing to enhance the community and its practices.
There is a clear escalation structure in these communities. If the knights encounter a problem they know who to talk to. Likewise, if someone has an issue with a knight's conduct the escalation path is clear.

At Trade Me we don’t have a castle.
But, our internal test community is really strong. It’s been openly envied by other disciplines within the company, and over the years I’ve been asked to help and advise other internal communities who want to build themselves to be like us.
One thing I stress when giving advice is that we didn't set out to build a castle.
In fact, the community isn't finished being built. It's constantly being developed by those who work within it and anyone who values what it produces. It's like a communal test garden.

The state of the garden is not solely due to my work or influence. Its shape is the result of the people who are, or have been, part of the community over time.
We’ve implemented suggestions for things like training sessions, test environment configurations, new tools, and our hiring process which came from people within the test community.
Castles on the other hand tend to be governed and directed by central figures and processes. The ideas and decisions tend to come down from the leadership teams and there is little opportunity to suggest or propose alternatives.
While we have a Test Manager as a central figurehead for testing, we welcome ideas and input from testers at all levels to help shape our testing practice and community.
If you are open to people making suggestions, you are more likely to discover new gardening techniques or fertilisers that you haven't applied in the past.
Using our community's experience, observations and product knowledge to shape our practices and guidelines means we are more likely to have buy-in to how things are done.
Having the community being influenced and nurtured by the people who benefit from it means it is dynamic, adjusting quickly to suit the needs and wants of its members.

We also don’t wall in the garden. People from outside our community are able to drop by and see what we’re doing and how. Developers, business analysts and other members of the business are welcome to attend our training sessions and meetings. We openly and actively share how and why our community does what it does, believing that transparency builds understanding.  Like learning from the people who tend our garden, we learn from our neighbours. We observe how their gardens grow, and are open to their suggestions on how to keep the weeds at bay or getting better returns for our investments. For example we’ve incorporated a number of improvements to our tools due to suggestions from developers.

We encourage our testers to better themselves and others, and provide frameworks for this to take place. Gardeners are always looking for ways to maximise their harvest, or grow the best flower and the best way to do this is by learning from people with experience or knowledge. In our test community this betterment can include things like peer led training sessions on new technologies or test techniques, or pairing with strong domain experts or SMEs.
I believe you can always learn something from anyone, and a test community is no different.  Anyone at any level of experience can teach you something new. Within our community anyone can run a training session. I’ve yet to sit through a training session where I haven’t learnt something new.
Recognising and learning from expertise and knowledge in your gardeners means your garden is stronger as a whole.
While the primary goal is upskilling and continuous improvement, it also results in strong relationships between testers. These relationships inspire the building of strong internal networks, and the community does a lot to support itself from within. People come to gather, and share. They leave more nourished than when they arrived, and are better equipped to take on their next task.

Like a castle community, our community does have central values and practice guides, but in our garden they're not carved in stone. We keep ours lightweight, flexible, and non-prescriptive, which leads to our gardeners employing fit-for-purpose techniques after judging the soil and weather conditions they encounter.
We learnt that prescriptive documentation can be dangerous when we went through a rapid growth period. I was chatting to a newish tester about his test documentation, specifically whether his very thorough documentation was needed. His response was: "But you said we have to do it like this?"
And going by the wiki page he proceeded to show me, I had. Months prior I'd written a guide for our test documentation after a short training session I ran. It got referenced in our 'new tester manual', which this new tester diligently went through on his first day. When I'd written the wiki page I'd left out a caveat giving testers permission to be pragmatic and use their own judgement. Now, if that tester had been at the original training session he would have had the opportunity to question me and get guidance on the effect of the missing caveat.

Likewise, we value and strongly encourage face-to-face communication between testers whenever possible, over written emails or documentation.
Besides the time benefits, face-to-face conversation gives people the opportunity to question and clarify, rather than the meaning or urgency being lost in the black and white of an email. A discussion in front of the roses while the insect infestation is occurring gives a faster and more focussed response than waiting for a reply to an email, and it leads to knowledge sharing and support within the community. Getting a situation in front of others increases the chances of finding out who, or what, might be needed to diagnose the species and the correct treatment, rather than relying on the gardener to have previously memorised how to handle every bug they may encounter.

Of my four and a half years as Test Manager at Trade Me, the internal test community is one of the things I'm most proud of being involved in. It has built a common sense of purpose, investment, ownership and autonomy without having to enforce rigorous structure or formality.
To me, its strength comes from what it produces and how it nourishes its members and neighbours.
A strong internal testing community has the ability to help to produce better quality products. But a strong community grown and nurtured from within has the ability to create engaged and enthusiastic testers who help to keep that community growing and nurtured in the future.

Footnote: Since writing this article I've moved on from Trade Me. I'm happy to report the garden, along with the community of gardeners who tend to and benefit from it, continues to thrive.
Testing Trapeze is a bi-monthly testing magazine featuring testers from Australia and New Zealand. If you haven't already, I highly recommend subscribing to Testing Trapeze; I've continually found it to provide inspirational and insightful articles from very talented people.

Monday 12 September 2016

Introducing testers to basic programming conventions

I recently ran an 'introduction to programming conventions' workshop with our test analysts.
It was really well received, so I thought it would be worth sharing it in case anyone would like to reuse or copy it.
You can find it in my recent blog post 'Robozzle'

Here's how it came about...

One of our test practice's central themes this year is 'Grow the technical tester'.
We're aiming to build out our test analysts' technical capability around reading, writing and understanding code, as well as understanding the systems and infrastructure our products run on.

I strongly believe there are some advantages to having a stronger technical base when you're working as a test analyst.
Whether it gives you a partial Rosetta stone to bridge the gap in technical terminology, or the confidence to question implementation when talking to the people writing or building your product - a technical understanding of your product, used wisely, will enhance your test approach.

As part of our internal training workshops and sessions, I wanted a lightweight training exercise to kick-start this technical growth.
Our test analysts have varied backgrounds in their exposure to programming; some have very limited or no exposure at all.
I needed something friendly and in a language anyone could pick up.

There are some great courses out there which teach programming. We use PluralSight in house at Trade Me, and I've been through courses on Code Academy. I also came across Code Avengers, which is an awesome resource aimed at schools to teach programming - I learnt a lot from some of their courses, so it's not just for kids!
These are great, but it was hard to find something that could be run in the group learning and workshop format I was after for our internal training session.

While researching courses one of our team leads told me about an iPhone game he was using to 'learn coding' called Robozzle.
Robozzle is a programming game where you give a robot a set of instructions to solve a puzzle. It can be pretty addictive...
There are simple tutorials, and then a large number of community-created puzzles of varying degrees of difficulty. To solve the puzzles you have to assemble instructions for your robot to collect stars in a maze, utilising things like loops, subroutines, and conditional logic.
I spent a couple of commutes playing the game, with good satisfaction when I cracked a puzzle, as well as good frustration when I spent upwards of 30 mins trying to solve one.

Robozzle ticked the boxes for what I wanted from a workshop:
  • show that programming is a set of instructions
  • introduce basic programming conventions
  • be friendly and not scary to people who've never written code
  • be suitable for a group workshop 
So, I threw a draft together.
I picked a handful of puzzles which showed the basic concepts within Robozzle: loops, subroutines, and conditional logic.
I added in an exercise on pseudo code to illustrate that the solutions were a set of instructions, and that programming is writing instructions for computers to execute.
After I had this draft fleshed out, I socialised it with one of our team leads who hasn't got a strong programming background. He thought it would be a fun hour for people to go through, even if no learning took place.

So, we ran it with groups of 10-14 people in our training suite (a room full of PCs), in one-hour sessions.

What I saw and learnt in the sessions

  • People got pseudo code way faster than I expected them to; it wasn't that big a leap for people to get their heads around the concept. It proved to be really good for debugging solutions when people got stuck, and it reaffirmed that programming is just giving something a set of instructions.
  • Different people had different solutions to the puzzles. Most of the puzzles have more than one way of solving them, but at least two groups came up with solutions that stumped the facilitator (me).
  • The people with programming experience weren't the first to complete the solutions. I was worried that people with programming experience would be bored, or see it as a waste of time. But, at the end of the hour all groups in all sessions were still working.
  • People were keen to take the exercise back to their desks. I was walking to get a cup of tea this morning and spotted someone working on harder puzzles than were in the workshop. It was cool to see people still giving it a go five days after doing the session.
  • Some people resorted to writing out the pseudo code on paper for each puzzle, and stepping away from the computer.
  • People really liked the puzzle / game aspect of the workshop. They switched into competition mode, trying to complete the games before others did. It was all in good fun, and added a nice energy to the room.
Overall, I'm really happy with how the exercise went.
The engagement was great, and people definitely walked away keen to get into more programming training.

Robozzle

Welcome to a short exercise designed to teach you some basic programming concepts.
The point of the exercise is to show you how program code can be seen as a set of instructions, and to show you some conventions like loops, conditional logic, and subroutines.
To do this we're going to use 'Robozzle'. It's a free web-based programming game where you give a robot a set of instructions.
All up the exercise should take about 1-1.5 hrs.
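Robozzle builds its programs out of icons rather than text, but the same three conventions look roughly like this in a textual language such as Python. This is purely an illustrative sketch - the robot commands below are made-up stand-ins, not part of Robozzle:

# Made-up stand-ins for Robozzle's icon instructions, so this
# sketch actually runs and prints what the robot would do.
def move_forward():
    print("move forward")

def turn_left():
    print("turn left")

def tile_colour():
    return "red"  # pretend the robot is standing on a red tile

# A loop: repeat instructions a fixed number of times.
for _ in range(4):
    move_forward()

# Conditional logic: only act when a condition holds.
if tile_colour() == "red":
    turn_left()

# A subroutine: name a set of instructions so they can be reused.
def climb_step():
    move_forward()
    turn_left()

climb_step()
climb_step()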

Here's what to do...

Get set up

Grab a PC or phone (Android or iOS apps are available. Search 'Robozzle')
Pair up with someone.
Work through the puzzles in order, and utilise pair programming (one person uses the mouse and keyboard, the other talks - and then swap).
If you get stuck, feel free to ask for help!


Part 1: Introduction to Robozzle

  1. Tutorials
    1. Tutorial 1
    2. Tutorial 2
    3. Tutorial 3
    4. Tutorial 4
  2. Basic loop
    1. Stairs
      (make sure you keep this open once you solve it, you'll need it on the next page)

Part 2: Introduction to Pseudo Code

Pseudo code is a notation resembling a simplified programming language, used in program design.
We're going to write some basic pseudo code to illustrate the instructions we're giving the robot.
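To give a feel for the workflow before you start (this is an illustrative sketch, not the solution to 'Stairs' or any other puzzle), here's pseudo code written first as comments, then translated into executable instructions - again using hypothetical Python stand-ins for the robot's commands:

# Hypothetical robot commands standing in for Robozzle's icons.
def move_forward():
    print("move forward")

def turn_right():
    print("turn right")

# Pseudo code first:
#   repeat 3 times:
#       move forward
#       turn right
#
# ...then translated into instructions a computer can execute:
for _ in range(3):
    move_forward()
    turn_right()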

Log on to Trello
Visit our 'Robozzle Pseudo Code (master)' Trello board.

This has been prepopulated with some pseudo-code statements.
In the right-hand menu in Trello, choose '... More', then 'Copy Board' - this will make a copy of the board on your Trello account.
  1. Using the statements, translate the solution you had for 'Stairs' (above) into a pseudo code stack.
  2. Move on to the Iteration Puzzle
    1. Using the board from above, build your solution with pseudo code FIRST.
    2. Then, translate it into Robozzle instructions.
Question: What pseudo code instructions are missing?

Extension: if you're feeling up to it
Visit our 'Robozzle Pseudo Code (master)' Trello board.
This has been prepopulated with some more code-centric pseudo-code statements.
In the right-hand menu in Trello, choose '... More', then 'Copy Board' - this will make a copy of the board on your Trello account.


Part 3: More puzzles

Work through these puzzles.
If you get stuck, look at the Robozzle instructions like they're pseudo code. Walk through what you're telling the robot to do, and see where it might be going wrong (there's a small tracing sketch after the puzzle list).
  1. Nested subroutines
    1. Simple spiral
    2. Function calls
  2. Conditionals
    1. "First puzzle"
    2. "Very easy"
    3. "Don't fail"
  3. Conditional subroutine
    1. "Right on red"

Conclusion

You should now have an understanding of program code as a set of instructions which is executed, and of how things like loops, subroutines, and conditionals can be used to enhance instructions, increase efficiency, and expand logic.
Robozzle is a free-to-use game that you can play with in your spare time.
As well as the web app versions, there are native apps for Android and iOS.

Thursday 12 May 2016

Lost in metaphorical translation

I like to use metaphors and similes as a friendly, relatable way to communicate ideas.
I recently learnt it's worth being careful with how you use these devices, as it's easy to mix your metaphors, lose the information, and, worse, lose your audience.

An example of this occurred after Michael Bolton gave a talk at a We Test meetup in Wellington on Metrics and Measurements and Numbers oh my! Michael gave an engaging talk (as always) with good stories of how metrics can unintentionally obscure rather than reveal information, and explored the importance of reporting relevant information in an appropriate format.

In the discussion that followed Michael's talk, the group discussed ideas for alternatives to metrics and graphs. One suggestion was to utilise second order measurement to quickly convey information to people about the state or health of a project. A thumbs up or a thumbs down - is it good? Or not good?

An idea was put forward (I think by Michael) that we could ask people to give an indication as to whether something was “too hot”, “too cold” or “just right”.
Too hot - it’s going to burn us; there’s something dangerous here. Too cold - we’re not satisfied; we need to pay more attention to this. Just right - things are good; we’re satisfied with how much attention we've given it, and we don’t think we’ll get burned.
A 'Goldilocks reading'.


After the talk I spent hours thinking about this metaphor and how it would be a really simple concept to introduce in our teams.

I first encountered the idea of second order measurement through Michael Bolton's 1997 article Three types of measurement and two ways to use them in StickyMinds, where he talks about Gerald M. (Jerry) Weinberg's classifications of measurement.
The article is on our recommended reading list for test analysts here at Trade Me. It's an article that I've personally referred to and forwarded a number of times when working in and with iterative and agile teams. Usually this has been in response to higher-ups wanting to see test metrics to determine if a project will ship on time - but also to people within teams who give extensively technical and detailed reports when the audience doesn't have (or doesn't want to have) the level of technical understanding to 'correctly' interpret them.

The idea of ‘Goldilocks readings’ as an informing process sits well with me because I strongly believe in trusting the people who are working on a project, empowering them to use their knowledge, observations and gut to inform stakeholders and start discussions. Obviously, you have to support this with escalation and ‘help’ paths to make sure they’re not out of their depth, but both projects and teams benefit from informed people.
People who are informed make better decisions, so informing people early and often should lead to even better decision making.
Too often you hear about projects missing deadlines and the team saying “we were never going to hit that date”, to the surprise of some other stakeholders. Assuming those people weren't being arrogant or ignoring available information, why were they surprised? Where was the information they needed? Was it too late in the project to change things? Was it buried in metrics?

My theory is that a 'Goldilocks reading', taken early and often from the team on anything from quality criteria, to deadlines, to team collaboration, would make sure that people can be as informed as they need to be, and would make the discussions we have about mitigation more meaningful and timely.
Fewer surprises when the bears get home.
The reading is coming directly from the people building, testing and validating the project.
Hearing something is ‘too hot’ (might burn us) would start conversations about implementation, expectations, and hopefully mitigation plans. Doing readings throughout a project would allow you to track if a project is getting better or worse.

I wanted to test the theory out.

I'm the product owner for an agile team that implements, supports and maintains our automation frameworks. They set goals each sprint, but I don't always get a chance to see how they're tracking towards those goals until the sprint concludes.
So, on Monday I went to the team’s stand up and pitched the idea:
“I want us to try something out, so that I can get information on how we’re tracking against our goals. But - I don’t want to give you any reporting overhead.
I want you to try doing Goldilocks readings - each stand-up you give a ‘too hot’, ‘too cold’ or ‘just right’ reading on the goal. ‘Too hot’ means it’s unlikely we’ll hit the goal, ‘just right’ means we’ll achieve it, and ‘too cold’ means we haven’t investigated enough to make a judgement.”


Unfortunately, while the team nodded their willingness to try out my idea, their blank looks told me something was wrong. After a decent pause, one of the team members asked, "What is a Goldilocks?"

The team is made up of three outstanding test engineers - two from India and one from China.
I thought I was super clever introducing this measurement concept with an allusion to the 'famous' judgements in the story of Goldilocks. The metaphor of heat and satisfaction with a product (porridge) was meant to be relatable and friendly - but it meant nothing to the team, as they had no affinity with the story of Goldilocks and the three bears. In their cultures the story wasn't prevalent like it was in my white New Zealander upbringing.

Unfortunately, when I explained the fairy tale it spawned more conversations about 'breaking and entering' than about the protagonist's need for porridge at an ideal temperature.

But - I learnt a valuable lesson. Wrapping a simple concept into a metaphor damaged the delivery of the concept because the audience didn't see the information I was trying to convey. It got lost in the messaging.

We’re still going to try ‘Goldilocks readings’ in the team soon, and I’ll let you know how it goes.
But I think we might settle on something more universally relatable like ‘Temperature readings’.
Going forward, I'm going to make an effort to make sure my information isn't being obscured, in both my reporting on test activities and when I'm communicating new ideas.