Despite the evolution of digital engineering in the 50+* years since the first user interface for a computer system was commissioned, we still tend to believe that we must get everything right the first time. Anything less is regarded as a failure.
This immediately puts the team under pressure to deliver against undefined expectations, and their success (or degree of failure) will be judged by measures too broad to provide any direction or feedback.
The team is then caught in a downward spiral, spending time, resources and sanity developing a product that they are not certain will be fit for purpose. Inevitably they burn out, morale plummets and, eventually, the project dwindles.
Traditional IT procurement and delivery strategy still appears to suffer from misdirection, if not outright waste of resources, coupled with a high chance of delivering a product that’s not fit for purpose. IT news sites provide a shameful list of failed projects: the abandoned £10bn NHS IT system, Sainsbury’s £2.6bn warehouse automation failure, and the UK’s worst public sector IT disasters, to mention a few.
To an outsider, we appear to belong to an industry whose inefficient, failure-prone ways of working are so deeply ingrained that we regard any modification of our behaviour with suspicion.
In this post, I’m going to write about how User-Centred Design, hypothesis-driven development and fast feedback cycles, the practices I champion, can enable digital projects to be delivered quickly and to respond to failure without breaking stride.
*In June 1964, NASA commissioned the computer hardware and software that would enable the Apollo astronauts to control their spacecraft.
User-Centred Design (UCD) at speed
My area of expertise is ‘User-Centred Design’, and in my role as ‘Head of Product’, my remit is to:
- Ensure that we identify, as early as possible and with a reasonable level of confidence, the organisation’s and its users’ needs
- Translate those needs into a strategy that the development team can use as a plan for production
- Monitor production to ensure that the organisation’s and its users’ needs are not lost in translation and are represented in the project’s ‘definition of done’.
The first issue I face is enabling the team to have ‘a reasonable level of confidence’. This is not a certainty, nor is it an assumption or a hunch. I have Kev Murray’s and David Draper’s presentation at NUXUK in Manchester in August 2016 to thank for introducing the way of working that forms the basis of how I can work with ‘a reasonable level of confidence’.
Working with hypotheses
Let’s suppose that I have the collated outcomes from a project’s user research: hopefully a mixture of qualitative and quantitative knowledge, formatted so that the project team can understand the user needs. I personally like to use an archetype with four quadrants: name and sketch, who they are, what they do (within the service), and why they’re motivated to do it.
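To make the quadrants concrete, here’s a minimal sketch of how such an archetype could be captured as a structured record. The Python class, its field names and the persona ‘Asha’ are all invented for illustration:

```python
from dataclasses import dataclass

# A minimal sketch of the four-quadrant archetype described above.
# The class, field names and example persona are invented for illustration.
@dataclass
class Archetype:
    name: str          # quadrant 1: name (plus, on paper, a sketch)
    who_they_are: str  # quadrant 2: who they are
    what_they_do: str  # quadrant 3: what they do within the service
    motivation: str    # quadrant 4: why they're motivated to do it

asha = Archetype(
    name="Asha, the caseworker",
    who_they_are="Handles around 40 open cases a week across two teams",
    what_they_do="Searches, updates and closes case records in the service",
    motivation="Wants to clear her backlog without re-keying data",
)
```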
With the archetypes defined, it’s reasonable to ask a small group of domain experts (usually the project team) to define the goals, or discrete tasks, that the users should (reasonably) be able to achieve within the service.
With the user goals identified, we can prioritise them against the organisation’s predefined goals (usually expressed as a vision or North Star). Now we have a prioritised backlog that we can slice into our delivery method of choice (Scrum, or a sprint and Kanban mashup, for instance); a rough sketch of one way to do the scoring follows below.
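As an illustration of that prioritisation step, here’s a rough sketch of weighted scoring. The organisational goals, user goals, weights and scores are all invented, not taken from a real project:

```python
# A rough sketch of scoring user goals against organisational goals.
# Goals, weights and scores are invented for illustration.
org_goals = {
    "reduce_support_calls": 0.5,
    "increase_self_service": 0.3,
    "cut_processing_time": 0.2,
}

# How strongly each user goal supports each organisational goal (0-5).
user_goals = {
    "check my case status online": {"reduce_support_calls": 5, "increase_self_service": 5, "cut_processing_time": 2},
    "upload evidence documents": {"reduce_support_calls": 3, "increase_self_service": 4, "cut_processing_time": 4},
    "receive SMS reminders": {"reduce_support_calls": 4, "increase_self_service": 1, "cut_processing_time": 1},
}

def weighted_score(goal: str) -> float:
    return sum(org_goals[k] * v for k, v in user_goals[goal].items())

# Higher scores sit higher on the backlog.
backlog = sorted(user_goals, key=weighted_score, reverse=True)
for goal in backlog:
    print(f"{weighted_score(goal):.1f}  {goal}")
```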
However, we must remember that we’re working at two, if not three, levels of abstraction from the users and their needs. A mistake could have been introduced at any point during this process, and if we are to be true to the agile mantra ‘people over process’ (which I believe counts both users and developers as people!), we need a mechanism to ensure the team is working in the right direction.
This is where hypothesis-based design comes in. There are many variants of formal hypothesis statements; I’ll describe the one that works for me. I was introduced to hypothesis statements by Jeff Gothelf at NUX6 in Manchester, and the version I use was refined by Ben Holiday (he added the final line, which introduces measurement: ‘We’ll know we’re right when we see this evidence’).
We believe that:
Meeting this user need
With these features
Will create this business outcome
We’ll know we’re right when we see this evidence
What a hypothesis statement gives the team is a neatly defined set of criteria describing what the thing we build should do, how it will benefit the user, and, in the closing line, how we’ll know whether we’ve built the right thing.
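As a sketch, the four-part statement can even be captured as a structured record, so each hypothesis travels with its evidence criterion. The class and its example content are invented for illustration:

```python
from dataclasses import dataclass

# A sketch of the four-part hypothesis statement as a record.
# The class and example content are invented for illustration.
@dataclass
class Hypothesis:
    user_need: str         # "Meeting this user need"
    features: str          # "With these features"
    business_outcome: str  # "Will create this business outcome"
    evidence: str          # "We'll know we're right when we see this evidence"

status_page = Hypothesis(
    user_need="Applicants need to check their case status without phoning us",
    features="A status page linked from the confirmation email",
    business_outcome="Fewer status-related calls to the support desk",
    evidence="Status-related calls drop by 20% within four weeks of release",
)
```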
Now imagine I’ve persuaded a delivery manager to let me use the team’s time and effort to produce hypothesis statements for the most pressing (highest on the backlog) user needs. Now what? I need the fastest way of validating our hypotheses, which is building a minimum viable product and testing it.
Iterative design through MVP (the real thing, not the corner-cutting cynical version that you’ve seen before)
Every time I present the agile delivery process, I show a slide with Henrik Kniberg’s skateboard-to-car diagram. Everyone nods and agrees: yes, this is how we should deliver software. Then we continue with the project, the ingrained desire to ‘just get our heads down and ship stuff’ takes over, and we end up shipping with a mindset of ‘once it’s made, it’s done, and we can move on’. This is the antithesis of what Henrik was describing.
MVPs teamed with hypothesis statements are a strategy whereby we build the quickest, cheapest, dirtiest thing possible that enables us to validate our hypothesis. If it works, we can dedicate more resources to building a more refined version with increased confidence. If it fails, we learn why, using the hypothesis statement as a grounding point, and build another MVP to repeat the experiment.
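To show how the evidence line anchors that loop, here’s a sketch of the validation check for the invented status-page hypothesis above; the baseline and observed figures are made up:

```python
# A sketch of checking an MVP against its hypothesis's evidence line.
# The baseline and observed call volumes are invented for illustration.
baseline_weekly_status_calls = 250
observed_weekly_status_calls = 190
target_reduction = 0.20  # "calls drop by 20%" from the evidence line

reduction = 1 - observed_weekly_status_calls / baseline_weekly_status_calls
if reduction >= target_reduction:
    print(f"Hypothesis supported (calls down {reduction:.0%}): refine with confidence")
else:
    print(f"Hypothesis not supported (calls down {reduction:.0%}): learn why, build the next MVP")
```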
To be able to work like this, the team must be capable of working together, pivoting quickly and communicating at all levels, with the all-round agility that is only possible in a high-performing team.
More speed!
Once a team embraces a hypothesis-driven way of working, we find that we’ve started a repeatable process capable of continuously delivering output with validation baked in. So is the team’s overall capacity the only limit? Unfortunately, this is where waste, unevenness and overburden come into play: the three major blockers described in classic Lean/Kanban practice. In my experience, thankfully backed up by Google’s State of DevOps report 2019, wherever wastage is reduced, we reduce the risk of a project’s failure. Rephrasing the report’s findings for my needs:
- If we can increase software deployment frequency so that we are releasing multiple times per day, not only are we able to quickly validate our hypotheses, we are also demonstrating our direction of travel to the project stakeholders.
- If we can reduce the lead time from a user need being identified to an experiment being live on the service, we are able to respond to emergent needs and, again, experiment efficiently. (A sketch of how both measures might be computed follows below.)
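As a sketch of what measuring those two signals could look like, here’s a toy calculation over an invented deploy log; a real pipeline would pull these records from its deployment tooling:

```python
from datetime import datetime, timedelta

# A sketch of measuring deployment frequency and lead time.
# The deploy log and its timestamps are invented for illustration.
deploys = [
    {"need_identified": "2024-05-01T09:00", "released": "2024-05-02T15:30"},
    {"need_identified": "2024-05-02T10:00", "released": "2024-05-02T17:00"},
    {"need_identified": "2024-05-03T09:15", "released": "2024-05-03T11:45"},
]

releases = [datetime.fromisoformat(d["released"]) for d in deploys]
days_spanned = (max(releases) - min(releases)).days + 1
print(f"Deployment frequency: {len(deploys) / days_spanned:.1f} releases per day")

lead_times = [
    datetime.fromisoformat(d["released"]) - datetime.fromisoformat(d["need_identified"])
    for d in deploys
]
average_lead_time = sum(lead_times, timedelta()) / len(lead_times)
print(f"Average lead time (need identified to live): {average_lead_time}")
```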
With a release methodology and pipeline like this in place, we can help a delivery team move at pace and enjoy the benefits of rapid delivery.
A winning process
I’ve described a project process with an inbuilt method for giving us confidence in what we build, and a strategy for iterating until we get there (knowing both where ‘there’ is and being able to measure whether we’ve arrived).
If the team is coding using a Test-Driven Development (TDD) or Behaviour-Driven Development (BDD) methodology, then we’re in a strong position to have the whole team designing from user research through to writing the tests and the code that form the core of the project.
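For instance, the need line of a hypothesis translates almost directly into a behaviour-level test. Here’s a hedged pytest-style sketch in which check_status is a stub standing in for the real service:

```python
# A sketch of a behaviour-level test derived from a hypothesis statement.
# check_status is a stub invented for illustration, not a real service call.
def check_status(case_id: str) -> str:
    """Stand-in for the real 'case status' lookup in the service."""
    return "In review"

def test_applicant_can_check_case_status_without_phoning():
    # Hypothesis: meeting the need to self-serve case status
    # (with a status page) will reduce status-related calls.
    status = check_status(case_id="CASE-123")
    assert status in {"Received", "In review", "Decision made"}
```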
This way of using hypotheses as the vehicle for describing needs and measurable success criteria is also valuable when working on services that have no direct users. In this scenario we can treat the business logic, API or interface as a service user with needs and measurements for success (response rate, and so on).
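In that spirit, the ‘user’ can be an API consumer and the evidence a response-time threshold. Here’s a sketch in the same style; fetch_case is an invented stub and the 200ms threshold an assumed success criterion:

```python
import time

# A sketch of treating an API consumer as the service user.
# fetch_case is a stub invented for illustration; the 200ms threshold
# plays the part of the hypothesis's measurable evidence line.
def fetch_case(case_id: str) -> dict:
    """Stand-in for a call such as GET /cases/{case_id}."""
    return {"id": case_id, "status": "In review"}

def test_consumer_receives_case_within_200ms():
    start = time.perf_counter()
    body = fetch_case("CASE-123")
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert body["status"]      # the consumer's need is met
    assert elapsed_ms < 200    # the measurable success criterion
```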
Delivering at speed
However, we’ll raise our stakeholders’ stress levels if we don’t deliver at speed. If we’re seen to be ‘experimenting’ (which we are) without constantly moving towards the ultimate goal of meeting the organisation’s and its users’ goals, we can erode the trust placed in us. Thankfully, a fast pace is achievable if a few rules are in place:
- The team’s methodology is transparent and has buy-in from the project stakeholders.
- The organisation’s goals and a clear direction of travel towards them are agreed by the stakeholders.
- Stakeholders are obliged to give direction and feedback as quickly as possible.
- The team is happy. This is a single KPI (Key Performance Indicator) that embodies all the qualities that define a high-performance team. If your team has friction within it, it needs fixing now.
- Managers are servants to the creators. As a manager, your role is to make sure that anything that would block or delay the work of a researcher, developer, experience designer, etc. is managed, mitigated or at least deflected from your team.
- Teams are cross-functional. If your team is missing an interaction designer, or database engineer, then your pace will be lower than it could be.
I’ve documented, admittedly at a very high level, how we approach uncertainty in digital projects: enabling a team to rationalise the unknown, manage it, and maintain the agility to respond when things don’t work. Coupling this with a methodology that integrates into the way code is written and tested, and with a clear measure for determining when we’ve been successful, means that we’re ‘preparing for success’ and have a feedback loop that identifies failure as quickly as possible. Finally, with a rapid release/test cycle, we can reduce wastage caused by delay and unevenness in our production cycle.
Though I’ve written about a complete end-to-end process, please don’t be under the impression that this is a one-size-fits-all solution. All organisations are different, as are all projects. Made Tech will design your project in a way that is completely bespoke and respectful of your organisation’s culture.