Most of us have seen MoSCoW used when assigning a priority to a business requirement. In case you haven't, MoSCoW is an acronym for:
- Must – Without it, there is no system.
- Should – Without it, using the system will be a struggle, though a workaround may exist.
- Could – Good to have, but it could easily be left out of the software.
- Won't – A decision is taken that the feature is not needed.
It sounds great in principle - very simple - but it isn't very granular, and with a long list of requirements it doesn't allow for real prioritisation.
The backlog is effectively an ordered list of priorities that allows the features with the most value to be developed first. But how can we refine this list beyond the four very blunt statuses MoSCoW gives us? How can a Product Owner tell a development team what to work on when there are 100 Musts but capacity to build only 20?
I've seen this many times: a business owner is confronted with a long list of equally rated requirements and becomes paralysed. What is often not considered is the business value and cost of each requirement. When paralysis sets in, a useful technique is to find and apply a relevant metric that quantifies value and cost.
This can then be converted into a prioritisation matrix to give a visualisation of the relative importance of each item.
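As a minimal sketch of that matrix idea - the requirement names and the 1-5 value/cost scores below are invented for illustration - each item can be dropped into one of four quadrants according to where its scores fall:

```python
# A hypothetical value/cost prioritisation matrix. Items and their
# 1-5 scores are invented examples, not real requirements.

requirements = [
    ("User login",    {"value": 5, "cost": 2}),
    ("Legacy report", {"value": 2, "cost": 4}),
    ("Dark mode",     {"value": 2, "cost": 1}),
    ("Data export",   {"value": 4, "cost": 5}),
]

def quadrant(scores, midpoint=3):
    """Place an item in one of the four quadrants of the matrix."""
    high_value = scores["value"] >= midpoint
    low_cost = scores["cost"] < midpoint
    if high_value and low_cost:
        return "quick win"      # high value, cheap: build these first
    if high_value:
        return "major project"  # valuable but expensive: plan carefully
    if low_cost:
        return "fill-in"        # cheap, do when capacity allows
    return "reconsider"         # low value, high cost

for name, scores in requirements:
    print(f"{name}: {quadrant(scores)}")
```

The quadrant labels are one common way of reading such a matrix; the point is simply that two scores per item give a visual ordering that a single MoSCoW label cannot.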
Can we use another method?
Prioritising with Fibonacci sequencing
Some teams will prioritise by abstract measures such as t-shirt size (XS, S, M, L, XL, XXL). If it works then great.
My preference is to rate business priority using a Fibonacci scale. This is more discerning than the sledgehammer MoSCoW approach. A Fibonacci scale is a series of numbers in which each (after the first few) is the sum of the previous two (0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, etc.).
As the numbers increase, they grow further and further apart. That means it's easier to decide between a score of 8 and a score of 13 than between an 8 and a 9, which is why I prefer it over regular increments such as a 1-10 scale.
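The widening-gap property is easy to see by generating the scale from the text and printing the relative jump between neighbouring scores - a sketch only, using the sequence exactly as listed above:

```python
# Build the scale from the text: after the first few values, each
# number is the sum of the previous two, so the gaps keep widening.
scale = [0, 1, 2, 3]
while scale[-1] < 144:
    scale.append(scale[-1] + scale[-2])

print(scale)  # [0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]

# Relative jump between adjacent scores: 8 -> 13 is roughly a 60% leap,
# whereas 8 -> 9 on a linear 1-10 scale is only about 12%.
for lo, hi in zip(scale[2:], scale[3:]):
    print(f"{lo} -> {hi}: +{(hi - lo) / lo:.0%}")
```

Every step on this scale is a jump of 50% or more, which is what forces a clear-cut choice between neighbouring scores.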
There will always be items that people understand well and can clearly and definitively say are the most important. There will also be another slice of items which are well known and can be said to be very important, but not quite as important as the previous ones. Eventually we come to those which have some importance but whose rating isn't so clear. On a 1-10 scale it would be hard to decide whether such an item is a 6 or a 7; on a Fibonacci scale it is easier to simply park it at 13 or 21.
By the time we have delivered the items rated 1, 2 and 3, we can re-rate what is left; those 13s and 21s may now be better understood, allowing them to be upgraded or downgraded. The item rated 55 may suddenly become more important and change to a 2. Alternatively, it may turn out that it isn't needed any more, or that it is catered for in a different way.
There is less value in trying to break down and fully understand items that don't jump out as vital, the way a 1 or a 2 does. That's not to say there is no value in understanding everything, but would you rather be creating and delivering value, or building a deep, upfront understanding of items which aren't in high demand and may even be removed from the backlog later on?
No matter which scale is used, it is easier to rate things relatively than absolutely, and at any one point we will always have a backlog of items whose relative priority is known.
Value and cost
What I haven't mentioned is value and cost. This is for the team to decide. Sometimes a team may decide that the rating itself represents the business value, and that will be sufficient.
At other times additional factors are needed to work out a priority rating; we may wish to take complexity into account. Doing so may change the priority if three simple items can be delivered instead of one complicated one.
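One way of folding complexity into the decision - a sketch under the assumption that each item carries a Fibonacci value score and a Fibonacci complexity score, with the names and numbers below invented - is to rank by value per unit of complexity:

```python
# Hypothetical backlog items with invented Fibonacci scores for
# business value and complexity.
items = [
    ("Single sign-on", {"value": 21, "complexity": 13}),
    ("CSV import",     {"value": 8,  "complexity": 2}),
    ("Search filters", {"value": 5,  "complexity": 1}),
    ("Audit log",      {"value": 8,  "complexity": 3}),
]

# Rank by value per unit of complexity: this is how three simple
# items can end up ahead of one complicated, high-value item.
ranked = sorted(items,
                key=lambda item: item[1]["value"] / item[1]["complexity"],
                reverse=True)

for name, s in ranked:
    print(f"{name}: ratio {s['value'] / s['complexity']:.1f}")
```

In this invented example the high-value single sign-on item sinks to the bottom because its complexity outweighs its value relative to the three smaller items.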
Complexity and task sizing with Fibonacci
The same principles as described for prioritisation can be applied in many areas. For sizing or planning, estimates have traditionally been given in terms of time, for example x hours of work, and a completion date set.
For agile teams, however, complexity expressed as story points may be used instead. People are notoriously bad at sizing against a date because they fail to take into account all those 'little' things that eat into the day - meetings, back-to-back meetings, phone calls, emails - each carries a burden, so it's easier to rate the size or complexity of a task instead. The outcome is a rated list of all items which can then be compared relatively.
What does this mean for deadlines? Sprints have deadlines, but as part of a bigger plan, deadlines and agile don't sit easily hand in hand. Deadlines need to be looked at in a different way and will be the topic of a separate blog.
Planning poker is a tool for estimating whatever it is we want to measure. It is designed to avoid the biases and conformity that creep in when a group is asked to decide something together.
The team briefly discusses each item, each member mentally forms an estimate, and then everyone simultaneously shows a card with the number that reflects their estimate.
If everyone agrees, we move to the next item. If there is a material difference that concerns the group, a discussion is held and cards are shown again. If material differences still exist, the item can be put to one side to be reviewed later.
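The round described above can be sketched in a few lines. Here a "material difference" is hypothetically defined as any spread wider than one step on the scale - in practice the group decides what concerns it:

```python
# A minimal sketch of one planning-poker round. The one-step spread
# rule is an invented threshold for illustration; real teams judge
# "material difference" for themselves.

SCALE = [1, 2, 3, 5, 8, 13, 21]

def consensus(votes, max_spread_steps=1):
    """Return an agreed estimate, or None if discussion is needed."""
    steps = sorted(SCALE.index(v) for v in votes)
    if steps[-1] - steps[0] <= max_spread_steps:
        return SCALE[steps[-1]]  # take the higher of the close estimates
    return None  # material difference: discuss and show cards again

print(consensus([5, 5, 8, 5]))   # close enough: agree on 8
print(consensus([2, 13, 5, 3]))  # wide spread: None, so discuss
```

Taking the higher of two adjacent estimates is one common convention; rounds repeat until the spread closes or the item is parked for later review.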
All members of the scrum team need to be there, and others should be invited according to what is being estimated: if it's sizing the complexity of a build task, programmers should be present. Be aware, though, that bringing the wrong people can result in the team being pressured into giving bad estimates.