Thursday, 11 December 2014

What's wrong with a Little predictability?

I was asked recently about Little's law. For the uninitiated, it is a fundamental but elegant result in queuing theory. It's akin to the simplicity of Einstein's E = mc², in that it reduces a whole heap of complexity into a few simple variables. It is now finally being applied to software Kanban, having existed long before the field of software engineering itself.

In software, it's pretty simple: it relates the average number of cards in play (between the backlog and done) to the average cycle time and the arrival rate. If your arrival rate is the same as your service rate, which in Scrum you would expect it to be if you're delivering all your cards within the Sprint, you end up with a pretty direct link between WIP, throughput and cycle time.
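
To make that concrete, Little's law says average WIP = arrival rate × average cycle time (L = λW). Here's a minimal sketch of that arithmetic, with made-up numbers purely for illustration:

```python
# Little's law: average WIP = arrival rate * average cycle time (L = lambda * W).
# Illustrative numbers only, not real team data.

arrival_rate = 5.0        # cards started per week (assumed equal to the service rate)
avg_cycle_time = 2.0      # weeks from 'in progress' to 'done', on average

avg_wip = arrival_rate * avg_cycle_time
print(f"Average cards in play: {avg_wip:.1f}")   # -> 10.0

# Rearranged, the same relationship estimates cycle time from what's on the board:
observed_wip = 12
expected_cycle_time = observed_wip / arrival_rate
print(f"Expected cycle time at that WIP: {expected_cycle_time:.1f} weeks")
```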

So what's the problem?


The issue is (again) that people miss a crucial detail. It's the same way KISS differs from Occam's razor, and the same way folk abuse the agile manifesto. Remember the items on the right? Now do you remember the last statement that references them? ("Whilst there is value in the items on the right, we value the items on the left more.")

With Little's law, the crucial detail is that the team has to attain predictability. That predictability means the team consistently delivering the same number of points every sprint and/or having a consistent cycle time. Little's law is stated in terms of long-run averages and has no stochastic component, so it needs stability (effectively zero variance) to hold. The problem you have, especially at the beginning of each 'project' [*grumble* *humbug* need #NoProjects], is that you do not have that stability. Teams can under- or over-perform, so there isn't stability. Worse, a team that is improving and delivering 'more', which is always desirable, has the disadvantage that they're not naturally stable! They are delivering more, so naturally the average keeps changing.
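
A quick way to see that instability is to watch the running average of throughput. Here's a rough sketch with invented numbers: the flat team's average settles down, the improving team's average never does, so any 'long-run average' you feed into Little's law is already out of date.

```python
# Running average of sprint throughput (points per sprint); invented numbers.
stable_team    = [20, 18, 22, 19, 21, 20, 19, 21]
improving_team = [12, 14, 15, 18, 20, 23, 26, 30]

def running_average(points):
    """Cumulative mean after each sprint."""
    return [sum(points[: i + 1]) / (i + 1) for i in range(len(points))]

print("stable:   ", [round(x, 1) for x in running_average(stable_team)])
print("improving:", [round(x, 1) for x in running_average(improving_team)])
# The stable team's average converges on ~20; the improving team's average
# keeps climbing, so there is no single 'average' for Little's law to use.
```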

But isn't improving a good thing?

Totally! It's the best thing you can do! However, if you are hoping to use Little's law to project/forecast in an environment which is improving, you can't do it, for exactly this reason. At least, you can't do it without introducing a stochastic component, or comparing against the desired burn-up. Believe it or not, improving is instability, and it naturally increases the variance of the delivery as a whole. That's your trade-off! Continual improvement means you cannot gain the stability needed to use Little's law!
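
If you do want to forecast while improving, one way to introduce that stochastic component is a Monte Carlo simulation that resamples recent throughput rather than leaning on a single long-run average. A rough sketch of what that might look like, with invented numbers throughout:

```python
import random

# Monte Carlo forecast: resample recent sprint throughputs to estimate how many
# sprints are needed to finish the remaining scope. Invented numbers only.
recent_throughputs = [18, 20, 23, 26, 30]   # points per sprint, most recent sprints
remaining_scope = 200                        # points left in the backlog

def sprints_to_finish(throughputs, scope, trials=10_000):
    results = []
    for _ in range(trials):
        done, sprints = 0, 0
        while done < scope:
            done += random.choice(throughputs)   # resample a past sprint
            sprints += 1
        results.append(sprints)
    return sorted(results)

outcomes = sprints_to_finish(recent_throughputs, remaining_scope)
p50 = outcomes[len(outcomes) // 2]
p85 = outcomes[int(len(outcomes) * 0.85)]
print(f"50% chance of finishing within {p50} sprints, 85% within {p85}")
```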

*shock horror*

Are you sure?

Yep, very!

Consider the following graph. It shows data for a team who do not improve their delivery and are running late from the start. If you project forwards, the variance is very narrow: they're going to be very late, but you can be pretty confident about just how late. If you plot the projection of the end of the 'project' through the average burn-up as you accumulate ACTUAL data, you'll see where it's likely to land:


[Figure: burn-up chart for a team who do not improve]
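
As a rough sketch of that kind of projection (invented numbers again): take the average burn-up rate from the actuals to date and extrapolate it forward to see which sprint the fixed scope is likely to land in.

```python
# Naive projection of the 'end of the project' through the average burn-up.
# Invented numbers: a team that is late but not improving.
actual_burnup = [0, 8, 17, 25, 33, 42]   # cumulative points at the end of each sprint
total_scope = 200                         # fixed scope

sprints_so_far = len(actual_burnup) - 1
avg_rate = actual_burnup[-1] / sprints_so_far        # points per sprint to date
remaining = total_scope - actual_burnup[-1]
projected_finish = sprints_so_far + remaining / avg_rate

print(f"Average burn-up so far: {avg_rate:.1f} points/sprint")
print(f"Projected finish: around sprint {projected_finish:.1f}")
```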


Little's Law could be used here to project where they're going to end up, and if you look at the range of possible outcomes in the time allotted, or at the extra time needed to complete the scope (remembering the golden triangle), you'll see it is much narrower than for the team who improve, below!

[Figure: burn-up chart for a team practising continuous improvement]

Here Little's Law is pretty much no use! Indeed, in most teams you can't collect a big enough data set between one improvement and the next to measure the average and deviation reliably.
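
To put a rough number on that: if a team only sits at one level of performance for three or four sprints before the next improvement kicks in, the uncertainty on the average for that stretch is enormous. A back-of-the-envelope sketch, again with invented numbers:

```python
import statistics

# Three sprints at one level of performance before the next improvement arrives.
# Invented numbers; the point is the width of the uncertainty, not the values.
regime = [18, 23, 20]   # points per sprint

mean = statistics.mean(regime)
sd = statistics.stdev(regime)                 # sample standard deviation
std_error = sd / len(regime) ** 0.5           # standard error of the mean

print(f"mean = {mean:.1f}, sd = {sd:.1f}, std error of mean = {std_error:.1f}")
# With n = 3, a rough 95% range on the 'true' average is mean +/- ~4 * std_error
# (t-distribution with 2 degrees of freedom), i.e. roughly 20 +/- 6 points:
# far too wide to plug into Little's law with a straight face.
```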

Conclusion: What to do?

At the end of the day, you're just trying to give yourself the best chance. It's not intuitive to applaud greater variance, since that normally means greater risk, but what matters is where the distribution sits relative to a threshold: the required burn-up rate, the 'thick red line', which is fixed at the outset if the scope is fixed. It's the points you deliver above that line that count. If delivery swings wildly with the majority of the mass below the red line, you're scr3wed. If it's above, you're rocking! This is why I prefer to get a team's average on or above the red line first and then reduce the variance, since that gives you greater certainty about the burn-up rate.
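
To see why the average matters more than the variance when you're below the line, here's a rough sketch that assumes, purely for illustration, that sprint velocity is roughly normally distributed:

```python
from statistics import NormalDist

# Probability that a sprint delivers at least the required burn-up rate
# ('the thick red line'), assuming velocity ~ Normal(mean, sd). Illustrative only.
required_rate = 20   # points per sprint needed to hit the date

scenarios = {
    "below the line, low variance":  (18, 1.0),
    "below the line, high variance": (18, 5.0),
    "above the line, high variance": (23, 5.0),
    "above the line, low variance":  (23, 1.0),
}

for name, (mean, sd) in scenarios.items():
    p = 1 - NormalDist(mean, sd).cdf(required_rate)
    print(f"{name:32s} P(sprint >= red line) = {p:.2f}")
# Moving the mean above the line does most of the work; only then does
# squeezing the variance buy you meaningfully more certainty.
```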

So, in short, there are a million and one tools out there to help folk with software development and predictability. Teams have to be careful they don't pick a tool and misapply it, and it's these limits that often tell us whether or not it is appropriate to use. The situations where a tool doesn't work may outnumber the ones where it does. We're not all hammering nails, after all.