Thursday 13 March 2014

#NoEstimates and Uncertain Cones

It has been a good month for alienating people... if that can ever be considered a good thing. The first was outing a chap as not really knowing what he was talking about on a certain social media platform. The second was a discussion about #NoEstimates.

I am not going to go over #NoEstimates again, as you can read my initial opinion in a post from last year. The thin slicing of stories really helps here, and the concepts in #NoEstimates (which are somewhat shared by ideas like Elephant Carpaccio) show a lot of promise. However, we have to remember that thin slicing in itself won't afford continual improvement. That's the missing link.

On a very recent Twitter stream, I got into an argument... sorry, passionate debate, with Woody Zuill, a long-time advocate of the #NoEstimates movement. One of the things I take issue with is that the term is misleading and carries not just zero but possibly damaging connotations: just as people misunderstood agile, they'll misunderstand this term too.

I also reiterated that estimates themselves never killed projects; it was the variance between the estimate and the actual delivery effort that killed them. That's actually rather obvious when we step out of the somewhat siloed mindset that we developers often have (and sadly criticise others for).

fig 1 - the context of the discussion before it moved here


If we're aiming to step up into lean and improve the rate of these tiny deliveries until they become a true flow (mathematically speaking, to see what happens as 'delta-t' tends to zero) and attain consistency, we need to reduce this variance around tasks (and around the flow itself), which is the reason #NoEstimates often works in the first place. The truth is you're always going to be delivering thin slices, not 'zero-effort' tasks, and as such statistics has a huge part to play, because the variance itself grows faster than linearly (over-unity) as task size increases. So, to summarise the agreed points: thin slices = good, and the name change #NoEstimates -> #BeyondEstimates = also agreed.
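To make that over-unity point concrete, here is a minimal sketch (my illustration, not from the original discussion), assuming a common model of estimation error in which actual effort is the estimate multiplied by a lognormal error factor. Under that assumption, variance grows with the square of the task size:

```python
import random
import statistics

# Minimal sketch, assuming actual effort = estimate * lognormal error factor.
# This model is an assumption for illustration; it makes variance scale with
# the square of the task size, i.e. faster than linearly ("over-unity").
random.seed(42)

def simulate_actuals(estimate_days, n=10_000, sigma=0.5):
    """Draw n simulated actual efforts for a task with the given estimate."""
    return [estimate_days * random.lognormvariate(0, sigma) for _ in range(n)]

for size in (1, 2, 5, 10):  # task sizes in days
    actuals = simulate_actuals(size)
    print(f"estimate={size:2}d  variance={statistics.variance(actuals):8.2f}")

# Doubling the slice size roughly quadruples the variance under this model,
# which is exactly why thin slices keep delivery predictable.
```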

Effing CHEEK!

*meh*

Teams don't always know the solution, or even the problem they're trying to solve, at the beginning of a project. This introduces huge uncertainty at that point. It is the point at which estimates are at their most useless and as good as guaranteed to be wrong.

As we deliver working pieces of code into production, ideally through small increments (as they naturally have a lower individual variance), we learn more and more about the environment we're working in, the platform we're working on, the problems we're trying to solve and, crucially, the bit that most teams miss: things about ourselves and our performance in that environment.

I mentioned the cone of uncertainty in that Twitter storm. The cone of uncertainty basically maps out the level of uncertainty in a project as it moves through delivery cycles. The most obvious and visible sign of this is the number of spikes reducing (or the effort required for them), as you need less and less work to understand the outstanding problems in the domain. Granted, the cone of uncertainty did exist in ye olde world, but if you follow it through the lifetime of a project, you'll very typically see the blue line, which is also matched by the funding in the diagram:

fig 2 - image from agile-in-a-nutshell (incremental funding)


Now, this is fine. You can't do much about uncertainty in itself, aside from reducing it as quickly as possible. So where I agree with Mr Zuill et al is that delivering working software often is a good thing, and slicing it up thinly is also a good idea. However, this doesn't change the level of uncertainty, or the fact that developers still need to improve to deliver the value, which, remember, as an investment includes their time and remuneration (see ROI and ROR for a financial explanation of why that's the case). It's a crucial business-value metric for the product owner, and developers in a team who are aiming to be trusted or self-sufficient would do well to remember that. Those who have worked LS or have run their own business will already be well aware of this.
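For those unfamiliar with the finance side, here is a minimal sketch (hypothetical numbers, and a deliberately simplified ROI formula) of why the team's time and remuneration belong on the investment side of the calculation:

```python
# Minimal sketch with hypothetical numbers: the investment in a slice
# includes the team's time and remuneration, not just tooling costs.

def slice_roi(value_delivered, dev_days, day_rate, other_costs=0.0):
    """Simple ROI = (gain - investment) / investment for one thin slice."""
    investment = dev_days * day_rate + other_costs
    return (value_delivered - investment) / investment

# A 3-day slice at a hypothetical 400/day blended rate, delivering 2,000 of value:
print(f"ROI: {slice_roi(2000, dev_days=3, day_rate=400):.0%}")  # -> ROI: 67%
```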

The comment I was probably most 'insulted' by was:

"...Such as delivering early and often."

At this point I wondered whether people think I am some sort of #@&!?!

I replied explaining that this goes without saying. Delivering small stories early and often allows you to descend the cone of uncertainty and plateau much faster than you otherwise would, in turn getting to the point where Little's Law kicks in and you attain predictability. That is where #NoEstimates really works, but the name totally misses this point, which is why I agree with Zuill's assessment that it should be called #BeyondEstimates.

If you don't know where you are on that curve, or in which direction you're travelling (it's possible for uncertainty to increase; ask science), then you really aren't going to make any progress. That's what I think #NoEstimates is missing.

As mentioned, the cone of uncertainty ultimately manifests in agile as an inconsistent flow and a cycle time with significant variance (significant in the statistical sense). As you deliver more of these slices, faster, you start to reach sample sizes where a normal distribution becomes a viable approximation (ideally 30+ samples), so you can draw statistically significant inferences and hence provide control limits for the team. Again, #NoEstimates works well with this, as these thin slices get you to those sample sizes faster.
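As a concrete illustration (hypothetical cycle times, not real team data), once you have 30 or so delivered slices you can derive simple control limits in the style of a process-control chart:

```python
import statistics

# Minimal sketch: 30 hypothetical cycle times (in days) for delivered slices.
cycle_times = [2.0, 1.5, 3.0, 2.5, 2.0, 1.0, 2.5, 3.5, 2.0, 1.5,
               2.0, 2.5, 3.0, 1.5, 2.0, 2.5, 1.0, 2.0, 3.0, 2.5,
               2.0, 1.5, 2.5, 2.0, 3.0, 2.0, 1.5, 2.5, 2.0, 2.5]

mean = statistics.mean(cycle_times)
sd = statistics.stdev(cycle_times)

# Classic mean +/- 3 standard deviation control limits for the team's flow.
print(f"mean={mean:.2f}d  UCL={mean + 3 * sd:.2f}d  LCL={max(0.0, mean - 3 * sd):.2f}d")
```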

Flow is a concept that features extremely heavily in queuing theory. Cycle time and throughput effectively manifest in Little's Law, which, when applicable, can be used to answer questions like "How much effort is involved in this *thing*?". However, when you start out, the variance is far too high for Little's Law to work effectively or to return any meaningful predictive information beyond a snapshot of where you are. There is just too much chaos.
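For reference, Little's Law says L = lambda x W: average work in progress equals throughput multiplied by average cycle time. A minimal sketch (hypothetical numbers) of the kind of question it answers once flow is stable:

```python
# Little's Law: L = lambda * W  (WIP = throughput * cycle time).
# Numbers below are hypothetical; the law only gives useful predictions
# once the flow has stabilised.

wip = 6            # L: stories currently in progress
throughput = 2.0   # lambda: stories finished per day

cycle_time = wip / throughput  # W = L / lambda
print(f"average cycle time: {cycle_time:.1f} days per story")

# With 20 stories left at the same stable throughput, delivery takes
# roughly 20 / 2.0 = 10 more days, with no per-story estimation needed.
```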

When your flow stabilises, the chaos has been eradicated and the uncertainty ratio tends much closer to 1; that is, you gain the ability to track how much effort, or how long, each of these thin slices takes relative to the other thin slices. It's this relative comparison that then allows the predictability to manifest (lowering the standard deviation and hence the variance). This makes Evolutionary Estimation easier, because you only have to think about breaking tasks down into tiny, thin slices, and this understanding helps that evolution. Go too small, though, and the overhead of build->test->deploy far outweighs the development cost and, in the case of ROR, monetary uncertainty increases again.
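One way to watch that ratio tend towards 1 is to track the coefficient of variation (standard deviation divided by mean) of your cycle times. A minimal sketch with hypothetical early (chaotic) and later (stabilised) samples:

```python
import statistics

# Hypothetical cycle times (days): early chaotic flow vs. stabilised flow.
early = [1.0, 6.0, 2.5, 9.0, 0.5, 4.0, 7.5, 3.0]
late = [2.0, 2.5, 1.5, 2.0, 2.5, 2.0, 1.5, 2.0]

for label, xs in (("early", early), ("late", late)):
    cv = statistics.stdev(xs) / statistics.mean(xs)  # coefficient of variation
    print(f"{label}: CV = {cv:.2f}")

# As the CV drops, relative sizing of thin slices becomes meaningful and
# the predictability that Little's Law promises starts to hold.
```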

This is something I assumed everyone who knows about agile development would know. Given his response, I am assuming that's not the case. So I am obviously in a different place from 'them' (whoever they are). However, as well as being a good talking point amongst the capable, the subject also seems to attract amateurs who just like to code, which is the same pattern I saw when I first came across XP in 2002.

History repeating itself? I really hope not! So I second Zuill's proposal to change the name to #BeyondEstimates.

Signed,


Disgruntled of the web :(
