Sunday, 22 November 2015

Revisiting the Cone

In a Twitter discussion with folk on the #NoEstimates thread again (I don't know why I go back there), Henrik Ebbeskog (@henebb) stated there is no reason to fix time. Absolutely, and as I've mentioned to others before, fixing time means the other dimensions of software delivery [are allowed to, or must] vary. This isn't a surprise to most of us in the agile space, nor is it a surprise to management scientists.

As part of the discussion, Henrik mentioned that you could fix time to 1 second, and it reminded me of a discussion I once had with Duncan McCreadie, a Senior PM at the time. It centred on agile development some 5 years ago, and in it he stated that if he wanted to monitor and realign the delivery, he'd look to bring delivery cycles down to every day or every 4 hours. He was spot on with this and I agree.

The reason is in part the Cone of Uncertainty I keep banging on about mathematically, and even get heckled about, but that doesn't change the maths, nor does it change the empirical numbers, which also back it up.

Why Delivery Rate Matters

If you deliver software continuously, each delivery gives you knowledge about your process that you didn't have before. What has happened has happened; you can't change that, but you can learn from it and treat its variance as zero (it's happened, so there is no variance, it's a certainty), and in doing so make things happen faster.

This is like I illustrate in:

http://goadingtheitgeek.blogspot.co.uk/2014/10/cone-head.html

In essence:
You've delivered a coin flip. It is now known, and with a fixed number of delivery cycles, this changes the expected outcome for all coin flips and their potential overall variance.
Each flip of a coin is a delivery. Now substitute the words 'coin flip' with 'story' and 'all coin flips' with 'major release' and reread the above (a sketch of the arithmetic follows below).
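
To put rough numbers on that, here is a minimal sketch, assuming a release made up of $N$ independent fair-coin deliveries of which $k$ have already happened:

\[\mathrm{Var}\!\left(X_{k+1} + X_{k+2} + \dots + X_N\right) = (N-k)\,p(1-p)\]

With $p = \tfrac{1}{2}$, the variance of the overall outcome shrinks from $N/4$ before you start to $(N-k)/4$ once $k$ deliveries are known, which is exactly why shorter delivery cycles narrow the cone sooner.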

As you can see from the graphs of actual project data shown at:

http://goadingtheitgeek.blogspot.co.uk/2015/04/monitoring-value.html

This applies across the board. There isn't a real delivery process in the entire world which doesn't follow this rule. The only possible exception is if the process extends to infinity, since that just pushes the variance out to perpetuity, which follows the last [sketched] graph at:

http://goadingtheitgeek.blogspot.co.uk/2015/04/lean-agile-metrics-like-it-or-not-stats.html

However, you'll note that nothing exists for infinite time. Even the Universe isn't expected to last forever, and I'd argue your project budget will run out long before then. Using faster monitoring cycles, as well as employing lean architectures, you will make the most of that budget and make it easier to either realign or find a new direction.



E

Sunday, 15 November 2015

Reading the Maximum of an Array

There's an interesting question on Quora doing the rounds at the moment. It's entitled:

"What's the fastest algorithm to find the largest number in an unsorted array?"

Traditional O(N)

Repeatedly finding the largest value of any array supplied to a function is O(N) per call: you have to look at each element once, checking whether it is a new maximum. However, can we do better?

I postulated a left-field idea in response to a comment from the original poster, who said their lecturer had asked them, "what is it that an array automatically has which may be used to improve on this?"


"How about setting the array 'Base' address (pointer to the address of the first item)  to the largest number? After all, all you want is the one dereferenced pointer value. Not to sort or search the array. So you're then effectively always reading the value at address array[0]. Therefore O(1) each time"

In principle, this is the same as other answers appearing which propose storing the index of the maximum location as you build the array.

The Building

Building an array somewhere is at least a constant-time operation. Lower bounds like this are expressed using the Omega asymptotic notation, so:

\[\Omega(1)\]


The Reading

Here is where it gets interesting. Normally, to find the maximum value of an array we iterate through each element and compare it to our current maximum. i.e. the following (in C#):

        // Linear scan: every element is examined once, so each call is O(n).
        public static int Maxi(ArrayList array)
        {
            var maxValue = int.MinValue; // start below any possible element so negative values are handled
            for (var i = 0; i < array.Count; i++)
            {
                if ((int)array[i] > maxValue)
                    maxValue = (int)array[i];
            }

            return maxValue;
        }

This is an $O(n)$ algorithm.

Efficiency Step

My proposal was essentially to create the efficiency by combining the two operations into one: build the array, but keep hold of the maximum value as you build it. You can encapsulate this in a class, which we'll call MyMaximus just for fun, and it is essentially:

    using System;
    using System.Collections;

    public class MyMaximusArray
    {
        // The maximum is tracked while the array is built, so reading it later is O(1).
        public int Maximum { get; private set; }

        private MyMaximusArray()
        {
        }

        public static MyMaximusArray CreateFrom(int arrayLength)
        {
            var result = new MyMaximusArray();
            var random = new Random();
            var array = new ArrayList(arrayLength);

            for (var i = 0; i < arrayLength; i++)
            {
                // Populate with random non-negative values, recording the largest seen so far.
                var value = random.Next(int.MaxValue);
                array.Add(value);

                if (value > result.Maximum)
                    result.Maximum = value;
            }

            return result;
        }
    }


Now, just reading the Maximum property gives you the maximum value. Alternatively, you can store the index of the maximum rather than its value in the static factory method, and you have exactly what I was suggesting.

The expectation is that you trade a slightly slower array build for a much faster read when locating the maximum item in the array. This is particularly suitable where the data doesn't change.

How Much Quicker?

So, the proof of the pudding is in the eating. Hence, I wrapped the two of these up in a little console app with a tick-timer, which read the ticks at the start and end and output the result, including for the sub-phases of building the array and reading the maximum. For those unfamiliar with ticks, one tick is 1/10,000 of a millisecond (1/10,000,000 of a second), which is sufficient resolution for most applications.

The tests were each run three times, over a logarithmic scale from 1 to 10,000,000 and an average taken across the three.

The algorithms were then modified to behave as if they would be written to once and the maximum read many times.
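
The exact harness isn't shown here, but a minimal sketch of the approach, reusing the Maxi method and MyMaximusArray class above (assumed to be in scope; the rest of the names are my own), looks something like this:

    using System;
    using System.Collections;

    public static class MaxBenchmark
    {
        public static void Main()
        {
            // Logarithmic scale of array sizes, as in the tests described above.
            foreach (var size in new[] { 1, 10, 100, 1000, 10000, 100000, 1000000, 10000000 })
            {
                // Regular approach: build the array, then scan it for the maximum.
                var random = new Random();
                var buildStart = DateTime.Now.Ticks;
                var array = new ArrayList(size);
                for (var i = 0; i < size; i++)
                    array.Add(random.Next(int.MaxValue));
                var buildTicks = DateTime.Now.Ticks - buildStart;

                var searchStart = DateTime.Now.Ticks;
                var scannedMax = Maxi(array);
                var searchTicks = DateTime.Now.Ticks - searchStart;

                // 'Search optimised' approach: build and track the maximum in one pass.
                var combinedStart = DateTime.Now.Ticks;
                var myMaximus = MyMaximusArray.CreateFrom(size);
                var combinedTicks = DateTime.Now.Ticks - combinedStart;

                var readStart = DateTime.Now.Ticks;
                var trackedMax = myMaximus.Maximum;
                var readTicks = DateTime.Now.Ticks - readStart;

                // The two maxima come from different random arrays, so they will usually differ.
                Console.WriteLine($"{size}: regular build {buildTicks} + search {searchTicks} ticks, " +
                                  $"MyMaximus build {combinedTicks} + read {readTicks} ticks " +
                                  $"(maxima {scannedMax} / {trackedMax})");
            }
        }
    }

Each size was then run three times and the tick counts averaged, as described above.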

The results were pretty conclusive:

Build Time

The expectation was that the 'search optimised' algorithm (MyMaximus) would perform worse than the regular algorithm when first building the array, and sure enough it did, though surprisingly not by as much as I thought. Both algorithms are O(n) at this stage of the process, differing only in the coefficient. The 'doubling' due to the introduction of the if comparison didn't quite occur, though I speculate this may be due to JIT optimisation on the .NET platform.




Maximum Value Search 

Here is where the MyMaximus version was expected to make the gains according to the maths. This also behaved as expected:


The blue line is following the axis because it genuinely was zero. Here are the actual data points:

Maximum item search times (ticks)

The reason it is zero is that I am running this on a 4 GHz system, with a 20x clock multiplier and 1866 MHz RAM. All in all, this means a single read instruction, including the memory access (the slowest part of this process), takes roughly 0.000000000526 seconds. Since a tick is 0.0000001 seconds, a single read will never register. Hence this result.
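
Put another way, using those two figures:

\[\frac{1 \times 10^{-7}\ \text{seconds per tick}}{5.26 \times 10^{-10}\ \text{seconds per read}} \approx 190\ \text{reads per tick}\]

so an individual read of the Maximum property simply rounds down to zero ticks.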

Total Time 

The combination of build and run fulfils the full scenario. Here, we expected MyMaximus to achieve similar asymptotic performance on the single run, but to perform substantially better on continual searches, tending towards the $\Omega(n)$ lower bound the more searches happen.

single run comparison


Total Search Performance by Size of Array

So overall, the performance of MyMaximus versus a regular search resulted in a small and, I'd argue, insignificant (chi-squared another day) win in the single-search case. What happens when the array is queried for its maximum multiple times? The expectation is that the average will start off about the same when building the array, but the queries will be much faster with MyMaximus.

To test this, I created the same setup, but this time asking for the maximum 1000 times per array size. The end results seemed to live up to that expectation:



So what exactly happens?

It's pretty straightforward. Finding the maximum of an unsorted array is $\Omega(n)$: every element has to be examined at least once. MyMaximus pays that cost once, at build time, so its total cost per search tends towards that lower bound the more searches you run, whilst the regular algorithm pays $O(n)$ on every single search and so does not.
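
As a rough sketch, for an array of $n$ elements queried for its maximum $m$ times:

\[T_{\text{regular}}(n, m) = O(n) + m \cdot O(n) = O(nm), \qquad T_{\text{MyMaximus}}(n, m) = O(n) + m \cdot O(1) = O(n + m)\]

so as $m$ grows, the amortised cost per search for MyMaximus approaches a constant, while the regular scan stays linear in $n$.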



Conclusion

The context in which an algorithm runs is as important to its overall complexity as the individual optimisation itself. This is a well-known idea in electronic engineering, where logic circuitry, such as NAND-gate representations of other circuits, is often evaluated to remove unnecessary gates and save money. Here you're doing it to save time, which in this era of cloud computing also happens to save you money, and indeed avoids losses if customers aren't put off.

In any case, the question on Quora had an academic slant. Just be aware there's more there than meets the eye :)

Tuesday, 13 October 2015

When what people say isn't what they do... (Aka How I escaped a pseudo-Cult)

**** UPDATE: 12th October 2015 ****

This was a blog post I was hoping never to publish. It was something I wrote in April frankly out of rage at the way this organisation was going and specifically on the behaviour of the central limited company in the middle. I wrote this as a cathartic exercise, almost to put my thoughts down on paper after what was frankly a bizarre experience with this central 'hub' of the organisation. The thing that has held me off publishing was that a number of folk within the organisation this is aimed at are good, honest folk. Any negative experiences I've had since are equally likely to have resulted from the middle of the network picking up stuff from one person and making 'suggestions' to another person, which the other person then assumes is their own contribution. We've all worked in corporate environments where ideas are stolen and sold on as coming from their boss or another person. Personally, I don't like those environments. That's dishonest in my mind.

Many people in the network are there in good faith, more or less know what they're doing (which is good enough for most needs), are folk I've worked with before, and communicate the ideas around lean-agility well, even though their understanding of the mechanisms of operation isn't close to mine, or to others' in the field outside this network. It is the communication that gets the fundamental message across, especially to those just starting off in the Lean-Agile space, not the expertise you have. That is a lesson I am continually learning, especially where there is a significant gap between where I am and where the organisation is.

If there is collateral damage associated with this, then I apologise, but in reality I am somewhat fed up with my contributions being used without attribution. Whilst I have always taken the stance of sharing work, that experience earlier in the year started me thinking about formally protecting my intellectual property, something I was always hoping to avoid. After the events of this weekend, I am very seriously considering this again.

Written in April 2015

**** End Update ****


For those that have known or followed me for a while, they know I have a fierce sense of justice. I mean really fierce. I don't consider myself an activist, but where I feel it's needed, I stand up and make it count. This happened with the closure of Manchester libraries and of course means I am one of the folk who is not scared to stand up and keep the lean and agile community honest even though I contribute to it. I have no issue looking inward for opportunities to improve as well as being honest to clients and colleagues about how I think things are going, both in the way I see my contribution to the engagement and vice versa. I unwittingly developed this 'perhaps too honest for his own good' stance over the years and am pretty clear with myself on not being a doormat.

I don't believe in sugaring the pill or promising things I have no intention of delivering, and as an engineer, I find organisational politics dysfunctional and abhorrent outside the need to improve employee relations and the flow of work. Anything else is a huge waste of time! If everyone is under the impression everything is perfect when it isn't, the end client not only gets less value than they expect, but also has paid out way more than they should have done! The key to a lot of this is honesty and transparency and, as I found out recently, organisations claiming this mantra aren't anywhere close to understanding what that means! Indeed, when you find yourself peeking under the hood of the organisation I've just been booted out of for whistle-blowing, and working through what the concepts actually mean or do, you more or less lose all faith in humanity.

Network Consultancy Model

Last year, I had the opportunity to join a new movement. It was a network of contract and consultancy practitioners who wanted to work together to solve client problems and contribute to a better model of consultancy than was on offer from traditional consultancies. At the time, this sounded brilliant! It sounded like the catalyst that Stoos needed to take it from a talking shop to action. However, my experience of this organisation's model in particular eventually turned sour, even though by then I'd put the proverbial blood, sweat and tears into making it work.

Being a new model, it was always risky from a contractor's perspective. Not necessarily because there was anything maliciously wrong per se; just that, as a new engagement model, it was exactly that: new! This means classical procurement models weren't likely to fit it perfectly, so a misalignment was bound to occur.

I did my due diligence and a couple of things stood out as mildly concerning. For the contractor, the proportion of income taken was a smidgen higher than recruitment agents take. OK, one percentage point in real terms, and on a higher rate that sounded fine to me. So I signed up, and all I needed to do was undergo a recommendation and an assessment. I passed those easily enough.

Initial Conversations

The chap who called me explained the model. He then sent documentation on chaordic organisations. It was a good read, one I certainly recommend, but I couldn't tell at the time whether or not it was the organisation's own material. Indeed, when I asked the chap about it he couldn't tell me much materially about what it entailed. So I read it a bit later and it transpired that it wasn't; it was a recommendation from one of the existing members (also an escapee, I've since found out :). However, as someone who has been responsible for branches of federated organisations for more than 6.5 years, I felt pretty confident I knew how that sort of model worked and, in any case, felt I could support the network in making such a model work, even though it was pretty obvious from early on that this individual, the director of the limited company representing the network's core, had never even been close to doing this sort of work in earnest before. Other things stood out as more concerning though!

Engagement Method

Firstly, there was no contract! It was called a 'memorandum of understanding' (MoU) which was apparently agreed not to be a legally binding contract between the parties. I've copied the relevant section below:



In addition, this MoU was very sales focussed. Plus, the way the payments worked at the time, you would be charged out at a full rate, the company would take a large percentage to run the network (some 17% of the charge-out rate), with a 4% locator/principal fee on top of that to pay the person who found the opportunity or managed the relationship with the client. The rest was called the booking value, and this was sold to me as 'your money'.

From the booking value, which is effectively what you were booked out at, 5% would be taken from you (interesting, since it was 'my money') and split into network contribution pots. That 5% contribution would be split:

  • Bonus fund = 17%
  • Investment Pot = 33%
  • Reserves = 33%
  • Blue-sky = 17%

HiveMind pricing model as explained by HMN Ltd  - "it's really very simple"

The only fund you were more or less guaranteed to receive back was the reserves contribution, since one of two options was available to you.

Conversations around this would usually continue with "It's really very simple". In sales-speak, this very often means it's about to get convoluted as hell! "It's really very simple" is often an attempt to make the listener feel they aren't smart enough to understand it and thus need it explained to them, which deflects or diverts the tough questions. Consultants who see themselves as smart, or who have enough of an emotional tie to that smart reputation to worry about it, almost become anxious at the prospect of not knowing something. So they genuinely don't ask.

I don't fall for that one, and these days I don't care enough about my image to avoid asking the necessary questions to tease out the details and thus pull salespeople up on the rubbish they're spouting (as many PC World employees who have had the misfortune to engage with me can attest ;). After all, in the agile community we call each other out all the time, which keeps us honest as a community and is a form of informal peer review. So "It's really very simple" is one of those phrases that should be added to that list.

The two options were:

Option 1 - Leave this 5% contribution in the network and get a proportion back in 12 months.

This gave you the opportunity to get a share of the bonus fund, which was made up from 33% of half of that 5% contribution (still with me?). The reserves, making up 66% of 2.5% of its value, you got back no matter what, since that was your money... together with all the other bits that were your money. The blue-sky fund was reserved for a committee to decide what to do with (I suspect market analysts would decide this, since the network was looking to build collateral and was very cronyistic in this regard). The investment contribution was for the network as a whole to decide on. So we would get a vote on what happened to this investment fund, and our voting would be weighted according to the amount of money contributed to this investment sub-pot.

Option 2 - Take the reserve funds out immediately and incur loss of other contributions

If you took this option, the money, your money, would be split in the same proportions as before, but this time the only proportion you get back is the reserves. The investment and blue-sky pots you lose anyway, the reserves you take, and the bonus is then split in half, with half going to the investment pot and half to the bonus fund, neither of which you now have any say in. So the bonus fund and the investment pot go up by a certain amount, which means that those contributing within the network, toeing the network line, could get more than they put in, and the investment pot is worth more than the collective voluntary contributions.

The aim was always claimed to be to "keep the network front and centre" of the expert practitioner's mind (by basically holding on to a proportion of their cash - keep working or contributing, or else). I knew I was being sold to and I didn't care for the sales pitch. For me, there were a number of folk who had previously proven themselves and I was willing to help them out via the network. After doing some significant work for the network, it became clear there was such a disconnect between what the director of this network hub thought the network was and what it actually is that, after a short while, I assumed I would lose all of my contribution from the start.

Basic Premise

The MoU implied expert practitioners would support each other and the network. I was fine with that and having worked in many contexts like this, including in the 3rd sector, I was very comfortable I knew what that meant. Indeed, my own company scales that way and I myself interviewed one other person who was very clear on federated structures. These others I was tasked with evaluating and they seemed a really good fit to me, given the parallels in what I understood the model to be, as they also had experience working in fluid organisations. So they were a perfect fit with what this consultancy network claimed to value and the way it aimed to operate. In the end, it became very obvious there was nothing further from the truth!

My First Time...

Eventually, I got my first engagement through this consultancy network. It was pretty cool! The company was Northern Gas Networks. They had been through a massive 2 to 3 year cultural transformation exercise to make them more receptive to innovative ways of working, and some of the agile coaching slotted in really well and delivered significant impact! The teams 'got it' really easily and the consultancy network's expert practitioners on board were a really great bunch to work with! The end client itself was a pleasure to partner with!

As of the year to March 2015, the network and the limited company was almost entirely funded through expert practitioner engagement, not any other form of income. So expert practitioners and contractors funded the network in its entirety.

However, even then, some things didn't seem quite right to me. The first was that the newly appointed Director of Innovation, Improvement and Information at NGN was an ex-HiveMind director. He's a positively passionate guy, which is always good, who knew the director of this consultancy network's limited company from his work at a big market analyst firm, and he released his HM interest before taking up the role, not long before I arrived on site. You could argue an element of cronyism, and there was a disaster of a situation which, if you know the chap, probably wouldn't come as a surprise ;) but given there wasn't a current conflict of interest, there wasn't an obvious problem per se.

The bottom line is he landed on site and expanded HMN engagement by the bucketload! For those of us with experience in agility, we know that you should never big-bang agility, EVER! but that's what happened :(  Eventually it took, which didn't really surprise me given the 2.5 years of work that had gone on way before the network got involved. However, there was this "Black Thursday" incident akin to the banking crash, meaning we all had to go home and some of the network never came back.

Black Thursday was caused by a misalignment with the consultancy model, as I had expected earlier. So it didn't come as any surprise to me. I had rejected an offer of a full-time contract with NGN precisely because, at my rate, I'd burn through client cash too early in their agile journey and, of course, all eggs in one newly shaped basket is high risk in any case. Unfortunately, other consultants and contractors who were not as experienced in this model had banked their livelihoods on the engagement, giving up offers of traditional, stable contract work to take on the responsibilities in this consultancy. Even recruitment agencies, which I hate with a passion, I've never seen booted out of a client and in turn have to ditch contractors, though it is theoretically possible. Even during the calls after that event, where the network took stock, I made a point of trying to support others into the organisation first. After all, due to other client work, I wasn't able to be on site, and other members of the group had familial changes which meant I felt I had to offer them the opportunity to go in first, so I stood aside (that's the kind of guy I am, but also the very reason why I am now a cynical old git, as it's been abused so many times in my career).


The Folk...

I met some good folk there. Really grounded folk who understood agile practice, and some who could bridge clients up a little closer to the way I see things (and some of the maths that goes with it). There were a number of interesting and exciting technical aspects, including working in the cloud, building CI, BDD, disrupting the classical vendor procurement models, tracking and planning and, of course, Agile-EA.

Chaordic != Hub and Spoke

It became pretty obvious early on that the network wasn't really a fluid, chaordic network per se. In order for it to work, the network has to operate as a series of independent, wholly responsible entities, so that folk can confidently engage one another. As it stood, this wasn't the case, and the weakest link centred on the hubbed payment model.

It worked by clients procuring network units. These units could be cashed in at any time for expertise in either advice or delivery, usually for short-term engagements of no more than 2 weeks. However, at NGN, they basically contracted HiveMind in full-time at, of course, higher than contract rates, exactly like other consultancy services do even if they are not charging as much. This led to a number of issues, including burning through money. Indeed, I felt myself having to say "I think you've got XYZ on site at the moment, you'd do well to speak to them, as if I came in, it would be chargeable".

To me, sometimes it's necessary to reject a client request if it's in their best interests. Give them a phrase to ask person XYZ about, since it was cheaper for them to do that than get me in. Most of the space I deal in is much more specialist than the foundations of agile enterprises. So it's better to get someone cheaper to do the basics than me do that. It probably won't please the network limited company [director] to read this, since it's possible they'll think I've cost them money. But they have appointed others in my place and if they valued what the client actually needs as opposed to what the limited company [director] needs, they'd understand that. However, as part of the network, members would ask me questions anyway, so I knew there was always a chance I'd get an email come back to me about the same thing if more sophistication was needed and I had no issue replying as it was network member to network member, which I understood by the MoU to be 'supportive activity' (i.e. 'free'). I hope it was valuable for those I've helped.

Introductions & Initial Role

However, the next bit was mildly concerning: there were suggestions that we introduce the network to our existing client base. In itself, this wasn't a problem as long as I got the impression that the 'business development' folk would manage the relationship correctly (read 'sales', since in the main that's what they were, and despite the 'partner' name in their titles, shareholding and control remained under the one director as of April 2015 - more on this later). Every consultant has a particular style and not every consultant will fit into every situation like a glove. Mine is founded on competency, honesty, transparency and pretty much zero sales spiel, in that order. It transpired over time that the network's limited company's values were almost exactly the opposite of the values they claim to espouse, and dare I say convinced others of, even if the expert practitioners inside the network were themselves aligned with the way I saw it.

However, during my engagement at NGN, it became clear that there were folk who were not on the same page as the rest of the network. They would talk about one thing and do something completely different. In addition, whilst the majority of the network was trying to take NGN in one direction, towards a DevOps type team, it became clear after going to a number of this network member's stand-ups that they were really struggling with the concept of agility. Indeed, they reverted to type and started appointing SysOps personnel when NGN needed to recruit those with DevOps experience! Part of that was the way that the Director of Innovation split the work-streams (note this last line highlights the problem - it's not a self-organising team if someone outside the team is organising it), which then pushed HMN folk who aren't confident in accelerated delivery into a siloed frame of reference, which is that one person's particular comfort zone. He entrenched after that, despite playing the 'move cards on a board' game.

This member's activity was, and is, totally tangential to the way NGN were heading elsewhere, and taking such an approach usually results in folk who are silo-thinkers being appointed, especially in the more senior roles, due to homophily, who then dig their heels in, keep thinking big-bang and never change, putting the organisation at huge risk! This sort of silo, big-bang thinking caused 36 hours of downtime at Laterooms.com when they needed to move data centres. It nearly brought the company to its knees. So having lived through that, and watched a number of other companies go through it in my time too, despite my protestations, I can see that car crash coming a mile off! If companies value their existence, they have to adapt to remain robust and competitive! This person solidly failed to do that, paying lip service to the network and its membership's experience, yet staying in line with the limited company director's stance.

This same person was harvesting help and knowledge from everyone. He was also put in a position of hiring and firing service desk folk at the client organisation and operated a model wholly inappropriate to the rest of the organisation which had been set up by the rest of the network of practice (but he said the right words, right?). Indeed, this person ultimately put me in a position of professional reputational risk which eventually manifested. Given how much time this was taking from me, I got the impression that I was not getting as much support as I was putting into the network. In addition, that person made me look foolish in front of my contacts, despite asking for my help and advice and that of others in the network. Needless to say, I am disgusted with that individual! Together with a number of other members, he is no longer part of my business connection network. I couldn't possibly recommend him to anyone, despite (or perhaps especially because of) the names on the CV!

LESSON 1: DON'T ASSESS ON WORDS, ASSESS ON ACTIONS!

Black Thursday 

The new ex-Gavtnor (names changed to protect the innocent) director at NGN managed, due to his lack of focus on governance, to put both the client and the network in a position of substantial compliance risk. That was an 'I told you so' card moment for me. Indeed, I remember that day well as I was on site. We were all pulled in, then got kicked out, and HiveMind network engagement was suspended, despite the attempts of individuals and aides inside the client to rein his spending in. OK, perhaps a bit of carelessness there, ignoring compliance rules and GRC, and of course, even if everything had been done right, there was always the risk of an ill-fitting relationship because of the new(ish) type of model the network worked under. That latter risk became an issue, and it saw a number of folk who had dedicated their entire professional base to the network, including their income, suddenly out on their feet with zero money.

LESSON 2: DON'T PUNT BRANDS OVER RISKS!


Market Analyst Cliques

This became one of the most perturbing of the lot for me. A number of the members of the network were market analysts. This in itself was no bad thing to have on board. However, most market analysts don't have any solid research experience. A number are ex-journalists who aren't statistically minded; they conduct surveys and just report what respondents say (a la Survey Monkey - my past administrators have done that), without a more solid approach to analysing it in context. So they comment on market trends or perhaps 'consult' to C-suite executives, despite never having done such work themselves, managed such work, or sometimes even known anyone who does such work.

Fortunately, this is the great opportunity this consultancy organisation has! To fill this space with people who have both the skills and the strategy in one. Imagine how much faster, more reliable and more informed organisations could be if the people who drove strategic technical direction could actually do the work (or vice versa). They can be guided by the risks and issues inherent in the enterprise as it is now and also guide the direction of the development and change. After all, a vision is just a point in time and space. To get there, you need direction, and to beat your competition, you need speed. You can't figure out direction without knowing where you are (the bit most strategists and many analysts are missing) and you can't figure out where you are going if you don't look up (the bit most contractors and many practitioners are missing). So the meld has the potential to be amazing, but equally it has the potential to be destructive and conflicting if you get the wrong sort of leadership. Alas, the organisation has the wrong sort of leadership, who prioritised the wrong relationships, especially considering where the money is coming from and what organisations are actually valuing.

The consultancy's limited company network claimed to value this. The problem is, yet again, that what the network's limited company director said and what was actually happening on the ground were completely different things. In addition, there was very clear favouritism towards the analysts, especially those the directors had worked with in the past (Trust? This bit is fine), to make decisions in areas well outside their speciality (Trust? This bit is NOT!).

Market analysts contributed very little (nearly zero percent) to the income of the limited company and the network. In addition, a small number of the analysts in the network had become 'partners', though this appears to be just a title, since the limited company retains its single directorship.

LESSON 3: KNOW WHAT YOU ARE TRYING TO SELL!
LESSON 4: STAY HONEST AND TRUE TO VALUES!
LESSON 5: ALIGN TO VALUE GENERATION, NOT PERCEIVED VALUE!


Sweat the BIG Stuff!

The limited company had a habit of increasing its take from the expert practitioner side of the relationship. In my 12 months, the charges to the expert practitioner increased twice, the second time by 50%! This was to pay for folk the limited company brought in pretty much to buy a foot in the door of other organisations. One of the values the network espoused was skills over seniority. This was probably the biggest violation [read: lie] of that value that I saw. The funds were taken from everyone else to pay for one or two people of higher seniority. If the prices had been put up to the customer, as apparently Gavtner do by (conveniently) the same amount, at apparently the same time each year, it would have shown value to the network. However, they weren't, so the limited company director proposed disadvantaging all the network's expert practitioners for the benefit of a very small few. Indeed, such a capable salesperson is he that he more or less convinced the room that gathered at an away day that this should be it.

LESSON 6: VALUES EXPECTED TO BE ADHERED TO MUST BE ADHERED TO BY YOU!

Know your competition

A month or so ago, I was in a discussion about recruiting more expert practitioners. I was shown the sales Trello board, which was a pipeline each person was taken through before they were either inducted as a member or left the process. At the time, there were a number of folk who had rejected the network in favour of contracts through traditional recruitment agencies. There are around 3,500 agencies in the UK and that number is growing all the time. Whilst I hate them with a passion, I stated to HMN during a call that HMN are competing with a traditional player in a traditional, established space which people know and have worked with. The new model will not find it easy to compete, in both recruiting experts and selling itself to companies, if it doesn't aim to address the concerns of those experts who already have a route that works for them, including the ones who respond to tenders for their work, and including the percentage of take-home pay (with reference to the events above). After all, why would they open up HMN to a process they've been through and have already won, instead of recruiting directly through agencies at market rate or, better still, accessing their own network without having to pay HMN a bean? It makes no commercial sense for those types of expert to do that.

HMN runs a series of Grades. The grade one was placed at in the network was determined by the individual at the beginning. Very often individuals, especially ex-contractors, under-position themselves, because they are used to the contracting model and know their market worth. Contrast this with consultants, who over-position themselves, by far in some cases. What this does, with the proportion taken by the network (up to 23%) permanently or temporarily, is place them in a position under what a recruitment agency would give them (which takes 15% - 18% of the engagement value). So naive contractors moving into the consultancy space would actually place themselves well below market value.

Simplicity.. or not?...

For example, if a contractor is normally on £350 a day, the client would be charged around £426.80 via a recruitment agency taking 18% of the engagement value. That's easy.

At HMN, the contractor would state "£350 a day". You'd be matched to a grade and, more often than not, fall between two grades, so they'd offer to put you at the higher level ("Woohoo! I like this network", you think), which let's say takes you to £356 a day. 17% is then added for the network costs, plus up to 6% taken by the network for the engager, the person who brought you on board (it was 4% initially; the image above was taken whilst the new 6% cost was being explained and, specifically, where the extra 2% was going to come from). That means they're booking you out at £462 a day. Note, the £462 a day is the important number here, not your £356. In any case, great!... Unless you're the client, in which case you're already paying more for the same resource. No problem! The network units you get allow you the flexibility to chip and change. That's the USP.

However, then comes the sting. Your £356 is split further into 95% and 5%. That means your immediate contract rate is 95% of £356 a day, which is £338.20, with the ability to get another £5.87 a day back in 12 months' time (plus a tiny bonus contribution). If you were on £350 a day and you took 5% of your revenue and put it in a savings account, you'd get just under 2% interest after that time. In either case you're taxed on it, but at least you get some income from the savings account.
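
As a rough sketch of that arithmetic, using the figures above (the percentages are as I understood them at the time, so treat the names and numbers as illustrative rather than the exact contractual calculation):

    using System;

    public static class HmnRateSketch
    {
        public static void Main()
        {
            const double bookingValue = 356.00; // day rate after grade matching
            const double networkCut = 0.17;     // network running costs
            const double engagerCut = 0.06;     // locator/principal fee (was 4% initially)

            // The client-facing charge-out rate: the booking value is what remains
            // after the 23% is taken off the top.
            var chargeOut = bookingValue / (1 - networkCut - engagerCut);

            // The 5% contribution carved out of 'your money' and split into pots.
            var contribution = bookingValue * 0.05;
            var immediateRate = bookingValue - contribution;
            var reserves = contribution * 0.33; // the only part you're more or less guaranteed back

            Console.WriteLine($"Charge-out rate:   £{chargeOut:F2} a day");     // ~£462
            Console.WriteLine($"Immediate rate:    £{immediateRate:F2} a day"); // £338.20
            Console.WriteLine($"Reserves returned: £{reserves:F2} a day");      // ~£5.87
        }
    }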

If at this point you're thinking "Eh? I thought I was contracted at £350 a day!", I wasn't caught out, but that's only because my maths is pretty reasonable. Indeed, I ended up writing the paper they use for the weighted voting rights based on this, and also the Google and Excel sheets used to calculate them. These are accepted as a true and accurate record. So I'm pretty sure I know how this works :) I solved this by pushing to get put at a higher grade. After all, the higher the grade, the more they get anyway.

In any case, many contractors didn't. That concerned me a lot, as they were lulled into this false position of benefit. However, they are adults and can look after themselves, whatever empathy I may feel for them.

Despite this, you had the potential to prove your increased worth. Indeed, my grade was upped twice in the time I was there, so it wasn't impossible for you to level the playing field.

I made a point of stating this, especially as the network aimed to grow and also started charging us more. However, we only have (had) one paying client, who agreed to our involvement and were, let's just say, "receptive" to us being there, for reasons mentioned earlier. They are a beta-client, so there were going to be teething problems, but there would be a favourable review at the end of it (apparently contractually negotiated - though I never saw the paperwork) and of course, the director of innovation at NGN was in a position to bring more folk in.

[Partial] Conclusion

There is just far too much to write down, so I'll continue to write more when I need to purge. There are questions that need asking about some of the awards that were presented to the end client; about the actions of the interim service desk manager (the SysOps advocate) placed by HMN, who put my reputation at risk (I'm perfectly capable of doing that myself, thank you very much); and about the under-appreciation of some of the other folk HMN placed there, who were worth way more than HMN were paying them. There is also a question about the investment pot, how it works and how much of it could ever fall under the remit of the FCA, though I don't expect that latter one to be a concern given the positioning of it, but I am no expert in financial legislation.

I learned a lot about people during this period and, truth be told, I just want to move away from people like that. They also employ many approaches I've seen bullies in the workplace use to push out introverted folk or manage out 'impassioned experts with integrity' when, ironically, those are the folk who can make the network work, who keep them honest and are honest.

Towards the end, the whole process became quite abusive. Not being the sort of person to sit back and take stuff, I gave as good as I got (and more I hope #ThugLife). Professionalism? Well, remember, a network is a series of relationships between people. Hence, they are my client as much as I am theirs. After all, I pay them a unit of cash. I don't have to choose that network. I can bid and get work and I can start up stuff (all of which the 'business development' department - read sales - wanted me to push them into). Though as you can see, the experience burnt. Some might call it 'disgruntled employee syndrome', which is a nice way to minimise or invalidate the experience someone has, or some might see this as a useful indicator to what to expect the network to be like (if it's like that when they join, I have no idea where it will go).

The introverts are almost always the ones with the real, true expertise and smarts. The cross functional, multidisciplinary folk who make stuff work. But it's clear the way it's structured isn't about delivering smart people. 'Smart' is just a word that salespeople use to court both C-suite executives and directors and other smart but gullible or "innovative" people into the fold, to try and get enough of a base of resources (which is what they consider experts to be - resources) to work with. So ironically, from a position of sales, the very thing that makes me so angry about that HMN director is also his greatest and dare I say, hyper-popular asset. He can sell anything to anyone...

...except me and those who see through him and know what they're doing.

If you don't like that sort of thing, assume every single thing that comes out of his mouth is a sales ploy. He's watching or listening for cues all the time (which all capable salespeople do). If you want to use him as a resource, that's cool, but make that clear from the start, otherwise he owns the relationship, which is the default position given the way he recruits and pays you, as it's funded through his company and the cut is taken first (as opposed to you paying him after the fact).

No doubt this will be a 'teflon shoulders' moment (nothing will stick and he and others are adept at moving the network hub away from criticism). So I don't expect any of this to make a jot of difference longer term to the poor suckers who find themselves in it. Indeed, I also expect I'll be out on a limb, as there will be next to no support, but that's not a position I care much about. Indeed, I bet some of the work I did and the material I contributed will be resold as something else. After all, if I'm not in the network, they won't attribute, even though the lack of legally binding IP specification means I still hold mine.

God speed all who remain. Some of you will get the impression things are rosy, but again, unless you've sat where I've sat, you'll have no idea that it's all a ruse!



</rant>




Monday, 28 September 2015

Genetic Algorithms and BarCamps :)

It's Monday. I didn't have a Saturday or Sunday, since I was enjoying myself at BarCamp Manchester 2015. If you're not familiar with what a BarCamp is (which I wasn't really until this weekend), it's a form of 'unconference' with no scheduled speakers. The attendees are the speakers :) The rules of Manchester BarCamp included that anyone who hadn't spoken before MUST speak.

So, in a panic, I hastily put together one presentation on the Saturday to get it out of the way. I also didn't know what I should be pitching, despite the fact I could have picked any subject. I have an interest in so many things it makes it really hard to choose sometimes. So I figured I might try two at a push. The first, on Saturday, was entitled... well, "Sh*t that moves", which I learnt led to people thinking it was about robotics (one of those "Oh yeah" moments - *bomb*). The other, on Sunday, I decided to make much more specific and less click-bait-like. I decided to focus on something I hadn't really picked up in 15 years, and that topic was Genetic Algorithms.

I tore through a quick piece of JavaScript code after I put my card up on the wall that morning, which I used to illustrate a really simple version of the concepts. I visualised it in D3.js, which gave me an excuse to try that out.

By comparison, this talk seemed to go down better *phew* and I'd like to give my thanks to those who have provided that feedback. Much appreciated. Many folk also asked if I have a blog (yes) and also asked me to post my code on JSBin/JSFiddle, so I did yesterday and am now following this up with this post to explain the concepts and go through the code in a little more detail.

Guided by Nature

Genetic Algorithms are considered by many to be a branch of artificial intelligence. Personally, I don't think that's entirely correct, as it's more in the realms of distributed multi-agent systems and the software side of BIMPA. The intelligence in such systems is contained in the ability of the agents to solve a problem or to search a space as a population. A rough comparison is that it's like 'wisdom of the crowds' but operating through genetic or memetic transfer of information. These concepts can also be applied to programming in such a way that programs built from scratch solve problems more effectively than humans have managed with hand-coded traditional algorithms. As is often the case, the complex behaviour of the population is non-linear even though an individual member's rules of reproduction are inherently simple.

Characteristics 

As hinted, a genetic algorithm, just like Monte Carlo approaches, uses multiple agents or samples to determine solutions to the problem you're trying to solve. Each agent or sample contains an encoding of the information you wish to find, akin to a genetic sequence in biological systems. Agents very often start from a random sequence. Then, through one or more cross-overs of a subsequence of that genetic code with a fit member of the cohort, plus the potential mutation of one or more characters of the encoding string, each pairing produces children (usually two per cohort-pair) who ideally share the best characteristics of each parent, perhaps with a 'genetic mutation' which either takes hold in the population or dies out over time.


a single pass through a genetic reproduction operation between two fit parents

Often, when the entire cohort completes its reproduction, that is considered a generation. Members of the preceding generation may remain or die out depending on the approach to the problem you're trying to solve.

Measuring Fitness

In order to determine how close you are to something, you have to know two things: firstly, what or where it is you want to get to; secondly, where you are now. If you don't have both of those things, you can't measure how fit a solution is for purpose. Any measure of displacement is made through a distance metric, usually comparing what I call the 'utility' of something with the point you're trying to reach; in genetic algorithms this is very often used to determine how fit a solution is for the purpose it was evolved for.

Code Walk Through:

Opening up the trivial JSBin example here, you'll see the scrap code I put together (I keep reiterating, to my shame, that it's not productionisable, especially since I break Single Responsibility in it. Oops!). All it aims to do is evolve towards an ideal string, in this case provided by the user in the text box labelled "Fitness string:". I slowed it down to make it easier to see the operation via a visualisation of all agents below it. The visualisation is shown in the rightmost JSBin frame. A population of 100 agents is shown in the grid of circles.

In order to understand what the circles mean, we need to look at the distance metric code contained in the 'fitness' function:




Here the fitness of an agent is decided using a Hamming distance between the ideal (targetFitness) word and the agent's genecode. A Hamming distance is just the number of positions at which two strings differ. For example, John and Jean have a Hamming distance of 2, Jen and Jon have a Hamming distance of 1, and Jim and Jim have a Hamming distance of zero.



Since we're trying to find the closest match, we define the shortest Hamming distance as the fittest organism. However, this could equally be the longest, biggest, fastest, fewest number of steps, richest, etc., depending on the problem you're looking to solve.
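
The original fitness function is JavaScript in the JSBin; as a rough sketch of the same metric in C# (targetFitness and genecode mirror the original names, the rest is mine):

    // Fitness as a Hamming distance: the number of positions where the agent's
    // genecode differs from the target string. Lower means fitter here.
    public static int Fitness(string genecode, string targetFitness)
    {
        var distance = 0;
        for (var i = 0; i < targetFitness.Length; i++)
        {
            // Treat a missing character as a mismatch.
            if (i >= genecode.Length || genecode[i] != targetFitness[i])
                distance++;
        }

        return distance;
    }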

If you are familiar with TDD/BDD, this may seem strangely familiar. The goal is your acceptance test criteria, the initial condition is where the agents start, and the actions (how it gets from pre- to post-conditions) you don't care about :) Kind of how you'd hope managers and architects would leave devs to fill in the gaps.

Breeding

In this code, all I am doing is taking the top 33% when sorted by fitness and breeding them. There is no reason they can't breed with other parts of the population, but this may take longer to converge on the solution.


This happens in two steps. The first is to sort the population by fitness (in order of increasing distance) and then breed each member with any other in that space. This then runs into the crossover function, which takes the genecodes of each member, pairs them and mutates them (for illustration, I took out the random occurrence of mutations and kept only the random mutation point). You can see this as part of the Baby() class, which represents each cohort member:


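Again, the original Baby() class is JavaScript in the JSBin; a rough C# sketch of that crossover-and-mutate step (the names are my own, and it assumes both parents' genecodes are the same, non-zero length) would be something like:

    // Cross over two parent genecodes at a random point and always mutate one
    // character of the child; only the crossover and mutation points are random.
    public static string Breed(string parentA, string parentB, Random random)
    {
        var crossoverPoint = random.Next(parentA.Length);

        // The child takes the head of one parent and the tail of the other.
        var child = parentA.Substring(0, crossoverPoint) + parentB.Substring(crossoverPoint);

        // Mutate a single random position to a random printable character.
        var mutated = child.ToCharArray();
        mutated[random.Next(mutated.Length)] = (char)random.Next(32, 127);

        return new string(mutated);
    }
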
When the button is clicked, the population breeds. I count the number of generations for convenience.

So the circles?

Yes, the circles. The darker the circle, the shorter the distance from the goal. A fully black circle (hex RGB #000000) hits it straight on (I circle the winning members in green when they achieve it). The lighter the circle, the further away it is from the fitness goal. Note how the cohort responds to longer strings of characters, how long it takes to get to a solution, etc. Also, run exactly the same test twice and see how the different start configurations change the number of generations required to reach the goal. Are there any you've found which don't ever seem to get closer?

...And the sumDiff comment?

Yes, that was deliberate. This was an extra function I put in which simply counts the alphabetic distance between the characters in the strings and adds them. This then becomes the distance metric.
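
A rough C# sketch of that alternative metric (again my own rendering, assuming equal-length strings) might be:

    // Sum of absolute character-code differences between the two strings.
    // Smaller totals mean closer matches, so this can replace the Hamming metric.
    public static int SumDiff(string genecode, string targetFitness)
    {
        var total = 0;
        for (var i = 0; i < targetFitness.Length; i++)
            total += Math.Abs(genecode[i] - targetFitness[i]);

        return total;
    }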

Evolutionary Strategies

Crucially, there are many ways (and combinations of ways) you can use to change how the population evolves.


  • Change the distance function/metric
  • Number of fit members (I chose 33% but this can vary)
  • Increase number of cross-over points
  • Increase the number in the cohort
  • Apply attrition/mortality rate
  • Unconstrain population growth
  • Run large populations in parallel
  • Randomise whether a mutation occurs or not
  • ....

Play around with any or all of these. It would be interesting to see what you come up with. If you have any questions, drop them in the comments and if pertinent, I'll update this post to answer them.

Summary

Genetic algorithms are not new. They've been around for over 20 years, and I was heavily into them at the turn of the millennium. I was spurred to give this talk after Rosie Campbell's talk on the "Unexpected Consequences of Teaching Computers to See" (which was hilarious, I have to say! So I nicked the concepts on one slide to illustrate problems with mutations - she also played a video from 20 years ago showing evolving virtual block creatures). There are places where genetic algorithms work well: evolving solutions or programs. However, there are some major drawbacks, including the fact that they need to be 'trained' (either supervised or unsupervised) and hence have very little productive usefulness for a long time. As I mentioned in the talk, sometimes 'champion-challenger' algorithms perform better, since that approach can be used with production data 'out of the box' and the fittest algorithm evolves as it goes.

Overall, BarCamp Manchester was an awesome experience! I had a whale of a time! Really really enjoyed it. Well worth going to one if you're looking to improve your public speaking and given not everyone's talk

Thursday, 10 September 2015

Lean Enterprise

I attended the Lean Enterprise session last night at ThoughtWorks Manchester. Speaking were Barry O'Reilly and Joanne Molesky, who co-authored the upcoming Lean Enterprise book with Jez Humble.

I happen to like Barry O'Reilly's work. As a lean practitioner, I don't think I've ever disagreed with anything he's said (at least, not to any significant degree - believe me, I try :). As I came into the venue and fumbled my way to a seat with folded pizza slices in hand, they had just started speaking (thank you, Manchester city centre, for having so many roadworks going on at the same time that I had to U-turn on myself three times to get in).

I am always interested in looking at how companies close the feedback loop. i.e. how they learn from the work they've done. Not just learn technologically, but also about their process, about themselves, their culture and how they work. I'm a great advocate of data driving retrospectives. Hence, I always find myself wanting CFDs, bug and blocker numbers and a generally greater understanding of how we're developing in a particular direction.

With this in mind, I asked a question about hypothesis-driven stories (which are a really great idea that Barry has shared with the community before). The format of the story is akin to:

" We believe that <doing this action>
  Will result in <this thing happening>
  Which benefits us <By this amount>"

What I asked was around how he gets people to come up with that measurable number. There's always a nervousness in the community when I ask about this sort of thing. I don't mean to cause it, it just happens :)

Why did I ask it?

When working in build-measure-learn environments, including lean environments, the learning process aims to become more scientific about change. If the result is positive, that's fine, since every organisation wishes for positive change. However, if it's negative, that's also fine, since you've learned your context doesn't suit that idea. Spending a little money to learn a negative result is just as valuable, since you didn't spend a fortune on it. The only real waste when learning is spending money on inconclusive results. Hence, if you design an experiment which is likely to yield an inconclusive result, you are designing to spend money generating waste.

What's inconclusive?

For those who use TDD, you might be familiar with this term. If you run unit tests, you might see the odd yellow dot when a test doesn't have an assertion (async JS programmers who use mocha may see it go green, oddly). This is a useful analogy, but not wholly correct. It isn't just that you're not measuring anything, which is bad enough, since most companies don't measure enough of the right stuff (hence most of the results of their expenditure are inconclusive in that regard); it's also declaring an improvement or failure below the necessary significance threshold.

Say what? Significance Threshold?!

The significance threshold is the point at which the probability of false results, a false positive or a false negative, is negligibly small and you can accept your hypothesis as supported for that scenario. Statisticians in frequentist environments, those which work off discrete samples (these map nicely to tickets on a board), are very familiar with this toolkit, but the vast majority of folk in IT and indeed, in business, sadly aren't. This causes some concern, since money is often spent and, worse, acted on (spending more money) when the results are inconclusive. Not only is there no uplift; sometimes it crashes royally!

Here's an example I've used previously. Imagine you have 10 coins and flip them all. Each flip is a data point. What is the probability of heads or tails? Each one is 50%, but the probability of getting a certain number of heads across the ten is binomially distributed (which looks roughly normal). This may perhaps be counter-intuitive to folk:



So you can be quite comfortable that you're going to get around 5 heads in any ten flips of ten fair coins. However, if you look at zero heads or all heads after all the flips, the outliers, these are not very likely. Indeed, once you get your first head, the probability of ending up with zero heads in 10 after the remaining 9 have been flipped is obviously zero (since you already have one).
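If you want to convince yourself of the shape, here's a minimal sketch (plain Python, my own illustration, not from the talk) that prints the exact binomial probabilities for the number of heads in 10 fair flips:

from math import comb

FLIPS, P_HEADS = 10, 0.5

# Exact binomial probability of seeing k heads in FLIPS fair flips
for k in range(FLIPS + 1):
    prob = comb(FLIPS, k) * (P_HEADS ** k) * ((1 - P_HEADS) ** (FLIPS - k))
    print(f"{k:2d} heads: {prob:.4f} {'#' * round(prob * 100)}")

Five heads comes out at roughly 24.6%, while zero or ten heads each come out at about 0.1% - which is why those outliers are the interesting ones.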

Now let's suppose we run the experiment again with the same number of coins. An A/A-test, if you like. Let's suppose we get 4 heads. Is that significantly different? Not really, no. Indeed, many good researchers would only consider a result at 0 or 10 heads in the above to be a significant change. An unfair coin, one which has either a head or a tail on both sides, will give you exactly that outlier (all tails or all heads). Anything short of this is regarded as an insignificant change: something you already have knowledge of, or that can be explained by the existing context rather than the new one the team delivers - in other words, 'noise'.

Why is this important in lean enterprises?

In business, you spend money to get value. It's as simple as that. The biggest bang for your buck, if you will. Positive and negative results, those that yield information, are worth paying for. Your team will be paid for 2 weeks to deliver knowledge. If there are 5 people in the team, each paid for two weeks at £52,000 a year (gross, including PAYE, employers' NIC, pension, holidays, benefits etc.), that is £10,000.

If the team comes out with knowledge that improves the business value by 3% and the required significance level is a 7% uplift, this value addition is insignificant. Rolling it out across the whole enterprise will cost you significant amounts of money, for a result which would likely have happened anyway if you had left the enterprise alone. At the end, you'll be down. Sadly, plenty of consultancies which have 'delivered' positive results have actually seen this. However, as Joanne rightly said in the meetup, it's often just as easy to do the opposite and miss opportunities because you didn't understand the data: the false negative.

Teams have to be aware of that level of significance, and it depends very much on sample size. You need to get a big enough sample for the 'thing' you're trying to improve. Significance levels also generally depend on the degrees of freedom (how many possible categories each sample can fall into - heads or tails) and the acceptable probability of false positives and negatives.

If you have a pot of £1 million and each experiment costs £10,000, you can run 100 experiments. You need them to be conclusive. So select your hypothetical number for acceptable value, the threshold beyond which a credible change can be deemed to have occurred, before you spend the money running the experiment.

Otherwise you won't just lose the money and gain zero knowledge (pressure to declare a result conclusive when it isn't is just another form of the sunk cost fallacy); you may end up spending more money acting on a result that isn't credible and it will most likely bomb (check out Bayesian stats for why), or you'll miss opportunities for growth, adding value or something else. As a result, I'd argue that you need to know the hypothetical sample size requirement up front (there are tools out there to do that), but also remember to stop when you reach that sample size: not before (since you'll destroy the credibility of the experiment) and not too long after (since you're getting no significant increase in extra knowledge, but you are still spending money).
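For what it's worth, here's a rough sketch of the sort of sample-size calculation I mean, using the standard normal-approximation formula for comparing two proportions (plain Python; the baseline and target conversion numbers are made up):

from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_group(p_baseline, p_target, alpha=0.05, power=0.8):
    """Approximate n per group to detect a change from p_baseline to p_target
    (two-sided test of two proportions, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_baseline + p_target) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_baseline * (1 - p_baseline) + p_target * (1 - p_target))) ** 2
    return ceil(numerator / (p_target - p_baseline) ** 2)

# Hypothetical numbers: 10% baseline conversion, hoping to detect an uplift to 12%
print(sample_size_per_group(0.10, 0.12))

That comes out at roughly 3,800+ samples per group for those made-up numbers - which is exactly the sort of figure you want to know before committing the £10,000, not after.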

Keep it lean, keep it balanced! :)




E

Saturday, 22 August 2015

...And the battle rages on?

It's 8am here in the UK and I am still simmering over a twitter storm from about 3am my time. I made the mistake of looking at my phone after going to the bathroom (I washed my hands) and noticed more on the #NoEstimates conversation.

It all centred around the heated discussion the other day on #NoEstimates, except this time it got personal, with a few members of the discussion choosing to do the tabloid-headline thing of taking one part of some of my material out of context and then basically making libellous inferences. I don't mind a heated debate at all, as long as it stays fair, but I was somewhat disgusted with the actions of a few folk, especially since they purport to work with probability and statistics, which folk who know me well know is exactly my area of specialism in this domain. If you want to read the full article on my LinkedIn blog and see how it was taken out of context, it's here, as opposed to reading the tabloid rubbish. They obviously TL;DR'd it, or were out for the vendetta as opposed to admitting where they were wrong. Too much riding on it, I guess.

Needless to say, twitter is a really poor forum for these sorts of discussions (which is pretty much the only thing @WoodyZuill and I agree on). So I figured I'd explain it here in a bit more detail, then post it back, since those folk are hell-bent on not admitting their lack of understanding and on fighting alongside people on 'their side' of the debate, and bridging those gaps needs a lot more than 140 characters. However, before we get into how these things fit within the discussion of estimates, we need to bridge some gaps and answer some criticisms.

Buzzwords

Now, I hate 'buzzwords' as much as the next guy. However, we in IT are probably more guilty of creating and using them than any other industry. Indeed, particular communities of practice create buzzwords that only those within them understand: a kind of 'private joke'. However, here's the rub: you can't get away from them. They are always necessary to succinctly communicate a concept. 'eXtreme programming', 'design patterns', 'TDD', 'refactor' are all examples of words used to communicate concepts in our domain. They mean nothing to anyone not connected to it, so those people see them as 'buzzwords'. Is that their problem or ours?

Similarly, because we in software development are often in no way connected to accountancy and finance, when we see words like 'NPV', 'IRR' or 'ROR', most of us don't get an illustration of the concepts in our minds. Hence, we see them as buzzwords. Their problem or ours?

The moment of violent agreement

So, hopefully we should now be on the same page around 'buzzwords'. Cool?

No? Do we not like hearing that?

Grow up!

Estimates (or None)

When working in an organisation, you're always going to have to justify your work's existence (sometimes even your salary/fee). It's how businesses work. Why are we doing this project? What is the business case? How much is it going to cost? What benefit am I getting out of it? The answers to all these questions are estimates. Yes, we hate them because we are often held to them. However, being held to them is a people problem, not a problem with estimates. Businesses are held to estimates all the time!

Estimate Risk

Estimates are naturally probabilistic. What is worse is that the further out you look, the more uncertain that probability becomes. To expand on a previous post, using an insignificant data volume as an example: imagine you have to deliver one small task, you estimate it to take 2 days and it takes 3 days. You have one data point, with a variation of 1 day (or 50% of its expected duration - an average absolute variation of 1 day). If you then get another task, estimate it to be the same size and it takes 1 day, then you have a range of total variation from -1 day (delivered early) to +1 day (delivered late), which is 2 days in total. You can't make a decision on one data point.

The average absolute deviation, which is the average across the two, is 2/2 = 1 day. That's just standard statistics, nothing special there. You can relate that to standard deviation really easily: the mean of 3 days and 1 day is 2 days, the sum of the squared residuals is 1 + 1 = 2, so the (sample) variance is 2 and the standard deviation, being the square root of the variance, comes out as the square root of 2.
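If you want to check my arithmetic, it's one-liner territory (plain Python, using the two actuals of 3 days and 1 day against the 2-day estimate):

from statistics import mean, stdev

estimate = 2
actuals = [3, 1]  # the two delivered tasks, in days

mean_abs_deviation = mean(abs(a - estimate) for a in actuals)
print(mean_abs_deviation)   # 1.0 day
print(stdev(actuals))       # sample standard deviation: sqrt(2), about 1.414 days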

Now, let's suppose you classically estimate ten such elements (deliberately staying away from 'story' as, to me, a story is an end-to-end, stand-alone piece, so shouldn't have a classical dependency per se) in a dependency chain on a critical path, and you don't improve your process to attain consistency. The total absolute variation then ranges from all of the tasks being delivered early to all of them being delivered late. Around the mean (2 x 10 = 20 days), this becomes a range of -10 days (1 day early for each task) to +10 days (1 day late for each task): a total absolute deviation for the whole project of 20 days on a 20-day expectation, even though the individual tasks still have an average absolute deviation of 1 day!

Let's now imagine we've actually delivered stuff, and look at the variation of the tasks remaining after the first 2 tasks on the board have been delivered with the variation stated previously. Those are now not uncertain. They have been delivered. There is no future uncertainty about those tasks and, of course, no need to further estimate them. The only variation now exists in the remaining 8 tasks on the board. Again, a 1-day average absolute variation means the 8 remaining tasks now have a total systemic (i.e. whole project) variation of -8 to +8 days (16 days). So you can see the variation reduce as you deliver stuff.

Its reduction makes that darn cone look like it does! You're now 4 days into the project and you can plot that on a graph. The first point of uncertainty was +10 and -10 on day zero. 4 days in, this has reduced to +8 and -8. You keep going across time on the x-axis as you deliver stuff, and it always finishes on a single final point. After all, once you have delivered everything, you have no more variation to contend with. Zero, zilch, nada!
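Here's a minimal sketch of that shrinking envelope, assuming the same hypothetical ten tasks with a 2-day estimate and a ±1 day absolute variation each. As tasks are delivered, the remaining best/worst range collapses towards zero, which is exactly the cone's outline:

TASKS, ESTIMATE, VARIATION = 10, 2, 1  # days

for delivered in range(TASKS + 1):
    remaining = TASKS - delivered
    elapsed_estimate = delivered * ESTIMATE   # nominal days into the project
    spread = remaining * VARIATION            # +/- days still possible
    print(f"day {elapsed_estimate:2d}: remaining range -{spread} to +{spread} days")

Day 0 gives ±10, day 4 gives ±8, and the final delivery gives ±0. That's the cone.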

example of a cone of uncertainty (src. wikipedia)

There is no getting away from this fact. It's as much of a fact as the law of gravity. Doing anything that goes against it without understanding it is like this: whilst fun and harmless (some might consider it 'pioneering'), it killed people when flight was first invented and, in any case, it spends money pointlessly, which is waste. We are in a position where we know better, so why reinvent the wheel?

What does this have to do with Estimates?

Right, here is where we get back to the meat of the matter: 'How do estimates help me deliver better software?'

In short, as far as software development alone is concerned, they don't. However, and this is the bit that irked me because people just didn't seem to want to hear it, software development by itself is useless. We use software to solve problems. Without the problem, there is no need for software (indeed, there is no need for any solution). Don't forget that organisations themselves solve client problems, and those clients potentially solve problems for other clients in turn! So software development doesn't exist in isolation. If you think it does, then you exist in the very silo mentality that you purport to want to break down. Do you not see the hypocrisy in this? I am sure many of the business readers do!

Again, grow up!

Teams should aim to use the closeness of their estimates to their actual delivery performance as an informal, internal indicator of their level of understanding of the codebase and their performance with it. No more. Businesses should not use the estimate to hold the team to account, as there is a high level of variance around any such numbers, and the bigger the system being built, especially if it has a number of components in a chain, the worse the variance will be.

Improving?

The way to improve on estimates depends entirely on the way the team itself works. Let's assume the team carries out retrospectives. This is their chance to employ practices to improve the way they work, the quality of the work and/or the pace at which they develop software. As a rule, the team can't go faster than it can go, but the quality of the code and the alignment of the team naturally affect the flow of tasks carried through to 'done' (production, live, whatever).

Blockers and bugs naturally affect the flow of work through the team. Reducing them improves the flow of work, as the contention for the team's 'story time', which is a constrained resource, is then removed. If you don't track bugs/blockers, then you are likely losing time (and money, if you're not working for free) as well as losing out on opportunity costs or potential income (probabilistically) for the business by delaying deployment into done, and you'll have no idea whether that applies or not. If it does, the business is getting hit on two fronts:
  1. Delivering value later because you are fixing bugs in earlier processes
  2. Costing more money to deliver a feature because you are using 'story time' to fix bugs in earlier releases
The combination of these two effects hits your NPV and hence directly affects your IRR, as well as your ROR and ROI (buzzword alert). However, most developers are too far away from finance to understand this, and many who purport to understand it, don't.
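To see why the delay hits NPV, here's a back-of-an-envelope sketch (plain Python; the monthly value, the 10% discount rate and the 3-month delay are all made-up numbers) comparing a feature whose value starts flowing immediately against the same feature delayed because 'story time' went on fixing earlier bugs:

MONTHLY_RATE = 0.10 / 12   # hypothetical 10% annual discount rate
MONTHLY_VALUE = 10_000     # hypothetical value the feature yields per month
HORIZON_MONTHS = 24

def npv(cash_flows):
    """Discount a list of monthly cash flows back to today."""
    return sum(cf / (1 + MONTHLY_RATE) ** month for month, cf in enumerate(cash_flows, start=1))

on_time = [MONTHLY_VALUE] * HORIZON_MONTHS
delayed = [0, 0, 0] + [MONTHLY_VALUE] * (HORIZON_MONTHS - 3)   # same horizon, 3 months of value lost

print(f"on time: {npv(on_time):,.0f}")
print(f"delayed: {npv(delayed):,.0f}")

The delayed stream is worth less today: you never recover the months lost within the horizon, and everything the business acts on downstream inherits that shortfall.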

How can methods like Kanban and ToC help?

OK, so it's no secret that the IT world, the one I inhabit, has an extremely poor understanding of flow and, indeed, does kanban 'wrong' relative to the way lean really happens in manufacturing and TPS. Kanban ultimately aims to optimise the flow of X: flow of stories, tickets, manufactured items, cars, whatever.

My scribbles on importance of understanding variance from previous posts

The process is stochastic in nature, so there is no certainty around it, but what most folk don't understand is that kanban inherently has waste in the process. Movement of items covers two of the recognised 7 types of muda (waste):

- Unnecessary transport and handling of goods
- Unnecessary motion of employees

Transportation of goods (read: stories) is the movement of an item from one stage to another - often from a development context to a QA one, or into live. There is a change of 'mental model' at that point, from one mindset, say development, to another, say QA. That is a form of context switch, just not one measured in time slices, which shouldn't be a new idea (after all, context switching happens with stack frames on CPUs when multi-threading: take out and store the stack frame for one thread, introduce the frame of another) and, just like all context switching, it is never free.

In addition, as per ToC (buzzword alert), there is inventory; indeed, the 'wait time' between stages, where an item sits ready to be pulled on demand, can be considered an implied 'inventory' stage. This introduces another cost, usually in not delivering the software into a production environment so it can start to yield knowledge or, indeed, its value.

Run a dojo and try this. Take one developer and have them code and QA one scenario. Time how long it takes to deploy that one thing into a production environment. Then take another developer and a tester and have them code one scenario and then QA that one scenario in sequence. Time how long it takes. You'll never be faster with the separate dev and QA: the cost of the handover naturally elongates the cycle time of delivering that one task. If you did 10 tasks like this in an iteration, all sequential, and the dev didn't pick up another one until the QA had signed the previous one off for live, then the total elapsed time would be just 10 x the cycle time.

In short, introducing a kanban stage has introduced waste! You'd lose time/money as a business.

What's the benefit for this cost?  What's the trade-off?

To answer @PeterKretzman's retort

Still think so now it's been explained?

The systemic trade-off is pipelining tasks to make delivery of the whole batch by the team faster. Each stage can pick up a 'ready' task from the previous stage as soon as it has finished its main involvement in its stage/phase of the story's flow through the pipeline.

Run the same experiment with 10 scenarios, but this time the dev can pick up the next task whilst the QA is testing the previous one. Suddenly this makes much more sense, and your throughput, whilst still related to cycle time, is not wholly dependent on it. So you deliver the 10 scenarios much faster than you would if everything were sequential. After all, CPUs use pipelining as a standard optimisation mechanism. This is why we do what we do, the way we do it, in software, lean manufacturing, lean construction or anything else.
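As a toy version of that dojo (the 2 hours of dev and 1 hour of QA per scenario are made-up numbers), compare the elapsed time when the dev waits for QA to sign off each scenario against the pipelined version where the dev starts the next one as soon as the previous one is handed over:

SCENARIOS, DEV_HOURS, QA_HOURS = 10, 2, 1

# Sequential: dev waits for QA to finish before starting the next scenario
sequential_elapsed = SCENARIOS * (DEV_HOURS + QA_HOURS)

# Pipelined: dev hands over and immediately starts the next one; QA runs in parallel.
# With QA faster than dev, the pipeline is dev-bound: total = all dev work + the final QA.
pipelined_elapsed = SCENARIOS * DEV_HOURS + QA_HOURS

print(sequential_elapsed, "hours sequential")   # 30 hours
print(pipelined_elapsed, "hours pipelined")     # 21 hours

With these made-up numbers the pipeline is dev-bound, so the saving is roughly the QA time on every scenario except the last.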

Can you get too small?

As I demonstrated in a talk I gave last year, the short answer is yes. If you keep adding columns past the point where they add value, i.e. where a column isn't a step in the value chain (buzzword alert), then all you are introducing is the cost of the context switch out of that stage, with no value added, which costs both time and money. Indeed, if you can run tasks in wholly parallel pipelines, that's much faster than kanban, but it requires resources to be allocated accordingly.

To see this in the previous example, introduce a post-QA stage called 'stage', where all anyone does is sign a piece of paper and then run a manual deployment. There is no value added in that process, since there is no other contention for the 'stage' step in the organisation as it stands at that moment in time. However, you're still paying a post-QA staff member to stage it.


Conclusion

I hope folk can now see where I am coming from. However, make no mistake, I am extremely disappointed in the quality of understanding around this, the hypocrisy that exists in the field and the low-down, dirty, tabloid-style tricks that some folk will stoop to just because they've never come across such a scenario, yet act as if they know it all from every organisation everywhere. The #NoEstimates movement is sadly littered with such folk, who frankly seem to show a distinct lack of understanding of anything related to the field. Many show a distinct unwillingness to engage, and inherently over-political standpoints adopted to avoid having to admit a failing, limited success or limited understanding. After all, the only people who'd want to sell #NoEstimates if it doesn't mean anything are the #NoEstimates movement. It's a real shame, as it's something I think needs to be discussed with a wider audience and, as I have said previously, it has massive potential, but it is being taken down a black hole with pointless discussion and constant justification across the board.

After all, if we can't constantly be responsibly critical of our field and our means of operation, then how can we ever improve what we do?


E

Tuesday, 18 August 2015

Story Points: Another tool, Not a Hammer!

*bang head on desk*

Nope.

*bangs head on desk again*

Nope. Still can't knock that alleged sense into me.

Today has been one of those days that started off OK, then I saw a conversation on twitter which got me all het up (not necessarily in a bad way). It seems I'm returning yet again to the issue of story points and the #NoEstimates / #BeyondEstimates movement. I've covered so many topics in this space that it's getting frankly tedious to repeat myself. If you're interested in the kettle boiling, see:




What Ignited the Blue Touch Paper

I'm not all that bothered about story points. I use them a lot, as they were intended: relative sizing of tasks. I often also find myself using T-shirt sizing or, occasionally, Size, Complexity and Wooliness. They all have their merits, depending on what the teams I work with decide they wish to use. The biggest problem is when I find some proponents of various methods, including of course Scrum, XP, RUP, Waterfall etc., trying to impose their way of thinking as the right way of thinking. We're just as guilty of this in the agile world as the 'waterfall' managers we often criticise.

Truth be told, with estimates, I don't care a jot which we use. If you believe every situation is different, then you should expect that the tools used may well be different and that's OK.

The problem we have is that many folk are critical of story points because they are used as a stick to beat developers with. If you've ever worked in business, or perhaps even run a charity, then you'd know that this is only one of many possible outcomes of why estimates are important. It's just that developers seem to take offence at the idea more than most. Also, bear in mind that the maturity of a team creates or negates the need for precise estimates. Indeed, if a DevOps team is mature enough to deliver through MVPs (lean thinking/startup) then adhering to 'hard' estimates is much less important, as the outcome of a miss is simply the value in the missed version of the software, not the value overall, since the client already has something they can work with. However, I digress...

Story Points to Reality: Parametric Equations

Many proponents standing against story points seem to fail to realise that a story point's link to the real world exists whether we like it or not. A story takes time to do. You don't have negative time and you can't carry out zero-duration tasks. It also doesn't cost zero, because the developers' wages or rates are being paid (yes, you are costing the business money - sorry, but it's true; even if you work for free and are late, you lose the company an opportunity cost). That is just as much a reality as the law of gravity. Just like gravity, your mind has to escape to outer space to escape that reality. The value a story delivers can also be quantified and analysed statistically. All of these re-quantifications have units of measure which can legitimately be attached to the parameter.

To recap, in A-level maths (senior high for those in the US, and a heck of a lot younger in many other countries), most people should have come across the concept of a parametric equation. It usually includes a variable which itself has no units, to simplify the process of reasoning about the model at hand. Consequently, it allows much more complex structures and concepts to be expressed in an easier-to-use form. In a tenuous way, it's akin to the mathematical equivalent of using terms such as SOLID, IoC, TDD or BDD, since just using these words helps communicate ideas where communication is the goal. Just like in the software world, there is often a transformation into and out of the real-world context of parametric equations (read: parameters). This is a normal, analytical approach to many problems in many more industries than software development or engineering. The only difference is that these parametric equations contain a stochastic component when working with the flow of tasks across a board. That doesn't often change the approach needed, just the skill required of the person using them (which may or may not be desirable). But guess what? So do story points.
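To illustrate that transformation into and out of the real world, here's a rough sketch (plain Python; the velocity history and backlog size are invented) which treats the story point as the unitless parameter and maps a backlog of points back into calendar terms by resampling observed iteration velocities:

import random

observed_velocities = [21, 18, 25, 19, 23, 17]   # points per iteration, hypothetical history
backlog_points = 120
TRIALS = 10_000

def iterations_needed():
    """Resample past velocities until the backlog is burned down; return the iteration count."""
    remaining, iterations = backlog_points, 0
    while remaining > 0:
        remaining -= random.choice(observed_velocities)
        iterations += 1
    return iterations

results = sorted(iterations_needed() for _ in range(TRIALS))
print("50th percentile:", results[TRIALS // 2], "iterations")
print("85th percentile:", results[int(TRIALS * 0.85)], "iterations")

The points themselves stay unitless; it's the observed velocity distribution that carries the units (iterations, and hence days and money) back in, stochastic component and all.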

Crucially, and this is the bit that gets me wound up, just because people choose to play with the numbers incorrectly, which many project managers, scrum masters and product owners do, doesn't invalidate the analytical position, nor does it invalidate the statistics around these numbers. It also winds me up because it is very often the same folk making these statements who never followed process when more formal methods of software development were used. They just want to code. Lots of great noises, but when it's time to walk the walk...

*breathe*

Story points are just a tool. A tool like any other. If you misuse a tool, who is at fault?

Now, #NoEstimates versus #BeyondEstimates. I'd love for us to drop the NoEstimates term. It's got the dev world sitting at the top of the Gartner hype curve for absolutely no reason. #BeyondEstimates is a much better term for selling it, sure, but it also communicates the intent much, much better. It's a term Woody Zuill came up with himself, and I think it perfectly positions and communicates the goal of the movement. NoEstimates isn't about not estimating. It's about always looking to improve on estimates. So '#NoEstimates' is one of the worst phrases you can use to describe it. Plus, just like any tool, I suspect its misuse will leave you in no better position than the standard evolving estimation processes, just with less understanding of where it all went wrong.

That said, overly precise estimates will leave you in worse positions than you'd otherwise be in. Get good at deciding how much effort needs to go into estimating things.

All Forecasts are Wrong

Yes, but what do you mean by 'wrong'? Wrong as in you'll never hit it? Yes. However, what's an acceptable deviation?

For example, do you get out and measure your parking space at work before renting a fork-lift truck to lift your car and spending 8 hours positioning it perfectly in the space with millimetre precision, only to have to get into it at the end of the day and go straight home? No, I suspect not. You estimate the position of the car in the space, sample the space to make sure you're within the spot and can get out, and there we go. Job done. 15 seconds.

The amount of waste is the amount of unusable extra space around your car, and even that definition depends on who you are. Statistically, most people are likely to get into that space on their first try; the second and third tries include almost everyone. However, nobody attempts to just crash their car into the spot. That is good enough. Is it 'wrong' if measured by the deviation from the very centre of the space? It certainly is! Is it good enough for the job? Yes, it certainly is.

Is this your #NoEstimates approach?
In reality, the #BeyondEstimates movement is right to ask questions about the role of estimation in software development projects and beyond (pun intended). What I don't want to see, though, is people blaming estimation methods, or worse, the maths, for the failings of people. That was agile c.2000+, when most folk adopted the wrong ideas around agility, and I can't stand to see another 10 years lost to needless bad practice.

This all means that teams have to get better at managing variation. Product owners have to get better at managing their own expectations around that variation, and both have to keep track of the scope of their deliverables and how likely they are to hit the commitments they make. Overall, the culture has to support pivots and backtracking, encourage the raising of issues, and the organisation must be able to support changes of direction. This is a much bigger problem than either 'party' can solve alone.

</rant>