Friday 8 August 2014

#NoEstimates is not #NeverEstimate - my take

This is not a true story, I'm sorry. But I really think this is an approach a company could use. And there are probably companies out there doing this daily. I just want to give an example to illustrate my view.

If you follow the #NoEstimates discussion you might come across the statement that "#NoEstimates != #NeverEstimate" ("!=" is an operator in many programming languages that means "not equal to"). But what does that mean? On Twitter (where most of the discussion happens) it tends to turn into a "defend your position" kind of debate. What I mean is: it feels like it becomes a "never estimate" vs "always estimate" discussion - even though we say #NoEstimates doesn't actually mean "never estimate" (and I doubt the opponents actually think "always estimate" is something great either).

This is unfortunate in my opinion. It muddles the real topic.

So, to give some "credibility" to my points - because I believe in the ideas behind #NoEstimates - I'm going to give an example of what "it doesn't mean #NeverEstimate" means, in my opinion.

These thoughts come from reading the book "The Lean Startup" by Eric Ries (http://theleanstartup.com/). Those of you who have read the book will recognise the idea I'm presenting, and those of you who haven't: read it! It is also heavily influenced by the "Impact Mapping" idea (http://impactmapping.org/) - read that book too, if you haven't.

An Example

Let's say I have a customer that wants to "update the design of their web site". There's nothing wrong with that, right? Now they want to know what it will cost and when it can be expected to be finished. They might throw in some additional changes while they're at it, or not. It doesn't really matter here.
Now what happens? Well, first they have to contact some web design company that has to look at this and give an estimated cost for what a "design update" might be. When that's done, someone has to implement the design into the real web site ("real code"). That's probably another company - and they need to give a cost estimate for the update as well.

Nothing strange going on here. The project might even (unusually enough) be on time and on budget in this example. All is fine. All the companies involved might even celebrate the "Web Site Design Update" project. A true success! And it is. ...or is it?
I mean, in a project sense it is. We didn't break the budget and we released on time. That *is* great. But still... all that time and money spent - was it worth it?

I guess no one really knows.

Because what is the hypothesis? What assumptions have we made about our web site users' behavior? What is it we want to learn about them and their behavior that will give us a competitive advantage? In what way will they help us earn/save money? And - very important - how can we measure whether our hypothesis is correct or not?

Well, we do it by making experiments. Trying/testing our hypothesis. Thinking small. Focusing on how we can gain (and measure) that knowledge as cheaply and quickly as possible - by cutting all waste, that is: removing everything that doesn't contribute to the learning we seek. Doing the minimal effort needed to get to that knowledge.

How can we apply this here?

Well, what's the hypothesis about updating the web site design? That depends on what the company does. But let's pretend that the main purpose of the web site is to steer the visitors to a certain place or to a specific action, like "becoming an applicant for X" (there can be many different purposes of a website that give a company real value - that is, $$$ in the end).

I guess the hypothesis then becomes: "If we update the design, more visitors will apply for X".
Now the question becomes: "How can we prove this assumption is correct?" "What's the minimum thing we can do to test it, and how can we measure that what we do is actually what's affecting those metrics?"

On a web site, let's do an "A/B test". That is, let half of (or maybe a specific set of) our users see and use the new design while the others see the old one. Try it on just the start page or a specific flow. And while we have this focus on "experiment", maybe we should look at improving the user experience as well? An idea we might not have come up with at all if we had just focused on "updating the design"..?
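To make this a bit more concrete, here's a minimal sketch (in TypeScript) of how the split itself could be done. Everything in it - the variant names, the 50/50 split, the logging - is invented for illustration, not taken from any particular A/B testing framework:

    // Deterministically assign each visitor to a variant based on their id,
    // so the same visitor always sees the same design across visits.
    type Variant = "old-design" | "new-design";

    function hashVisitorId(id: string): number {
      // FNV-1a hash - simple and good enough for bucketing (not for security).
      let h = 2166136261;
      for (let i = 0; i < id.length; i++) {
        h ^= id.charCodeAt(i);
        h = Math.imul(h, 16777619);
      }
      return h >>> 0; // force unsigned 32-bit
    }

    function assignVariant(visitorId: string): Variant {
      // 50/50 split: even hashes see the new design, odd hashes the old one.
      return hashVisitorId(visitorId) % 2 === 0 ? "new-design" : "old-design";
    }

    // When a visitor applies for X, record it together with the variant
    // they saw, so we can compare conversion rates between the designs later.
    function trackApplication(visitorId: string): void {
      console.log(`applied: visitor=${visitorId} variant=${assignVariant(visitorId)}`);
    }

The point here is only how small the experiment itself is: a hash, a flag and a log line - not a redesign of the entire site.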

The outcome of this might be that the new design had no, or little, impact at all on the behavior of our customers/visitors.
Then an update of the *entire* site might not be what we should spend our money on. This "failure" (the hypothesis turned out to be incorrect) is great! We have *learned* something about our customers and their behavior. We can use that to our advantage. "What more can we learn?"
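And "no, or little, impact" doesn't have to be a gut feeling - we can put a number on it. Continuing the TypeScript sketch above, here's one common way to compare the two conversion rates (a two-proportion z-test); the visitor counts below are invented:

    // Compare conversion rates between the two variants.
    // |z| around 2 or more (roughly p < 0.05) suggests a real difference;
    // anything smaller means the redesign had no measurable impact.
    function conversionZScore(
      oldApplied: number, oldVisitors: number,
      newApplied: number, newVisitors: number,
    ): number {
      const pOld = oldApplied / oldVisitors;
      const pNew = newApplied / newVisitors;
      const pooled = (oldApplied + newApplied) / (oldVisitors + newVisitors);
      const stdErr = Math.sqrt(pooled * (1 - pooled) * (1 / oldVisitors + 1 / newVisitors));
      return (pNew - pOld) / stdErr;
    }

    // Invented numbers: 48 of 1000 applied with the old design, 53 of 1000 with the new.
    console.log(conversionZScore(48, 1000, 53, 1000)); // ≈ 0.51 - no real difference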

But there's more: we haven't spent a lot of time and money on all the estimates, nor on updating the entire site - which would have proven a "failure" in the end anyway; but a costly failure, a late failure.

But - now to the point - there are still estimates here (or maybe there don't really have to be..?).
We have to make some judgment on how much to spend (an estimate) and how long it will take (an estimate) to perform this experiment, because an experiment isn't free :-). But I'd much rather give an estimate under these conditions. And it's less "sensitive" if a "1 month" estimate is off than if a "6 months" estimate is.
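To put rough numbers on that: if a 1-month estimate turns out to be 50% off, we lose about two weeks; if a 6-month estimate is 50% off, we lose three months - possibly on work that the experiment would have told us not to do at all.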

And what if the assumption turned out to be right; the new design made a great impact on, say, the number of applicants? Well, then we either estimate the cost of updating the rest of the web site, or we iterate with the next assumption or whatever we want to learn next.
If we choose to update the entire site, we now have much better certainty about the effort involved, since we have already done some of the work.
But by choosing to work in small batches there are other benefits as well. Instead of the design team handing off their designs to the next team and moving on to their next project, we work together. And we will save time and money! Why? Because if the design team moves on to another project, they will have to be interrupted whenever the other team has questions about how to implement the design or when problems occur. Those interruptions can be costly and cause delays - most certainly if they become "the norm".

To quote Peter Drucker (I'm not sure it was actually he who said it, but it's attributed to him):
"There is nothing so useless as doing efficiently that which should not be done at all."
