Monday, 11 August 2014

Don't Do Stupid Things On Purpose - Is it stupid?

"DDSTOP - Don't Do Stupid On Purpose". I came across this from Mr Glen B. Alleman on Twitter. You can read some of what he has written on the topic here:
It's aimed at the #NoEstimates movement, but DDSTOP doesn't seem to originate from it (if you read the post, at least).

DDSTOP sounds good at first glance. We should, of course, not do stupid things on purpose. And sometimes it does feel like we do; at least I feel that way. It happened today, actually. We were asked to estimate a change request that wasn't really thought through, and we had quite low confidence in giving a credible estimate of the effort required. So for a while we talked about how we could produce an estimate and what it should be. Then it struck me: "Wait a minute! Customer collaboration! Let's book a meeting instead and go through this together with them. Why not even let them hear what we think, directly, instead of 'delivering' the estimate?" Everyone in the team agreed. The meeting is now booked. To be continued... :-)

I call this approach "no estimates", because we made a decision without estimates - we decided to *not* provide an estimate right away, i.e. "no estimates". We *will* give an estimate. And that leaves me with: "no estimates" is not "never estimate". We also avoided a "misuse" of them; we avoided a "guesstimate". And doing the estimate in collaboration with the customer, at the same time as we look at what is to be done, is also much better in my opinion. But now I'm really off-topic...

I don't really like DDSTOP as an approach, or mindset, or whatever it is. It has a "blame" tone to it. Like: "I/You/He/She did a stupid thing, shame on me/you/him/her!" Even if we don't actually say that, we might implicitly set a culture of not wanting to try new things: "Oh, I don't dare to try this, it might be stupid, someone might even interpret it as me being stupid on purpose... That banner over there says so and all, oh well..." In my opinion, that is not a culture we should foster. In fact, quite the contrary.

I claim we actually *should* do stupid things! It reminds me of a movie quote I love, from The Three Musketeers, where D'Artagnan is about to leave his parents to meet the world and seek adventure:
D'Artagnan's Father: There's one more piece of advice.
D'Artagnan: I know, I know. Don't get into any trouble.
D'Artagnan's Father: Wrong. Get into trouble. Make mistakes. Fight, love, live. And remember, always, you're a Gascon and our son. Now go. Go.
Wonderful advice!
My take: Try! Fail! Learn! Be creative! Ignore stupid!

And when someone does an obviously stupid thing, shame on *us* (as a system or organization) for allowing it to happen. See it as something positive! A chance for learning - and improvement - in the organization. Don't just wave it off as "that was just X doing a stupid thing, and on purpose at that". There is no learning and improvement in that mindset (or whatever it is).

Taking the example from Glen's blog above: a foam filler was used without reading the directions on its application.
"Stupid! And on purpose; not reading the directions."
Well, I'd say: "Shame on *us* for not educating people on how to use the foam filler, and other safety equipment. Maybe we should even arrange courses on how to perform CPR as well?"

It's culturally important not to put any blame on employees. Not even a slight indication that we do. Not even to open up for misinterpretations of the concept that might lead in that direction (I obviously might have misinterpreted DDSTOP, but if I have, others will as well). Because it might lessen (or even kill?) the company's most precious resources: creativity, innovation and learning.

I think that banner was actually a rather bad idea. I think DDSTOP as a way of thinking is a bad idea. Not stupid though... ;-)

Friday, 8 August 2014

#NoEstimates is not #NeverEstimate - my take

This is not a true story, I'm sorry. But I really think this is an approach a company could use, and there are probably companies out there doing this daily. I just want to give an example to illustrate my view.

If you follow the #NoEstimates discussion you might come across the statement that "#NoEstimates != #NeverEstimate" ("!=" is a programming operator (in some languages) that means "not equal to"). But what does that mean? In Twitter discussions (where most of the debate happens) it tends to become a "defend your stance" kind of discussion. What I mean is: it feels like it turns into a "never estimate" vs "always estimate" debate - even though we say #NoEstimates doesn't actually mean "never estimate" (and I doubt the opponents think "always estimate" is something great either).

This is unfortunate in my opinion. It muddles the real topic.

So, to give some "credibility" to my points - because I believe in the ideas/thoughts behind #NoEstimates - I'm going to give an example of what "it doesn't mean #NeverEstimate" means, in my opinion.

These thoughts come from reading the book "The Lean Startup" by Eric Ries. Those of you who have read the book will recognise the idea I'm presenting, and those of you who haven't: read it! It also has some big influences from the "Impact Mapping" idea - read that book too, if you haven't.

An Example

Let's say I have a customer that wants to "update the design of their web site". There's nothing wrong with that, right? Now they want to know what it will cost and when it can be expected to be finished. They might crank in some additional changes while they're at it, or not - it doesn't really matter here.
Now what happens? Well, first they have to contact some web design company that has to look at this and give an estimate of what a "design update" might cost. When that's done, someone has to implement the design in the real web site ("real code"). That's probably done by another company - which needs to give a cost estimate for the update as well.

Nothing strange going on here. The project might even (unusually enough) be on time and budget in this example. All is fine. All companies involved might even celebrate the "Web Site Design Update" project. A true success! And it is. ...or is it?
I mean, in a project sense it is. We didn't break the budget and we released on time. That *is* great. But still... all that time and money spent - was it worth it?

I guess no one really knows.

Because what is the hypothesis? What assumptions about our web site user's behavior have we made? What is it we want to learn about them and their behaviors that will give us a competitive advantage? In what way will they help us earn/save money? And - very important - how can we measure that our hypothesis is correct or not?

Well, we do it by making experiments: trying and testing our hypothesis. Thinking small. Focusing on how we can gain (and measure) that knowledge as cheaply and as fast as possible - by cutting all waste, that is, removing everything that doesn't contribute to the learning we seek. Doing minimal effort to get to that knowledge.

How can we apply this here?

Well, what's the hypothesis about updating the web site design? That depends on what the company does. But let's pretend that the main purpose of the web site is to steer visitors to a certain place or get them to do something specific, like "becoming an applicant for X" (there can be many different purposes of a web site that give a company real value - that is, $$$ in the end).

I guess the hypothesis then becomes: "If we update the design, more visitors will apply for X."
Now the question becomes: "How can we prove this assumption is correct?" "What's the minimum thing we can do to test it, and how can we measure that what we do is actually what affects those metrics?"

On a web site, we can do an "A/B test". That is, let half of the users (or maybe a specific set of them) see and use the new design while the others see the old one. Try it on just the start page, or on a specific flow. And while we have this focus on "experiment", maybe we should look at improving the user experience as well? An idea we might not have come up with at all if we had just focused on "updating the design"..?
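As a minimal sketch of how such an A/B split could be implemented (the hashing-by-user-id scheme and the 50/50 ratio here are my own illustrative assumptions, not something prescribed by the example):

```python
import hashlib

def assigned_variant(user_id: str, experiment: str = "new-design",
                     ratio: float = 0.5) -> str:
    """Deterministically assign a user to the 'new' or 'old' design.

    Hashing the user id together with an experiment name gives every
    user a stable bucket, so they see the same variant on every visit.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # value in [0, 1]
    return "new" if bucket < ratio else "old"

# The same user always lands in the same bucket:
assert assigned_variant("user-42") == assigned_variant("user-42")
```

With something like this in place, measuring the hypothesis is just a matter of logging "applied for X" per variant and comparing the two conversion rates.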

The outcome might be that the new design had little or no impact on the behavior of our customers/visitors.
Then an update of the *entire* site might not be what we should spend our money on. This "failure" (the hypothesis turned out to be incorrect) is great! We have *learned* something about our customers and their behaviors. We can use that to our advantage. "What more can we learn?"

But also: we haven't spent a lot of time and money on all the estimates, nor on updating the entire site - which would have proven to be a "failure" in the end anyway; a costly failure, a late failure.

But - now to the point - there are estimates here (though maybe there don't really have to be..?).
We have to make some judgment on how much to spend (estimate) and how long it will take (estimate) to perform this experiment, because an experiment isn't free :-). But I'd rather give an estimate under these conditions. And it's less "sensitive" if a "1 month" estimate is off than if a "6 months" estimate is: a 50% overrun on one month costs two weeks, while a 50% overrun on six months costs three months.

And what if the assumption turned out to be right - the new design made a great impact on, e.g., the number of applicants? Well, either we estimate the cost of updating the rest of the web site, or we iterate with the next assumption, or whatever we want to learn next.
If we choose to update the entire site, we now have much better certainty about the effort of doing so, since we have already done some of it.
But choosing to work in small batches has other benefits as well. Instead of having the design team hand off their designs to the next team and move on to their next project, we work together. And we will save time and money! Why? Because if the design team moves on to another project, they might have to be interrupted in that project when the other team has questions about how to implement the design, or when problems occur. Those interruptions can be costly and cause delays - most certainly if they become "the norm".

To quote Peter Drucker (I'm not sure it was actually he who said it, but it's attributed to him):
"There is nothing so useless as doing efficiently that which should not be done at all."

Friday, 1 August 2014

My Rules of Estimation

In this post I'm just going to write down some of the things I've found out about estimates. That is: for when estimates fill a purpose (some say they *always* do, while I'm more into the #NoEstimates idea of lessening the dependency on estimates... but that's off topic in this post).

These are things that apply to me and the situation and context I'm in (the types of customers I have, the size of projects, etc.). I'm not saying they are universal truths or general guidelines. It's merely my opinion. There's no real news in here either - it's rather old stuff. I just want to share them and keep this as a reminder to my future self.

I won't go into lots of details on these items, just list them as shorter notes. I might revisit these items later on and update them.

  1. Estimate or commitment?
    Someone buying software (or anything, really) isn't that interested in estimates. They seek commitment (promises). So they are probably not asking for estimates but for a commitment. Don't confuse estimates with commitments; they are two different things and require different approaches. Here's a metaphor on the difference that I like (by Mike Cohn):
  2. Find out the budget
    It's there for sure. Then try to discuss what can be built for that amount of money. If they don't want to tell you their budget (it's a trust issue, really, but...) it's rather easy to find out anyway: suggest something clearly too high and work your way down ;-)
  3. Find out the "roughness"
    Maybe "this year" is okay as an answer? Or "less than a month"? Don't just assume they want "estimates as you usually do them". They probably don't need precise answers - you just provide those anyway - they need accurate answers (more on this in item 8). But if they really do seek a precise answer, that's a sign of something probably being a bit risky, and you should look out for it.
  4. Find out the value
    If the cost is to be estimated, the value must also be estimated. Calculate the Return on Investment (ROI). If value and cost are close (a low ROI), it's probably time to cut scope. It's a risk anyway, and it should be discussed with the customer. Sometimes this can also reveal the level of "roughness" needed in the cost estimates (item 3). Maybe cost isn't that relevant (a high ROI)?
    Value always trumps cost!
    Sometimes they can't (or won't - a sign of a trust issue?) estimate the value. If so, one could really ask why the project should be carried out at all... Anyway, go back to item 2 and you probably have the value somewhere...
  5. Estimate in collaboration
    Who says we can't estimate together? I've written a blog post that touches on that subject:
    "Customer collaboration", right? I see no problem with this if there is trust (and maybe even if there isn't?). A customer might have much valuable information to provide when estimating. At the very least they will learn about - and feel they are part of and/or have opinions about - the estimation process you use and the estimates you come up with. Great value, in my opinion. They might claim it's a waste of their time, but I think it can be very valuable for them. We have nothing to hide, right?
    If it's not possible to estimate in collaboration, at least don't present your estimate as a "truth" or "this is how it's going to be". Collaborate! Help. Guide. Tell them where there are uncertainties, or where you feel there should be margins. Tell them some margins might be used, some won't. It's like inviting people for dinner - you want people to be satisfied, so you tend to buy too much food, and that's okay. Discuss risks. Read more here:
  6. Confidence level
    Try to provide your estimates with a confidence level. This is actually not used at all where I work, but in other domains it's claimed to be utterly insane not to.
    Create a graph (usually with a Weibull-like distribution shape); it doesn't have to take much time to create or require any maths. Just use your "best bet" plus min and max values, and draw a distribution curve freehand. That's enough to give you an overview of your initial "best bet" estimate - and to discuss what you should "commit" to.
  7. Don't be an optimist!
    There might be a good reason for you to be... but usually - and especially with estimates - there isn't. Ask yourself: "Who will benefit from us being optimistic here rather than pessimistic?" There might be reasons, as I said, but usually not. One reason I can come up with is when there is no true "benefit" from doing a software project, like updating to a new version of some CMS. But then one could really ask why we should do it at all, if there is no clear "value" in doing it.
    Read some of what I've written here: And always remember "Hofstadter's Law"!
  8. Preciseness is not accuracy! 
    For instance, "952 hours" is not a good estimate. If anything, it clearly shows that it's a sum of a number of other estimates. Please stop providing estimates like that!
    It's better to be accurate than precise. It's better to say "by the end of this year" or "in August" than to say "2 August". Let the specific date be decided later on. Maybe you don't have to present the exact number of hours you estimated at all? Maybe you even shouldn't? The customer is probably only interested in what it's going to cost and roughly when to expect it.
  9. Always keep a deadline - however it was set!
    Okay, this is not entirely true. There might be good reasons to postpone a deadline, but it's a provocative title, right? :-)
    Anyway, I actually think deadlines should - often - be kept, even if they are off. Postponing deadlines can be discouraging for both the team and your customer. And it always feels good to cross the "finish line". And you probably have *something* to deploy.
    What do you do? Trim the tail! Read more here:
  10. Say No (or: don't always say Yes)!
    Even though I like item 9, it doesn't always work (as I've said). There's still something else you should keep doing during the project: say "no". Or rather - and better explained - don't always say "yes!" to everything. If things change (and they will), we have to say "this will make the initial estimate a bit obsolete" and re-estimate (if needed). There's nothing wrong (in my opinion) with re-estimating when things change; even small things might affect the schedule (and surely so if you provide those precise estimates I talked about). Read more here:
    Real user input is always good and should be encouraged. But all changes come at a price (when you drive with estimates - or you could try to lessen the estimate-driving, i.e. #NoEstimates. Damn! Now I'm there again. Sorry).
  11. Intervals
    Presenting estimates as intervals is okay, and sometimes even preferable, e.g. to mark some kind of uncertainty or confidence. It will keep your back safe, at least :-)
    But remember that some customers might tend to hear the lower end of the interval while you tend to mean the higher. And you run the risk that the lower value starts traveling through the organisation... (dysfunctional, yes, but there's always a risk).
  12. Focus on iterations
    Do everything you can to always have something "usable" to deliver. Don't build (arbitrary) parts of the system/application; instead ask: "If we abort this project after the next sprint/iteration and deliver - what will it look like then?" That is, focus on the most important things first.
    Try to have a "beta site" or "beta application" and continuously deploy to it. Let real users see it and use it - get real feedback. Don't listen to *all* feedback, but more is better than less (and one piece of feedback is infinitely better than zero).
  13. Prototype!
    You don't have to stop prototyping just because the project has started (and please use prototypes before you start the project as well); by "prototype" I include things like mockups, e.g. in Balsamiq. Don't finish all the nifty little details 100% (see the previous item as well); give users something they can see and interact with. It doesn't have to work all the way down to the database or handle all the "edge cases", etc. Read more here:
    Remember that real user input is something good! Responding to change over following a plan, right?
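As a small illustration of items 6 and 11, here is a sketch of turning a min / best-bet / max judgement into an expected value and an interval to present. The PERT-style weighting and the rule-of-thumb standard deviation are my own choices for illustration, not something the list above prescribes:

```python
def pert_estimate(optimistic: float, best_bet: float, pessimistic: float) -> dict:
    """Combine a three-point estimate into an expected value and an interval.

    Uses the classic PERT weighting, where the 'best bet' (the mode)
    counts four times as much as the two extremes. The resulting
    right-skewed shape resembles the Weibull-like curve mentioned in
    item 6: overruns tend to be larger than underruns.
    """
    expected = (optimistic + 4 * best_bet + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6  # common rule of thumb
    return {
        "expected": round(expected, 1),
        "interval": (round(expected - std_dev, 1), round(expected + std_dev, 1)),
    }

# E.g. "at best 2 weeks, most likely 4, at worst 12":
print(pert_estimate(2, 4, 12))  # {'expected': 5.0, 'interval': (3.3, 6.7)}
```

Presenting the interval rather than a single precise number fits item 8 as well: it is accurate about the uncertainty without pretending to a precision that isn't there.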
Remember, this list is mainly written for my own purposes, and it fits me and my context. But I'll be happy if you'd like to share your thoughts and opinions in the comments.