Sunday, 29 March 2015

Not estimating can also be problematic

Let me start by saying that I'm all for trying things. Try. Evaluate. Learn. By the time you've read this post you'll have forgotten I said this, maybe because I'm contradicting myself. I don't know.

Anyway, I'll tell you a story. I once worked as a software developer at a product company (I didn't work as a consultant/contractor back then). We developed a hardware firewall/ADSL modem. It was a small company, around 20 people, with maybe 5-7 software developers working on the product.

Anyway, I almost never estimated the work I was assigned. It was more like "We really need to get this next thing done, please make sure it gets done" (not in those exact words of course, and in a rather friendly tone, actually). I admit, it sometimes felt kind of nice back then. But it wasn't always good. Because there were still expectations (which is kind of why my opinion is that "no estimates" doesn't really exist). And when things took a little bit longer than someone (somehow) expected, it felt even worse, because I "didn't meet the expectations". And I was pretty junior back then. And I hadn't really had the chance to say or claim anything about what I thought the effort was. I would probably have felt bad even if I'd had that chance (or not, if my claim about the effort had been close to correct). But not estimating didn't help either.

I don't know if we were really doing #NoEstimates. Probably not (because in the end there were estimates at some level after all). But I did not estimate most of the work I did, so I guess it kind of counts..?

So, if someone is planning on letting go of estimates, I'd advise: don't just try it. Make sure everyone is on board with that decision. Someone might actually not like it. I guess that's kind of obvious, though. And no one is saying you shouldn't do it. It's just a piece of advice, or perhaps something to think about at least.

I guess my point is that if the "dysfunctions" aren't solved, letting go of estimates might solve some problems, but it might also create new ones. And if you don't estimate, someone else probably will (for you), and that might feel even worse (for some). So, I don't know, probably the entire organization needs to be on board with the decision? I don't really know, it's just my experience. I guess the goal is to always keep a tight feedback loop, but I'm not sure that will remove the "expectations" of "the next thing that really needs to get done"? Or perhaps the solution is to make sure everyone is on board with not having expectations of anything? Maybe the solution is to always say: "I will have something of value to show you tomorrow"? Will that somehow remove the expectations? Or maybe the solution is to say something like "Sit here and help me make sure it gets done". Is that possible? Maybe. Maybe not. Or maybe something else entirely?

Please try for yourself! I guess I'm a bit reluctant to try again myself, though (coward!). But hey, times change! :-)

Friday, 27 March 2015

Estimates, decisions, prioritization, inventories

Ideas

In my personal life (as in any organization), ideas are like a flow of things. (This is nothing new for people who know about lean etc.) I see it like this:
I (or an organization, for that matter) probably have a lot of ideas, or "things", of what I want to do. It's like I have an inventory of ideas/things. That's great. I hope? (I don't see anything bad about it, at least).
The next step in the flow is the "materialization" of those ideas/things: actually working on them, or doing some planning on what needs to be done.
And the final step is the outcome - the idea/thing actually materialized.
That's basically how I view the flow of things in my life (and how I see it in organizations as well).
Ideas/things come in different sizes. I should switch the tyres on my car, I should take it in for service, I should put up new curtains at home, paint a room, remodel my kitchen, etc etc. One can easily correlate this with ideas/things in an organization (at probably all levels) as well.
But the thing is, I only have so much time to do all these things. And I can't afford to "add resources" by hiring people who can help me.

Priorities, decisions

I don't need to dwell on this. I need to do these things, you need to do these things, organizations need to do this. We have limited resources/time, so we need to make decisions and prioritize. And that's good - I'll get to that.
But let's say I need help because I'm not quite sure how to decide/prioritize. The value part is pretty much a given, even though it might be hard to put numbers on it. But I'm not sure about the effort of doing some of these things. I need help.
However someone gives me an idea of that effort/cost/whatever - whether they judge, forecast, slice, whatever - I will treat it as an estimate. Because we can't know exactly, and unexpected things always happen.

Being reasonable

When unexpected things happen, I might question whether those things couldn't somehow have been foreseen, or why there wasn't some margin for such things. (Usually, in contracts, there are disclaimers for really unexpected things.) But if there's a good reason for it, I'll buy that. That's not a big deal. "Shit happens", right? I'll somehow raise more money or stall everything. What can I do? Was it a bad decision? No. Why dwell on that? Let's do what we can right now, based on the information we have now. We made a decision based on the information we had at that moment. That's great! Be positive. I'm glad we at least started something. Now let's finish it. And learn for next time.
Would I have made another decision if I had known? Maybe. Maybe not. But why dwell on that? It is what it is. Look ahead. Stay positive. And learn!

Feedback

Hopefully some progress will be reported, so I know ahead of time how things are going and can steer, adjust and coordinate - even with the other things I want to do. The best thing is to actually show me progress, not just tell me about it. If I get to see small pieces of value being delivered and ready, I might even start using some of them. I might actually say "Stop, I think that's enough" (who knows?). And if you show me progress I'll probably stop asking for estimates during this materialization - I can obviously just look at the progress and see for myself whether we're on track or not.

Inventory

Let's say I don't want to have an inventory of ideas, or that I want to drastically decrease it. I'd have to get more resources (or time, however that is done?) to get them done. That's fine. But you know what I think will happen? I will just get even more ideas. And we're back at square one. And that's great! It's called growing.
Or I could just delete most, or all, ideas. But that doesn't feel right.
Or I could start working on all the ideas/things at once (and have all of them in the "materialization phase"). But then I'd probably never really finish any of them... You know: limit WIP.

"Materialization phase"

When an idea/thing is in this step/phase, I don't really care much about anything other than progress, as I said earlier.
If the one I'm hiring doesn't feel it's valuable - in their process - to use estimates, that's fine. If they don't want to do sprints or sprint planning or planning poker or story points, or if they want to slice or forecast or whatever, I really don't care that much. (Even though I might be interested in how they work.) And this is probably where #NoEstimates has its place?
But as long as I somehow have an idea of how we're progressing and/or whether it seems like we need to plan (or do) something differently, I'm happy.
When someone reports that progress, it's an estimate to me (because they can't possibly tell me it's a fact. But if it is - great!). Thanks for those estimates! (Sorry if I've never said that; I should.) I feel that's valuable to me.

Bottom line

I think this is kind of basic stuff.
Do I want to change how this works? I don't see any reasons.
Is it a smell of possible dysfunctions? I can't actually sniff any.
In fact I think it's part of living. Both personally and as an organization.

Love! Live! Fight! Learn!

And how do I look upon #NoEstimates?
It's about the dysfunctions associated with estimates. It's not about throwing the baby out with the bathwater.
Could it have a better name? Hell, yeah! :-)
But it's like some kinds of "journalism": you have to make some drama to get attention to a topic. One can have lots of opinions about that. Mine is probably as good as yours :-)

Monday, 2 February 2015

ApprovalTests part II (or things I didn't show at swetugg)

I gave a talk at Swetugg about ApprovalTests. Swetugg is a conference arranged by the Sweden .NET User Group. Thanks a lot for a great conference and all the work you put into it!

The presentation was recorded so I'll put up the link to that video when it's available. It's in Swedish though.

Anyway, my talk and demo on ApprovalTests was merely a teaser for the tool. So in this post I thought I'd show some other pretty neat things you can do with it - especially how it can be used on legacy code. If you haven't seen ApprovalTests before, this post might be a little bit difficult to follow; it's mostly a follow-up to my talk. But you can watch an introduction to ApprovalTests on the site: http://approvaltests.sourceforge.net/

Nothing I show here is new; it can all be seen and downloaded from other places. This is just my view on it.

Testing Console Application

There's a code kata called "Gilded Rose". The code can be downloaded here: https://github.com/NotMyself/GildedRose

But it can be any Console Application you have, or any other program that logs output to some logger (most loggers support Dependency Injection, so you can grab the output from your test). But let's use the Gilded Rose Program.

So, let's start by creating the Gilded Rose program. Copy the code from the link above into a new class, call it GildedRose.cs or something. Then also download Program.cs from the link above.

If you run the program you'll see it renders some output to the console. But let's see what we can do instead.
Create a new test fixture class (I use xUnit, but you can use whatever you like). Add the UseReporter attribute to your test fixture (you have to grab ApprovalTests from NuGet first, of course). Then we can redirect the output to our own StringWriter, like this (sorry about using pictures/screen caps, but I want to show what it really looks like on my machine):
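Since the screen caps won't come along here in text, here's a minimal sketch of what that test can look like. The class and test names are mine, and I'm assuming the kata's Program exposes a Main you can call from the test project (you might have to make it public):

    using System;
    using System.IO;
    using ApprovalTests;
    using ApprovalTests.Reporters;
    using Xunit;

    [UseReporter(typeof(DiffReporter))]
    public class GildedRoseProgramTests
    {
        [Fact]
        public void VerifyConsoleOutput()
        {
            // Redirect everything the program writes to the console
            // into our own StringWriter instead of the terminal.
            var output = new StringWriter();
            Console.SetOut(output);

            // Run the program under test.
            Program.Main(new string[0]);

            // The captured console output becomes the "received" file,
            // and the reporter opens it in your difftool on failure.
            Approvals.Verify(output.ToString());
        }
    }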

When we run the test, we will now have the output in our "received" file:

Let's approve that. This is production code so we know it works (well... it could contain bugs of course, but that's another story. We don't want to fix those bugs now anyway).

(Another pretty good thing about logging is that we can always add more. Logging doesn't (shouldn't?) break things, it just brings you more information about what happens.)

Great! Now we can start refactoring our code! Let's - just for fun - try and move this if-statement (we might think that's a good refactoring step?) and see if we break anything:
What happens? Let's run the test:

Yep, that broke it. I like how easily we can spot the difference using the difftool. But we don't really care what actually broke, just that it did.

Combination Tests, or "Locking code"

This is another pretty neat thing. I've stolen this directly from Llewellyn Falco (modified it some, hope that's ok...). There's a video on this, but I'll show it as screen dumps here.

First, download my version of MovieNight.cs from this gist: https://gist.github.com/henebb/a4c3ac25399858234e3f

Create a new test fixture class and first write this:
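As a textual sketch of that first test (note that the method name PlanMovieNight and its parameters are my guesses here; check MovieNight.cs in the gist for the real signature):

    using ApprovalTests;
    using ApprovalTests.Reporters;
    using Xunit;

    [UseReporter(typeof(DiffReporter))]
    public class MovieNightTests
    {
        [Fact]
        public void PlanMovieNight()
        {
            // One call with fixed inputs: who is watching, and for how long.
            var result = MovieNight.PlanMovieNight("Mr.", "Henrik", 3.5);
            Approvals.Verify(result);
        }
    }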

If you run the test you will get an output that says: "Mr. Henrik should watch A Most Wanted Man, Sex Tape," (thus I will have time to watch "A Most Wanted Man" and "Sex Tape" during my 3.5 hour movie night).

But this doesn't really feel like we've covered much. And indeed, if we run a code coverage tool (like the one built into Visual Studio: Test > Analyze Code Coverage) we see that not all the code is covered.

But we can modify our approval and use what is called a combination approval. It looks like this - note how we have to change our variables to arrays (all the combinations we want to try) and how we moved the function call into CombinationApprovals:
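In text form, it's something like this sketch (same caveat as before: the exact function name and the values in the arrays are my guesses; in my demo the arrays multiplied out to 300 combinations):

    using ApprovalTests.Combinations;
    using ApprovalTests.Reporters;
    using Xunit;

    [UseReporter(typeof(DiffReporter))]
    public class MovieNightCombinationTests
    {
        [Fact]
        public void PlanMovieNight_AllCombinations()
        {
            // Every input is now an array of the values we want to combine.
            var titles = new[] { "Mr.", "Mrs.", "Dr." };
            var names = new[] { "Henrik", "Anna" };
            var hours = new[] { 1.5, 3.5, 8.0 };

            // VerifyAllCombinations calls the function once per combination
            // (3 * 2 * 3 = 18 rows here) and approves the whole list as one file.
            CombinationApprovals.VerifyAllCombinations(
                MovieNight.PlanMovieNight, titles, names, hours);
        }
    }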

If we run the test, we now get this output (excerpt of the screen dump):

The output will have 300 rows (one for each combination). And if we look at the code coverage we can now see that we've covered everything. We can approve this output and thus "lock" it. And then we can refactor the code as we please.

Those are two more powerful things you can use ApprovalTests for. If you download ApprovalTests you can explore the other nice reporters and approvers available for your specific needs.

Monday, 11 August 2014

Don't Do Stupid Things On Purpose - Is it stupid?

"DDSTOP - Don't Do Stupid On Purpose". I came across this by Mr Glen B. Alleman on twitter. You can read some of what he's written on the topic here: http://herdingcats.typepad.com/my_weblog/2014/05/ddstop.html
It's aimed at the #NoEstimates movement. But DDSTOP doesn't seem to originate from that (if you read the post, at least).

DDSTOP sounds good at first glance. We should indeed not do stupid things on purpose. And sometimes it kind of feels like we do - at least I feel that. It happened today, actually. We were asked to estimate a change request that wasn't really thought through. At least we had quite low confidence in giving a credible estimate of the effort required. So for a while we talked about how we could give an estimate and what it should be. Then it struck me: "Wait a minute! Customer collaboration! Let's book a meeting instead and go through this together with them. Why not even let them hear what we think, directly, instead of 'delivering' the estimate." Everyone in the team agreed. The meeting is now booked. To be continued... :-) I call this approach "no estimates", because we made a decision without estimates - we decided to *not* provide an estimate, i.e. "no estimates". We *will* give an estimate. And that leaves me with: "no estimates" is not "never estimate". We also avoided a "misuse" of them; we avoided a "guesstimate". And doing the estimate in collaboration with the customer, at the same time as we look at what is to be done, is much better too, in my opinion. But now I'm really off-topic...

I don't really like DDSTOP as an approach or mindset or whatever it is. I feel it has a "blame" tone to it. Like: "I/You/He/She did a stupid thing, shame on me/you/him/her!". And even if we don't actually say that, we might implicitly create a culture of not wanting to try new stuff: "Oh, I don't dare to try this, it might be stupid, someone might even interpret it as me being stupid on purpose... That banner over there says so and all, oh well...". In my opinion, that is not a culture we should foster. In fact, quite the contrary.

I claim we actually *should* do stupid things! It reminds me of a movie quote I love. From The Three Musketeers. D'Artagnan is about to leave his parents to meet the world and seek adventures:
D'Artagnan's Father: There's one more piece of advice.
D'Artagnan: I know, I know. Don't get into any trouble.
D'Artagnan's Father: Wrong. Get into trouble. Make mistakes. Fight, love, live. And remember, always, you're a Gascon and our son. Now go. Go.
(http://www.imdb.com/title/tt1509767/quotes?item=qt1793828)
Wonderful advice!
My take: Try! Fail! Learn! Be creative! Ignore stupid!

And when someone does an obviously stupid thing, shame on *us* (as a system or organization) for allowing it to happen. See it as something positive! A chance for learning - and improvement - in the organization. Don't just wave it off as "that was just X doing a stupid thing, and on purpose no less". There is no learning and improvement in that mindset (or whatever it is).

Taking the example from Glen's blog above: a foam filler was used without reading the directions on its application.
"Stupid! And on purpose: not reading the directions."
Well, I'd say: "Shame on *us* for not educating people on how to use the foam filler, and other safety equipment, instead. Maybe we should even arrange courses on how to perform CPR as well?"

It's "culturally" important to not put any blame on employees. Not even a slight indication that we do. Not even open up for misinterpretation of the concept that might lead in that direction (I obviously might have misinterpreted DDSTOP. But if I have, others will as well). Because it might lessen (or even kill?) the company's most precious recourses; creativity, innovation and learning.

I think that banner was actually a rather bad idea. I think DDSTOP as a way of thinking is a bad idea. Not stupid though... ;-)

Friday, 8 August 2014

#NoEstimates is not #NeverEstimate - my take

This is not a true story, I'm sorry. But I really think this is an approach a company could use. And there are probably companies out there that are doing this, daily. I just want to give an example to state my view.

If you follow the #NoEstimates discussion you might come across the statement that "#NoEstimates != #NeverEstimate" ("!=" is a programming operator (in some languages) that means "not equal to"). But what does that mean? In Twitter discussions (where most of the discussion happens) it tends to become a "defend your stance" kind of discussion. What I mean is: it feels like it becomes a "never estimate" vs "always estimate" discussion - even though we say #NoEstimates actually doesn't mean "never estimate" (or that opponents actually think "always estimate" is something great either?).

This is unfortunate in my opinion. It muddles the real topic.

So, to give some "credibility" to my points - because I believe in the ideas/thoughts behind #NoEstimates - I'm going to give an example of what "it doesn't mean #NeverEstimate" means, in my opinion.

These thoughts come from reading the book "The Lean Startup" by Eric Ries (http://theleanstartup.com/). Those of you who have read the book will recognise the idea I'm presenting, and those of you who haven't: read it! It also has some big influences from the "Impact Mapping" idea (http://impactmapping.org/) - read that book too, if you haven't.

An Example

Let's say I have a customer that wants to "update the design of their web site". There's nothing wrong with that, right? Now they want to know what it will cost and when it can be expected to be finished. They might crank in some additional changes while they're at it, or not. It doesn't really matter here.
Now what happens? Well, first they have to contact some web design company that has to look at this and give an estimated cost of what a "design update" might cost. When that's done, someone has to implement the design into the real web site ("real code"). That's probably another company - and they need to give an estimate of the cost for the update as well.

Nothing strange going on here. The project might even (unusually) be on time/budget in this example. All is fine. All companies involved might even celebrate the "Web Site Design Update" project. A true success! And it is. ...or is it?
I mean, in a project sense it is. We didn't break the budget and we released on time. That *is* great. But still... all that time and money spent - was it worth it?

I guess no one really knows.

Because what is the hypothesis? What assumptions about our web site users' behavior have we made? What is it we want to learn about them and their behaviors that will give us a competitive advantage? In what way will they help us earn/save money? And - very important - how can we measure whether our hypothesis is correct or not?

Well, we do it by making experiments. Trying/testing our hypothesis. Thinking small. Focusing on how we can gain (measure) that knowledge as cheaply and quickly as possible - by cutting all waste, that is: removing everything that doesn't contribute to the learning we seek. Doing minimal effort to get to that knowledge.

How can we apply this here?

Well, what's the hypothesis behind updating the web site design? That depends on what the company does. But let's pretend the main purpose of the web site is to steer visitors to a certain place or to do something specific, like "becoming an applicant for X" (there can be many different purposes of a web site that give a company real value - that is, $$$ in the end).

I guess the hypothesis then becomes: "If we update the design, more visitors will apply for X".
Now, that becomes: "How can we prove this assumption is correct?" "What's the minimum thing we can do to test it, and how can we measure that what we do is what's actually affecting those metrics?"

On a web site, let's do an "A/B test". That is, let half of the users (or maybe a specific set of them) see and use the new design and let the others see the old one. Try it on just the start page or a specific flow. And while having this focus on "experiment", maybe we should look at improving the user experience as well? An idea we might not have come up with at all if we had just focused on "updating the design"..?
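Just to make the A/B split concrete, here's a tiny sketch (every name in it is mine, it's not from any real site) of how you could deterministically route half of the visitors to the new design:

    using System;

    public static class DesignAbTest
    {
        // Assign a visitor to the "new design" group from a stable user id,
        // so the same visitor always sees the same variant.
        public static bool ShowNewDesign(string userId)
        {
            int bucket = (int)(StableHash(userId) % 100); // bucket 0..99
            return bucket < 50; // 50% of visitors get the new design
        }

        // Simple FNV-1a-style hash; string.GetHashCode() is randomized
        // per process in modern .NET, so we roll a stable one ourselves.
        private static uint StableHash(string s)
        {
            uint hash = 2166136261;
            foreach (char c in s)
            {
                hash ^= c;
                hash *= 16777619;
            }
            return hash;
        }
    }

Then you measure "applied for X" per variant and compare.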

The outcome of this might be that the new design had little or no impact at all on the behavior of our customers/visitors.
Then an update of the *entire* site might not be what we should spend our money on. This "failure" (the hypothesis turned out to be incorrect) is great! We have *learned* something about our customers and their behaviors. We can use that to our advantage. "What more can we learn!"

But also: we haven't spent a lot of time and money on all the estimates, or the time and money on updating the entire site - which would have proven to be a "failure" in the end anyway; but a costly failure, a late failure.

But - now to the point - there are estimates here (or maybe there don't really have to be..?).
We have to give some judgment on how much to spend (estimate) and how long it will take (estimate) to perform this experiment, because an experiment isn't free :-). But I'd rather give an estimate under these conditions. And it's less "sensitive" if a "1 month" estimated cost is "off" than if a "6 months" estimate is.

And what if the assumption turned out to be right - the new design made a great impact on e.g. the number of applicants? Well, either we estimate the cost of updating the rest of the web site, or we iterate with the next assumption or whatever we want to learn next.
If we choose to update the entire site we now have much better certainty about the effort of doing so, since we have already done some of it.
But by choosing to do this in small batches there are other benefits as well. Instead of having the design team hand off their designs to the next team and move on to their next project, we work together. And we will save time and money! Why? Because if the design team moves on to another project, they might have to be interrupted in that project when the other team has questions on how to implement the design, or when problems occur. Those interruptions can be costly and cause delays - and most certainly if they become "the norm".

To quote Peter Drucker (I'm not sure it was actually he who said it, but it's attributed to him):
"There is nothing so useless as doing efficiently that which should not be done at all."

Friday, 1 August 2014

My Rules of Estimation

In this post I'm just going to write down some of the things I've found out about estimates. That is: when estimates fill a purpose (some say they *always* do, and I'm more into the #NoEstimates ideas of lessening the dependency on estimates... But that's off topic in this post).

These are things that apply to me and the situation and context I'm in (the types of customers I have, the size of projects, etc). I'm not saying these are universal truths or general guidelines. It's merely my opinions. There's no real news in here either; it's rather old stuff. I just want to share them and keep this as a reminder to my future self.

I won't go into lots of details on these items, just list them as shorter notes. I might revisit these items later on and update them.

  1. Estimate or commitment?
    Someone buying software (or anything, really) isn't that interested in estimates. They seek commitment (promises). So they are probably not asking for estimates, but for a commitment. But don't confuse estimates with commitments; they're two different things and require different approaches. Here's a metaphor on the difference that I like (by Mike Cohn): http://mountaingoatsoftware.com/blog/separate-estimating-from-committing
  2. Find out the budget
    It's there for sure. Then try to discuss what can be built for that amount of money. If they don't want to tell you their budget (it's a trust issue really, but...) it's rather easy to find out anyway: tell them something clearly too high and work your way down ;-)
  3. Find out the "roughness"
    Maybe "This year" is okay as an answer? Or "Less than a month"? Don't just assume they want "estimates as you usually do". Probably they don't need precise answers - you just provide it anyway - they need accurate answers (I'll get to this soon). But, if they really seek a precise answer it's a sign of something that is probably a bit risky and you should look out for it.
  4. Find out the value
    If the cost is to be estimated, the value must also be estimated. Calculate the Return On Investment (ROI). If value and cost are close (low ROI), it's probably time to cut scope. Say the value is estimated at 1.2 million and the cost at 1 million: the ROI is only 20%, so even a small cost overrun eats the whole profit. It's a risk anyway, and it should be discussed with the customer. Sometimes this can reveal the level of "roughness" needed on the cost estimates as well (item 3). Maybe the cost isn't that relevant (high ROI)?
    Value always trumps cost!
    Sometimes they can't (or won't - a sign of a trust issue?) estimate the value. If so, one could really ask why the project should be carried out at all..? Anyway, go back to item 2 and you probably have the value somewhere...
  5. Estimate in collaboration
    Who says we can't estimate together? I've written a blog post that touches on that subject: http://kodkreator.blogspot.se/2014/03/what-noestimates-has-taught-me.html
    "Customer collaboration", right? I see no problem with this if there is trust (and maybe even if there isn't?). A customer might have much valuable information to provide when estimating. At the very least they will learn - and feel they are a part of, and/or have opinions about - the estimation process you have and the estimates you come up with. Great value, in my opinion. They might claim it's a waste of their time, but I think it can be very valuable for them. We have nothing to hide, right?
    If it's not possible to estimate in collaboration, at least don't present your estimate as a "truth" or "this is how it's going to be". Collaborate! Help. Guide. Tell them where there are uncertainties or where you feel there should be margins. Tell them some margins might be used, some might not. It's like inviting people for dinner: you want people to be satisfied, so you tend to buy too much food, and that's okay. Discuss risks. Read more here: http://www.leadingagile.com/2014/02/use-agile-build-next-home/
  6. Confidence level
    Try to provide your estimates with a confidence level. This is actually not used at all where I work, but in other domains it's claimed to be utterly insane not to.
    Create a graph (usually in a Weibull distribution form); it doesn't have to take much time to create or require maths. Just use your "best bet" plus min-max values and draw a distribution curve freehand - it's enough to give you an overview of your initial "best bet" estimate. Then discuss what you should "commit" to. (See the sketch after this list for one simple way to put numbers on it.)
  7. Don't be an optimist!
    There might be a good reason for you to be... But usually - and especially with estimates - there isn't. Ask yourself: "Who will benefit from us being optimistic here rather than pessimistic?" There might be reasons, as I said, but usually not. One reason I can come up with is if there is no true "benefit" from doing a software project, like updating to a new version of some CMS or something. But then one could really ask why we should do it at all, if there is no clear "value" in doing it.
    Read some of what I've written here: http://kodkreator.blogspot.se/2012/01/my-estimates-are-too-optimistic.html And always remember "Hofstadter's law": http://en.wikipedia.org/wiki/Hofstadter's_law
  8. Preciseness is not accuracy!
    For instance, "952 hours" is not a good estimate. If anything, it clearly shows that it's the sum of a number of other estimates. But please stop providing estimates like that!
    It's better to be accurate than precise. It's better to say "by the end of this year" or "in August" than to say "2 August". Let the specific date be decided later on. Maybe you don't have to present the exact number of hours you estimated at all? Maybe you even shouldn't? The customer is probably only interested in what it's going to cost and roughly when to expect it.
  9. Always keep a deadline - however it was set!
    Okay, this is not entirely true. There might be good reasons to postpone a deadline, but it's a provocative title, right? :-)
    Anyway, I actually think deadlines - often - should be kept. Even if they are off. Postponing deadlines can be discouraging for both the team and your customer. And it always feels good to make the "finish line". And you probably have *something* to deploy.
    So what do you do? Trim the tail! Read more here: http://www.crosstalkonline.org/storage/issue-archives/2014/201407/201407-Cockburn.pdf
  10. Say No (or: don't always say Yes)!
    Even if I like item 9, it doesn't always work (as I've said). There's still something else you should keep doing during the project: say "No". Or actually - and better put - don't always say "Yes!" to everything. If things change (and they will) we have to say "this will make the initial estimate a bit obsolete" and re-estimate (if needed). There's nothing wrong (in my opinion) with re-estimating when things change; even small things might affect the schedule (and surely so if you provide those precise estimates I talked about). Read more here: http://www.mountaingoatsoftware.com/blog/to-re-estimate-or-not-that-is-the-question
    Real user input is always good and should be encouraged. But all changes come at a price (when you drive with estimates (or you could think about trying to lessen the estimate driving, i.e. #NoEstimates. Damn! Now I'm there again. Sorry.)).
  11. Intervals
    Presenting estimates as intervals is okay, and sometimes even preferable, e.g. to mark some kind of uncertainty or confidence. It will keep your back safe, at least :-)
    But remember that some customers might tend to hear the lower end of the interval while you tend to hear the higher. And you run the risk that the lower value will start traveling through the organisation... (dysfunctional, yes, but there's always a risk).
  12. Focus on iterations
    Do everything you can to always have something "usable" to deliver. Don't build (arbitrary) parts of the system/application; instead ask: "If we abort this project after the next sprint/iteration and deliver - what will it look like then?" That is, focus on the most important things first.
    Try to have a "beta site" or "beta application" and continuously deploy to it. Let real users see it and use it - get real feedback. Don't listen to *all* feedback, but more is better than less (and one is infinitely better than zero).
  13. Prototype!
    You don't have to stop prototyping (and by "prototype" I include things like mockups, e.g. in Balsamiq) just because the project has started (and please use prototypes before you start the project as well). Don't finish all the nifty little details 100% (see the previous item as well); give users something they can see and interact with. It doesn't have to work all the way down to the database or handle all the "edge cases" etc. Read more here: https://medium.com/@ppolsinelli/from-noestimates-to-yesprototypes-1b51f6a63e5d
    Remember that real user input is something good! Responding to change over following a plan, right?
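About item 6 above, here's one simple way (my choice, the classic three-point/"PERT" formula - the freehand curve works just as well) to turn a best bet plus min-max values into numbers you can discuss a commitment around:

    public static class ThreePointEstimate
    {
        // Classic three-point ("PERT") expected value and a rough spread,
        // computed from a min (optimistic), best bet (most likely) and
        // max (pessimistic) estimate. Units are whatever you estimate in.
        public static (double Expected, double StdDev) From(
            double min, double bestBet, double max)
        {
            double expected = (min + 4 * bestBet + max) / 6;
            double stdDev = (max - min) / 6; // rough spread
            return (expected, stdDev);
        }
    }

    // Example: min 2 weeks, best bet 4 weeks, max 10 weeks gives an
    // expected ~4.7 weeks with a spread of ~1.3 weeks - so committing
    // to the "best bet" of 4 weeks is probably optimistic.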
Remember, this list is mainly written for my own purposes, and it fits me and my context. But I'll be happy if you'd like to share your thoughts and opinions in the comments.

Monday, 28 July 2014

Capabilities (or why IT projects fail)

When you (as a company/organization/department/whatever) have an idea or a need or a problem you want to solve you may not have the ability to materialize the idea or solve the need/problem. You then need an ability to do it - a capability. That's the definition of "capability" I'd like to use when it comes to software (or IT in general).

A company with an idea/problem/need might end up wanting some kind of capability that requires software.

Now, they want to know what that capability (via software) is going to cost them to get, and maybe they need it at a specific date, or at least they want to know when to expect their new capability to be ready. Nothing strange about that - who wouldn't?

The software is built. It might unfortunately be over budget (underestimated, obviously), late (underestimated, obviously), or it might not provide the capability they needed.
But, I say, it really doesn't have to matter. At least not why it is a "fail". Maybe it's on (or below?) budget, on time (or maybe early?) and the right capability was provided - and it's still a "fail". Why? Because the idea wasn't that good, or the need wasn't really something someone actually had, or solving the problem didn't really matter.

So, how can you know before you have the capability? Well, you can't. The only way to know is to actually have it. But there's a mind shift here. (Some are actually already doing this.) Validate. At the end of the day it's all about learning. You want to learn whether your idea is good, whether the need is actually a need someone has, or whether solving the problem is worthwhile. And eventually you *will* learn (or not, if you choose not to do it).
And you can only *assume* that the capability will give you that learning.

Now, you can either start thinking about it (analyzing) or you can start acting, creating. That's the mind shift. I'm not saying: stop thinking! Stop analyzing! But instead of "let's postpone this to the next meeting" (or whatever), think: "What can we do today/tomorrow that will bring us learning about this? Is there anything small we can start doing?" I.e. how can we try - and validate - this assumption we have? Anything? Think "small". Think "learn". Think "validate".

If I take it to the extreme, I basically think they have two options (according to The Rule of Three they should have three options and not two, because 1 option is a trap, 2 is a dilemma, 3 is a choice :-) - heard through George Dinwiddie and Jerry Weinberg. But I'll let you ponder on that third option yourself. Or let all the "gray zones" in between these two extremes be those options).

1. Assume there's a software project ahead and start defining it: gathering requirements, planning, estimating, etc.
2. Assume there's something small that can be started now (or as soon as possible), with a focus on validating that the assumptions are correct. Maybe we can continue iterating from there, or at least we are more confident (validated) that the capability will make a difference and can go to point 1 above.

I'm not saying some (most?) companies aren't doing this already. But in my experience, the customers/companies I've met don't have much of this mindset. Your experience may differ.

And I suggest that if you have validated that the basic idea or need or problem will "pay off", it might not matter that much if the project is over budget and/or late? It's never a good thing, I admit that, but we might not consider it a "fail"?

But then again, it feels good being able to "blame" the IT project (and probably the "supplier" - someone else!) for being the "failure" instead of trying to change your own behaviors and seeing your own responsibility..?