Thursday 5 November 2015

The One Button Approach

A great way to start off a blog post is to tell a story about Steve Jobs. So I will do that. Unfortunately, I don't think it's true. Not even slightly. I've only heard it as an anecdote. But let's pretend it's true; it's funnier that way. The story goes like this:

Apple was about to implement a "clone DVD" feature. So a group of people got together to specify and sketch out how this clone feature was going to work. They prepared some documentation and some wireframes for the day when Steve Jobs was going to meet them and approve their work. They felt prepared. And the day came. Steve entered the room where the group sat. They were ready to hand him their work and present it to him. But Steve just walked up to the whiteboard and drew a big button that said "Clone DVD". He then said "This is what you are going to build. Good luck!", and then he left the room, leaving the group staring at the whiteboard.



This little story has got me thinking about whether it (besides not being true) is a good way to think about how one could "slice" stories. Or split the work into something minimal.
Imagine if all your application (or some part of it) had was just a single button. Not likely, but that's not the point. Work your way backwards from there, asking "Why isn't it possible to just have a single button?"
I like to use stories as just that: stories. Perhaps just a short note on what we want to achieve. Like "Clone DVD". Just a verb and a noun. Perhaps you could use user stories as well, but they don't really serve as a button, do they?

Let's take another example. Let's say we have a site or other application where one of the things people can do is apply for jobs. You already have a button:



Why won't this suffice? Well, one obvious thing is that a user wouldn't even know what job they were applying for. Another is that we, as receivers, wouldn't know who the applicant is. And there are lots of other things we could do here to make people want to apply for the job. But still, as a first iteration (or sprint), why not keep just the button? Figure out all the other things in the meantime. This way we can work on making the whole thing work all the way through the process and actually produce a fully working job application. Then keep adding the other features once you know it works all the way through.

Or, this single button could work as a starting point for a discussion. As said previously: "Why won't a single button do?" Then let the discussion flow, but keep asking "Do we really need this now?". The ultimate goal is to have just that single button. Wouldn't you love this as a user as well? "Clone DVD" and voilà!

What is the point here?


Update:

After getting some feedback, I felt I wanted to update and clarify.
The feedback was that a button feels like a solution. While this can actually be true (as in: in a first iteration we deliver a single button), it isn't the main point. I'd like to emphasize that it will probably never happen that we deliver a single button at any point.
First of all, the button should carry the name of some capability (like "Apply for job") and then act as a conversation starter for what is required for, e.g., applying for a job and how that can be achieved in the simplest way. But most likely the end solution won't even have a button named "Apply for job"; it might be just "Save" or "Send" or something.

The button is hopefully a good way to trigger an early conversation about the UX (User eXperience: the user interactions, the flow, the design (although not initially the graphical design) etc.). The true value of an application happens in the usage - the interaction that happens "between the screen and the chair". An application won't ever be valuable unless someone uses it and it makes that person more productive, or creates better quality work, or is faster, or is simply happier, or whatever. And the best way is to start (and end) in the use. Having a discussion around the UX is also a great way to create shared understanding of what we want to achieve in general. Hopefully we can also see the value in having a UX designer at an early stage. Perhaps start building "paper prototypes" etc. "Visuals" are usually much better at triggering discussions than plain text.

So please don't see this mainly as a way to deliver a single button. It's just a conversation starter. You can still read my previous point (it's struck through to note it is the "old" one):

It's about simplicity. Do not add more than necessary. I think The One Button Approach can help with this. Perhaps you struggle to come up with what to deliver in the first iteration? Just focus on making it work with a single button. Like a vertical slice. And as I've mentioned already, it could also be used in story discussions (or call it "gathering requirements"). The thing to discuss is "Could the user experience just be a single button: 'Do the thing the user wants to achieve'?". And evolve from there, perhaps adding more buttons for all the things a user of the application would want to achieve.

Still valid, though it is mostly confusing perhaps..?





Saturday 1 August 2015

"What can I get for..?"

I've been following some discussions on twitter and read in various blog posts about shifting to budgets (http://neilkillick.com/2013/11/25/how-much-will-it-cost-and-when-will-i-get-it/, http://gojko.net/2014/02/24/budget-instead-of-estimating/, http://ronjeffries.com/articles/015-jul/mcconnell/ - I've even written some about it as well: http://kodkreator.blogspot.se/2014/08/my-rules-of-estimation.html). It's not really a shift, because I think budgets always exist in one way or another, but the idea seems to be to move to questions like "What can I get for $X?" or "What is possible for $X?" or even "How much are you willing to spend/invest?", "How much is this worth for you?" (or similar).

These are seemingly great questions (approaches), and they may work really well in some situations. But I sense some problems with them as well; let me explain.

In everyday life, this is probably how we buy things. If I'm interested in, e.g., a new TV, then "What TV can I get for $X?" is a common, sensible question. And also a good thing to let the TV salesman know. But buying software isn't really the same thing as buying a TV or a car or whatever. As the famous quote goes, "Your job isn't to get the requirements right - your job is to change the world" (Jeff Patton). In other words, we're not selling cars - we're selling "dreams" (sounds epic, right?).
I think, perhaps, it's a better analogy to compare buying software to buying art or (even better?) a pet.
If I want a dog, the main question in my head isn't "What kind of a dog can I get for $X?" or "What is a dog worth to me?". And if a dog breeder asked me "How much are you willing to spend on a dog?", that would - to me - come off as a pretty weird question. I have "a dream" of owning a Labrador Retriever (just an example, and I think you can compare with art here as well), not just settling for a dog I can afford (although that might be something someone actually would do). I think the same goes for software in many cases. (I wrote an article in a Swedish computer magazine about the analogy of software and pets: "Så får du it att räcka vacker tass" (Swedish).)

In my experience, the "value part" - "what is this worth?" - is quite often not really known (or very hard to predict), or perhaps not even really interesting. Of course, getting to know these things might be worthwhile, and sometimes (often?) it's actually really good to find out and explore these figures. But again, the most interesting question when buying a pet/art/software often isn't "What is this worth (in $) to me?". Thus, the first question isn't "What are you willing to spend?" (the car salesman approach). The first question to ask is "What is your 'dream'?".

Then, of course, the inevitable question will come: "What will this cost me?", and whatever we respond, the response to that will most likely(?) be "I can't spend that much!". But what if it isn't? Wouldn't you rather have someone's "dream" fulfilled (or as close as we can get) than know, upfront, what they are willing to spend on something? (I'm stretching the word "dream" here, I know, but I think you get it.) And to be honest, I believe the question "What are you willing to spend?" (or similar) will most likely make you come off as a "car salesman". Not a good start (if you start with that question).

Of course, in business, the difference is that we want to make money. Art/pets won't (usually) do that for you - and that's (usually) not the goal either. But again, that question is quite often really hard to answer: "How will this make me (a lot of) money?" And here I believe that moving to validating assumptions (instead) is a good approach. Like: "How can I grow a sustainable business out of this?" (read more in The Lean Startup; although there are things to consider as well: Lean Start-Up, and How It Almost Killed Our Company). This is another topic.

Another quite common thing is that some things "must" be done, for various reasons (e.g. regulatory), and then the question: "How much can I get for $X?" isn't relevant either.

Also worth considering is that being explicit with your budget might not be that great (in a "low trust environment"?).

Friday 26 June 2015

My thoughts on "Agile for Humans" podcast

I recently listened to a podcast on the topic of #NoEstimates on the "Agile For Humans" podcast (follow the link for the podcast and the participants).

Overall, I think it was a great podcast! Some good points were raised and the discussions were interesting. It left me with some thoughts, and I took some quick notes while listening, so I'll share them here.

First of all, on the topic of estimates, I think the discussions were very developer-centered. And there's nothing wrong or strange with that, since the participants are all software developer coaches and similar. But when it comes to the need for and use of estimates, it's not just about the software developers. So, I would have liked to hear a discussion more from a "business" perspective. That's the overall feeling I got.

There was a discussion about the name/hashtag "no estimates". The name was practically praised. Sure, it was briefly mentioned that the name was "controversial", etc. But I missed hearing a discussion of all the "bad" things that come out of it, as I see it: the binary thinking, false dichotomies, Sturgeon's law, that estimates are lies, waste, etc. Things that I constantly see on the topic. Little to none of that was covered. I would have liked to hear some thoughts about how this could be addressed.

And, sure enough, JB Rainsberger soon started talking about how we were "brainwashed" (yes, he used that word). I actually think that's what the name/hashtag does to you - words matter! From there, the rest of the podcast lost a lot of credibility for me, because no one confronted this. Why was this such a big turnoff for me? I see it like this:

Why do we use the word "brainwashed" and estimates in the same sentence? That's probably the most interesting question of all. Why do we see it like that? Am I really brainwashed with estimates? I suppose if I were, I wouldn't know, right? But am I brainwashed with using seat belts as well? Tooth brushing? Using knives? Salaries? Math? Eating? Does that sound silly? Yes it does. But why does no one see it as silly when calling use of estimates "brainwashing"? I hear you say "It's not the same thing". No it isn't, but I can mention hundreds, if not thousands of times per day when I estimate. Implicitly or explicitly. It can happen in the blink of an eye. Just count the number of times per day you check what time it is. Some might claim that it isn't estimating in a "software sense". Sure. But my point here is to show how natural and ubiquitous estimates are. Just like eating, knives or math. And why do we see the use of them as "brainwashed" just because it relates to building software? I wonder.

As Geert Bollen so nicely put it in a tweet:
you have me down as "pro-estimates", supposing that even means anything. Isn't it silly what this does to us?

Further thoughts. The topic of "How late were we?" was discussed, and it was mentioned that knowing "How late were we?" wasn't really useful. I'd say yes, it is useful. It's called: learning. If I thought something would take 10 hours but it ended up at 5 or 15 (with no change in scope), there's learning there. I agree there's not much use in knowing it for the specific situation, but there is learning for the future. "Last time I missed taking this thing into account. I won't miss that again." Or anything. If you have estimated, not learning from the outcome means ignoring the learning opportunity.

More thoughts. "How we've always done it" was discussed. It's a common fallacy that "how we've always done it" shouldn't matter. Often it's coupled with "Things aren't the same now as they were back then". Sure. But perhaps we just need to learn why we use the approaches we do, rather than revolt and redo the learning? Sometimes we do need to rethink practices because times change. But things are also how they are because they've been tested and proven to be a great way of doing things. We can actually learn; we don't have to repeat history.

A lot of criticism there :) As I said in the beginning, despite these things, I really enjoyed listening to the podcast, and it was a real learning experience for me. Go ahead and listen to it!

Wednesday 27 May 2015

Minimum Viable Product = Maximum Expensive Learning?

The title is quite provocative... Don't worry, I'll explain.

The reason for this post is that I think the concept of the MVP (Minimum Viable Product, as defined in The Lean Startup by Eric Ries - a great book, read it if you haven't) is a bit misunderstood. I'd claim that validating your business assumptions with software is a very expensive alternative, and hence your least desirable one. But an MVP is not bound to building software. A product is not only software. One definition of "product" is: something that is the result of a process (Merriam-Webster). Thus, the result - the outcome - of something you do might be seen as a product.

Let me explain my thoughts.

One image that floats around on twitter and in various presentations is this image (I think it's originally created by Henrik Kniberg for Spotify):
(Image source: http://www.uxbooth.com)

I know. It's just a metaphor, just an image. And it shouldn't be overinterpreted. But that's actually exactly what I'm going to do now :) Because I think it works for discussing some things, explaining my thoughts.

I'm not going to dwell on the fact that the top row of the image is actually the wrong approach, whichever way you look at it. So, let me start by explaining the good parts of this image, and how I think it should be interpreted.

The image is to be seen as a way to start off small. Not building everything upfront, but better: iterating your way to the "final" version (nothing is ever "final", right? But it fits the image). I assume that the premise in the image is that someone needs the capability of going from point A to point B (it's always about capabilities - that's what businesses work for: having a new (or improved) capability, e.g. the capability to fill a market need). Later on we discover more stuff: that it would be great to have a handle to hold on to. Then later the faster bike, etc. And finally we end up with a car. And who knows what the next step is...

All great.

But.

I also see potential problems here. This might actually be a very expensive approach. How?

Because software is an expensive way of iterating towards a needed capability. Even if building software today is much cheaper than it was some time ago, it's still not cheap enough for most capabilities.
And if your customers really need cars - all the other steps were actually waste. And you might end up in a "trap" of never arriving at that car at all.

Because building a skateboard is actually not that cheap (looking at it from a software perspective). Even if it's just a month or so for a couple of developers, that's still quite expensive. Of course, in some domains it might not be that much money compared to other things, but still: not cheap. Then add all the other steps: the kickbike, the bike, the motorcycle. Expensive. Very. Compared to: asking the right questions. Getting to know your customers and market in other ways than through software. In this simple case, all you'd have to do is ask a couple of quick questions:
- Where do you plan on going?
- Shopping, vacations, skiing etc.
- How fast would you like to get there?
- Well... couple of hours as a maximum.
- Do you often need luggage?
- Yes, like skis, groceries etc.
- How many people do you want to bring?
- Usually the entire family.
- Etc...
Only one option is now left: the car. Perhaps some other options could've been a horse with a carriage, or a helicopter, or something. But a rough idea of the cost (or some other constraint) would easily rule those out.
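Just to make the reasoning concrete (this is my own toy sketch, not from any real system; the options, fields and numbers are all made up): the answers in the Q&A above act as constraints that filter the option space before anything at all is built.

```python
# Made-up option space; every field and number here is invented for the example.
options = {
    "skateboard": {"travel_hours": 24, "carries_luggage": False, "seats": 1},
    "bike":       {"travel_hours": 12, "carries_luggage": False, "seats": 1},
    "car":        {"travel_hours": 2,  "carries_luggage": True,  "seats": 5},
}

# The answers from the Q&A: "a couple of hours as a maximum",
# "yes, skis and groceries", "usually the entire family".
needs = {"max_hours": 2, "needs_luggage": True, "seats": 4}

viable = [
    name for name, opt in options.items()
    if opt["travel_hours"] <= needs["max_hours"]
    and (opt["carries_luggage"] or not needs["needs_luggage"])
    and opt["seats"] >= needs["seats"]
]
print(viable)  # ['car']
```

A few questions eliminate every intermediate product without a single line of production code having been written.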

This is a much cheaper approach than to build all the intermediate steps in between. This is of course not always possible (and where the metaphor/image falls short) and then building these intermediate products is perhaps your only alternative.

But. There's another possible problem. About gathering information, gaining "knowledge", learning. The wrong kind.

In a more complex scenario, gathering data and learning from it is not that obvious. Using the image as an analogy: if we build and deliver skateboards to our customers, we might end up focusing on the people who love skateboards. Thus, we might end up getting feedback about how to improve the ball bearings, or how to shape the board better for various tricks, etc. And we might lose focus on our really valuable customers. We might even lose those other customers (while improving the "wrong" things) - the ones we really wanted to target, the ones needing cars.

Thus, sometimes (or perhaps even most often?) the cheapest way to learn about your market and customers is to actually get out there. Get to know them. Market research and similar. Getting to know them by going straight to building and shipping software is not a cheap approach.

But when eventually building that capability you need: do not plan all the details up-front. Iterate.

Getting to know the "right" capability and iterating on the details is cheaper than iterating your way to the "right" capability by building stuff. This goes against the idea of The Lean Startup. But remember, Eric Ries writes in the book that the context is: extreme uncertainty. Things aren't always extremely uncertain. And if they actually are? Well, go for it. But if you can ask a couple of questions (or do similar activities) and gain similar knowledge - that's your option. Or rather, it's not actually a dichotomy. Do both. When needed. You know, that context stuff...

To finish off this post: I think this image is a better example of the evolution of an idea, as I see it:
(Image source: http://uxpodcast.com)



This post is inspired by the story here: http://www.infoq.com/articles/lean-startup-killed

Saturday 4 April 2015

"Estimates are waste"

I quite often hear "Estimates are waste" or "They aren't adding value" or "If estimates were valuable, why not do more of them? Why not only do them?" etc.
Quite often it's visualized as activities on a linear "graph", something like this:
The red areas are time spent on non-value-adding activities ("waste") and the green areas are activities that bring value (e.g. "doing the work", like writing code). It's quite common to put estimates in the red areas.

This might all be reasonable. Because sure, who would like to only do estimates all the time? And if not, aren't they by definition "waste"?

Is that so?

I don't see it that way. Or I misunderstand. I see it like this: having some time to stop for a while. Pausing, reflecting. Trying to see where we are. Trying to see if there's somewhere (else) we should be going. That is: making (new) decisions based on the current situation. Is that waste? Of course, if we spend most of our time there, it's probably not that valuable. And keeping those periods as short as possible is of course an aim.

What if, instead of a linear graph, we had a 2-dimensional graph? Even if we removed all the "waste", how would we know we actually created value? It could just as well look like this:

(It could of course point upwards as well. But I'd claim it probably won't).

But if we pause and reflect (to make decisions), hopefully the graph might look like this:
Even if "the value" stalls for periods of time, in the long run it would probably point upwards.
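Those two graphs can be mimicked with a toy model (entirely my own sketch, with made-up numbers): suppose the value delivered per step degrades the longer we go without pausing to reflect, because we slowly drift away from what's actually needed. A review costs one step of "real work" but resets the drift.

```python
def cumulative_value(total_steps, review_every=None):
    """Made-up model: value per step decays the longer we go without a review."""
    value = 0.0
    steps_since_review = 0
    for step in range(total_steps):
        if review_every and step > 0 and step % review_every == 0:
            steps_since_review = 0   # pause to reflect: no progress this step...
            continue                 # ...but we're pointed the right way again
        # Per-step value drops from 1.0 towards -1.0 (building the wrong things).
        value += max(1.0 - 0.25 * steps_since_review, -1.0)
        steps_since_review += 1
    return value

print(cumulative_value(40))                  # never pausing: -31.0
print(cumulative_value(40, review_every=5))  # pausing every 5 steps: 20.0
```

The exact numbers mean nothing; the point is the shape: removing every "non-value-adding" pause can produce the downward graph, while spending a little time on reflection produces the stalling-but-rising one.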

This is all very simplified, I know. But it's aimed at making a point.

Sure, in the short term estimating might seem like a waste of time, because we're not doing any real work. And it doesn't guarantee value; it doesn't even really affect the outcome. That is true. But in the long run - it still brings value.

Take care of your teeth

This is a metaphor. It might be a silly metaphor. But if one thinks about it, it's actually quite close. At least it's easy to relate to estimates (for me anyway).
I feel that taking care of my teeth is a quite boring activity. It's not value-adding at all to me. I'd rather do without it, actually. In short, it's a waste of my time. You know: if it were valuable, I would want to brush my teeth all day long, right?
But still. I realize that in the long run it's highly valuable. Spending some time each day taking care of my teeth will bring me lots of value - even if I can't see it while actually doing it. I still wouldn't want to do it all day long, and it doesn't guarantee healthy teeth. But I'm very sure that not taking care of my teeth is a sure way to make them worse (in the end).

Could we find a way to have healthy teeth without taking care of them? In the future, who knows? That would be great! But this is where the metaphor falls apart. We need to make decisions once in a while (quite often, actually). I think we all agree on that. And estimating is what we use when making decisions.
Let me quote the dictionary definition of deciding (Merriam-Webster):
: to make a choice about (something) : to choose (something) after thinking about it

: to choose whether or not to believe (something) after thinking about it : to reach a conclusion about (something) because of evidence

: to cause (something) to end in a particular way : to determine what the result of (something) will be
To make a decision we weigh pros and cons. The pros might be real monetary value or just emotional value. The cons are most often the cost of doing something (and possibly when we could get it), or an emotion that we actually don't want to do it (not that common). Those pros and cons are nothing but: estimates.

Sunday 29 March 2015

The #Yestimates hashtag

I didn't invent this hashtag; I heard it in a twitter conversation some time ago. I'm shamelessly stealing the name. Let me state how I look at it. This is just a draft - I might change it later on - but it's a start at least. I think of this hashtag as the more "positive" version of #NoEstimates. I guess they actually mean the same thing, but I feel we need a more "positive" version. By "positive" I mean that #NoEstimates can be positive as well; it's just that some (including myself) might see it as a bit like saying "no", and to me that sounds a bit "negative". But that's just an opinion.

Here it goes:

"Yes, we do estimates! We see them as a way to help our customers in the hard and difficult situations where they have to make tough decisions.
Yes, we do everything we can to be transparent and keep an honest conversation and we try to help our customers do the same thing for us.
Yes, we'd like to help them communicate their expectations (or constraints) and we will communicate how we look at things as well.
Yes, we'd like to help them discover where there might be (even more?) value.
Yes, we'd like to help them discover the value by doing the least effort possible.
Yes, we'd like to help them validate their hypotheses.
Yes, we'd like to help them make as quick a decision as possible, since delaying value also has a cost.
This way we try to fight all possible dysfunctions that seem so common in the name of estimates."

Some words about decisions

I think a definition is appropriate here. A definition of what we mean by "making a decision".
This is how I define that:
Making a decision means choosing among choices. Each choice has a benefit and a cost. The benefit doesn't have to be clearly monetary, but hopefully it is. The cost almost always is, in the end. Deciding means "weighing" the available choices and picking one or more of them, or none - in other words, "weighing" the different benefits and costs among the choices. That "weighing" is in itself a form of estimating.
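A minimal sketch of that definition (my own illustration; the choices and all the numbers are invented): deciding is picking the choice whose estimated benefit minus estimated cost "weighs" the most.

```python
# Invented example choices; every number here is an estimate, not a fact.
choices = {
    "remodel the kitchen": {"benefit": 8, "cost": 10},
    "paint a room":        {"benefit": 5, "cost": 2},
    "do nothing":          {"benefit": 0, "cost": 0},
}

def decide(choices):
    # The "weighing": compare estimated benefit against estimated cost
    # and pick the choice with the best balance.
    return max(choices, key=lambda name: choices[name]["benefit"] - choices[name]["cost"])

print(decide(choices))  # paint a room
```

The decision is only as good as the estimated numbers going into it, which is exactly the point: there is no "weighing" without some form of estimating.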

That's basically it. Please feel free to help by suggesting improvements/changes, or by pointing out if things need clarification.

Not estimating can also be problematic

Let me start by saying that I'm all for trying things. Try. Evaluate. Learn. By the time you've read this post you'll have forgotten I said this, maybe because I'm contradicting myself. I don't know.

Anyway, I'll tell you a story. I once worked as a software developer at a product company. We developed a hardware firewall/ADSL-modem. I didn't work as a consultant/contractor back then. It was a small company, like 20 people. Around 5-7 software developers working on the product.

Anyway, I almost never estimated the work I was assigned to do. It was like "We really need to get this next thing done, please make sure it gets done" (not in exactly those words, of course, and in a rather friendly tone, actually). I admit, it sometimes felt kind of nice back then. But it wasn't always good. Because there were still expectations (and that is kind of why my opinion is that "no estimates" doesn't really exist). And when things took a little longer than someone (somehow) expected, it felt even worse. Because I "didn't meet the expectations". And I was pretty junior back then. And I hadn't really had the chance to say or claim anything about what I thought the effort was. I would probably have felt bad even if I'd had the chance (or not, if my claim about the effort had been close to correct). But not estimating didn't help either.

I don't know if we really were doing #NoEstimates? Probably not (because there were estimates at some level in the end after all). But I did not estimate most of the work I did, so I guess it kind of counts..?

So, if someone is planning on letting go of estimates, my advice is: don't just try it. Make sure everyone is on board with that decision. Someone might actually not like it. I guess that's kind of obvious, though. And no one is saying that you shouldn't do it. It's just advice, or perhaps something to think about at least.

I guess my point is that if the "dysfunctions" aren't solved, letting go of estimates might solve some problems, but you might also create other ones. And if you don't estimate, someone else probably will (for you), and that might feel even worse (for some). So, I don't know, probably the entire organization needs to be on board with the decision? I don't really know; it's just my experience. I guess the goal is to always keep a tight feedback loop, but I'm not sure that will remove the "expectations" of "the next thing that really needs to get done"? Or perhaps the solution is to make sure everyone is on board with not having expectations of anything? Maybe the solution is to always say "I will have something of value to show you tomorrow"? Will that somehow remove the expectations? Or maybe the solution is to say something like "Sit here and help me make sure it gets done". Is that possible? Maybe. Maybe not. Or maybe something else entirely?

Please try it for yourself! I guess I'm a bit reluctant to try again myself, though (coward!). But hey, times change! :-)

Friday 27 March 2015

Estimates, decisions, prioritization, inventories

Ideas

In my personal life (or in any organization) ideas are like a flow of things. (This is nothing new for people who know about lean, etc.) I see it like this:
I (or an organization, for that matter) probably have a lot of ideas, or "things", I want to do. It's like I have an inventory of ideas/things. That's great. I hope? (I don't see anything bad with it, at least.)
The next step in the flow is the "materialization" of those ideas/things. Like: actually working on them, or doing some planning of what needs to be done.
And the final step is the outcome - the idea/thing actually materialized.
That's basically how I view the flow of things in my life (and how I see it in organizations as well).
Ideas/things come in different sizes. I should switch the tyres on my car, I should take it in for service, I should put up new curtains at home, paint a room, remodel my kitchen, etc. One can easily correlate this with ideas/things in an organization (at probably all levels) as well.
But the thing is, I only have so much time to do all these things. And I can't afford to "add resources" by hiring people who can help me.

Priorities, decisions

I don't need to dwell on this. I need to do these things, you need to do these things, organizations need to do this. We have limited resources/time, thus we need to make decisions, prioritize. And that's good, I'll get to that.
But let's say I need help because I'm not quite sure how to decide/prioritize. The value part is pretty much a given, even though it might be hard to put numbers on it. But I'm not sure about the effort of doing some things. I need help.
However someone gives me an idea of that effort/cost/whatever - whether they judge, forecast, slice, whatever - I will treat it as an estimate. Because we can't know exactly, and unexpected things always happen.

Being reasonable

When unexpected things happen, I might question whether those things weren't possible to somehow foresee, or why we didn't have some margin for such things. (Usually, in contracts, there are disclaimers for really unexpected things.) But if there's a good reason for it, I'll buy that. That's not a big deal. "Shit happens", right? I'll somehow raise more money, or stall everything. What can I do? Was it a bad decision? No. Why dwell on that? Let's do what we can right now, based on the information we have now. We made a decision based on the information we had at that moment. That's great! Be positive. I'm glad we at least started something. Now let's finish it. And learn 'til next time.
Would I have made a different decision if I had known? Maybe. Maybe not. But why dwell on that? It is what it is. Look ahead. Stay positive. And learn!

Feedback

Hopefully some progress will be reported, so I know ahead of time how things are going. So I can steer and adjust and coordinate, even with the other things I want to do. The best thing is to actually show me progress, not just tell me about it. If I get to see small pieces of value being delivered and ready, I might even start using some of them. I might actually say "Stop, I think that's enough" (who knows?). And if you show me progress, I will probably stop asking for estimates during this materialization; I can obviously just look at the progress and see for myself whether we're on track or not.

Inventory

Let's say I don't want to have an inventory of ideas, or that I want to drastically decrease it. I'd have to get more resources (or time, however that is done?) to get them done. That's fine. But you know what I think will happen? I will just have even more ideas. And we're back at square one. And that's great! It's called: growing.
Or I could just delete most, or all, of the ideas. But that doesn't feel right.
Or I could start working on all the ideas/things (and have all of them in the "materialization phase"). But then I'll probably never really finish any of them... You know - limit WIP.

"Materialization phase"

When an idea/thing is in this "step/phase", I don't really care that much about other things than progress, as I've said earlier.
If the one I'm hiring doesn't feel it's valuable - in their process - to use estimates, that's fine. If they don't want to do sprints or sprint planning or planning poker or story points, or if they want to slice or forecast or whatever, I really don't care that much. (Even though I might be interested in how they work). And this is probably where #NoEstimates has its place?
But as long as I, somehow, have an idea of how we're progressing and/or if it seems like we need to plan (or do) something differently, I'm happy.
When someone reports that progress, it's an estimate to me (because they can't possibly tell me it's a fact. But if it is - great!). Thanks for those estimates! (Sorry if I've never said that, I should). I feel that's valuable to me.

Bottom line

I think this is kind of basic stuff.
Do I want to change how this works? I don't see any reasons.
Is it a smell of possible dysfunctions? I can't actually sniff any.
In fact I think it's part of living. Both personally and as an organization.

Love! Live! Fight! Learn!

And how do I look upon #NoEstimates?
It's about the dysfunctions associated with estimates. It's not about throwing the baby out with the bathwater.
Could it have a better name? Hell, yeah! :-)
But it's like some kinds of "journalism": you have to make some drama to get attention to a topic. One can have lots of opinions about that. Mine is probably as good as yours :-)

Monday 2 February 2015

ApprovalTests part II (or things I didn't show at swetugg)

I gave a talk at Swetugg about ApprovalTests. Swetugg is a conference arranged by the Swedish .NET user group. Thanks a lot for a great conference and all the work you put into it!

The presentation was recorded so I'll put up the link to that video when it's available. It's in Swedish though.

Anyway, my talk and demo on ApprovalTests was merely a teaser for the tool. So I thought I'd show some other pretty neat things you can do with it in this post, especially how it can be used on legacy code. If you haven't seen ApprovalTests before, this post can perhaps be a little difficult to understand; it's mostly a follow-up to my talk. But you can watch an introduction to ApprovalTests on the site: http://approvaltests.sourceforge.net/

Nothing of what I show here is new; it can be seen and downloaded from other places. This is just my view on it.

Testing a Console Application

There's a code kata called "Gilded Rose". The code can be downloaded here: https://github.com/NotMyself/GildedRose

But it can be any Console Application you have, or any other program that logs output to some logger (most loggers support Dependency Injection, so you can grab the output from your test). But let's use the Gilded Rose program.

So, let's start by creating the Gilded Rose program. Copy the code from the link above into a new class, call it GildedRose.cs or something. Then also download Program.cs from the link above.

If you run the program you'll see it will render some output to the console. But let's see what we can do instead.
Create a new test fixture class (I use xUnit, but you can use whatever you like). Add the UseReporter attribute to your test fixture (you have to grab ApprovalTests from NuGet first, of course). Then we can redirect the output to our own StringWriter, like this (sorry about using pictures/screen caps, but I want to show what it really looks like on my machine):
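In case the screen cap is hard to read, here's roughly what such a test can look like. This is a sketch, not the exact code from my machine: I'm assuming the downloaded Program.cs exposes a callable entry point (the class and method names below are assumptions), but the Console.SetOut/StringWriter trick and the Approvals.Verify call are the important parts:

```csharp
using System;
using System.IO;
using ApprovalTests;
using ApprovalTests.Reporters;
using Xunit;

[UseReporter(typeof(DiffReporter))]
public class GildedRoseTests
{
    [Fact]
    public void UpdateQuality_ProducesSameOutputAsBefore()
    {
        // Redirect everything the program writes to Console
        // into our own StringWriter instead of the terminal.
        var writer = new StringWriter();
        Console.SetOut(writer);

        // Run the program just like the console would.
        // (Assumed entry point; adjust to match your Program.cs.)
        Program.Main(new string[0]);

        // The captured console output becomes the "received" file,
        // which we can then inspect and approve.
        Approvals.Verify(writer.ToString());
    }
}
```

The first run always "fails" in the sense that there is no approved file yet; you inspect the received file, approve it, and from then on the test guards the output.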

When we run the test, we will now have the output in our "received" file:

Let's approve that. This is production code so we know it works (well... it could contain bugs of course, but that's another story. We don't want to fix those bugs now anyway).

(Another pretty good thing about logging is that we can always add more. Logging doesn't (shouldn't?) break things, it just brings you more information on what happens).

Great! Now we can start refactoring our code! Let's - just for fun - try and move this if-statement (we might think that's a good refactoring step?) and see if we break anything:
What happens? Let's run the test:

Yep, that broke it. I like how easily we can spot the difference using the difftool. But we really don't care what actually broke, we just care that it did.

Combination Tests, or "Locking code"

This is another pretty neat thing. I've stolen this directly from Llewellyn Falco (modified somewhat, hope that's ok...). There's a video on this, but I'll show it as screen dumps here.

First, download my version of MovieNight.cs from this gist: https://gist.github.com/henebb/a4c3ac25399858234e3f

Create a new test fixture class and first write this:

If you run the test you will get an output that says: "Mr. Henrik should watch A Most Wanted Man, Sex Tape," (so I'll have time to watch "A Most Wanted Man" and "Sex Tape" during my 3.5-hour movie night).
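For reference, a first test along these lines might look something like the sketch below. The method name and parameters here are assumptions of mine; the actual signature is defined in the MovieNight.cs gist linked above:

```csharp
using ApprovalTests;
using ApprovalTests.Reporters;
using Xunit;

[UseReporter(typeof(DiffReporter))]
public class MovieNightTests
{
    [Fact]
    public void SingleMovieNight()
    {
        // One fixed set of inputs: a 3.5 hour movie night.
        // (Hypothetical signature - match it to the gist.)
        var result = MovieNight.WhatToWatch("Mr.", "Henrik", 3.5, "Drama");

        // Verify the single result string against the approved file.
        Approvals.Verify(result);
    }
}
```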

But this doesn't really feel like we've covered much. And indeed, if we run a code coverage tool (like the one built into Visual Studio: Test > Analyze Code Coverage) we see that not all code is covered.

But we can modify our approval and use what is called a combination approval. It looks like this. Note how we have to change our variables to arrays (all the combinations we want to try) and how we moved the function call into CombinationApprovals:
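In code, the combination version is built around CombinationApprovals.VerifyAllCombinations, which calls the function once for every combination of the array values and writes each input/output pair to the received file. The sketch below uses made-up arrays and the same hypothetical MovieNight signature as before, so the row count will differ from mine:

```csharp
using ApprovalTests.Combinations;
using ApprovalTests.Reporters;
using Xunit;

[UseReporter(typeof(DiffReporter))]
public class MovieNightCombinationTests
{
    [Fact]
    public void AllMovieNightCombinations()
    {
        // Each former single variable becomes an array of values to try.
        // (Example values only - pick ones that exercise all branches.)
        var titles = new[] { "Mr.", "Mrs." };
        var names = new[] { "Henrik", "Anna" };
        var hours = new[] { 1.5, 3.5, 8.0 };
        var genres = new[] { "Drama", "Comedy" };

        // ApprovalTests invokes WhatToWatch for every combination
        // (2 x 2 x 3 x 2 = 24 rows here) and writes them all
        // to the received file for us to approve.
        CombinationApprovals.VerifyAllCombinations(
            MovieNight.WhatToWatch,
            titles, names, hours, genres);
    }
}
```

The nice part of this shape is that adding one more value to any array multiplies the coverage without writing another test.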

If we run the test, we now get this output (excerpt screen dump):

The output will have 300 rows (one for each combination). And if we look at the code coverage we can now see we've covered everything. We can now approve this output and thus "lock" it. And we can refactor the code as we please.

Those are two more powerful things you can use ApprovalTests for. If you download ApprovalTests you can explore some other nice reporters and approvers for your specific needs.