
Tuesday, January 27, 2015

5 Reasons to Manage Your Solo Project’s ALM Process

So I’ve been developing software on the Microsoft stack for just over 10 years, and training and consulting on managing software development with VS-ALM (a.k.a. TFS, a.k.a. Team System) for just over 4 of those years. But in all that time training and consulting, I was working with teams – from five-person startups to enterprises with hundreds of users, all using Team Foundation Server (a.k.a. on-premises TFS) to develop their methodology and align their tooling with their products.

Never individuals, though.

Over the years, though, I’ve had the opportunity to fly solo on several software projects, some short-lived, some longer, as have other ALM consultants that I know. One thing we all seem to have in common is that, while as ALM experts we have all the skills and know-how to apply our own trade to the software we develop, we seldom – if ever – do so.

The Cobbler’s Children Go Barefoot…

… as the saying goes. It seems that this proverb applies to our profession as much as it does to others. You’ve seen the doctor who smokes, the out-of-shape coach, the electrician or plumber whose own wiring or plumbing, respectively, is faulty. Perhaps it’s the same with us. Perhaps it’s just that proper ALM is more visibly useful when multiple hands are involved and must all communicate and interact properly with each other than when one person holds all the roles.

Solo projects seem to need no management; I believe they only seem to.

Here’s why:

1. If You Don’t Use Source Code Version Control, You’re an Idiot!

I apologize for the vehemence. Let me rephrase this. If you don’t use some kind of source code version control, you’re an inexcusably dense idiot who deserves what they get when they lose their source code!!!

I guess I couldn’t rephrase it without the vehemence. Lesson learned.

Beyond the fact that having some kind of backup, preferably online, means that even if you lose your laptop or accidentally delete your source files you still have a copy of your code, version control means that if you make a mistake, you can roll back to a point in time and fix whatever you did wrong. Having a historical log of the changes you made to the software can also serve as release notes of sorts, as a rudimentary list of your features and bug fixes, and even as a way to establish some kind of timeline.

Really, though, version control is about security for the code – protection against loss and against mistakes. You should use it as frequently as possible.

Note that you do not need to shell out huge amounts of money to be able to use version control. In fact, as in life, so with version management, the best things are free!

I prefer using Git. You can use it locally with any operating system (Windows, Linux, Mac OS), and there are multiple online tools that work and synchronize with Git, all free: GitHub, CodePlex, and (my personal favorite) Visual Studio Online. There are, of course, other version control systems, both distributed and centralized (Mercurial is another DVCS, Subversion is a great free centralized one).

Heck, even use DropBox or some other online-syncing file storage; just beware that it won’t protect you from deleting your code, nor will it let you roll back – but at least you’ll still have a copy if your machine dies.
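
To show just how little ceremony a snapshot takes, here’s a minimal sketch in Python (assuming Git is installed, configured, and on your PATH; the commit message is, of course, made up):

```python
# A minimal sketch: put the project under version control and snapshot it.
# Assumes Git is installed and configured (user.name / user.email set).
import subprocess

def git(*args):
    """Run a git command in the current directory and return its output."""
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout

git("init")                                   # once per project
git("add", "-A")                              # stage every change
git("commit", "-m", "Add naive CSV import")   # a snapshot you can roll back to

print(git("log", "--oneline"))                # history doubles as crude release notes
```

Run something like this after every small, working change, and rolling back a mistake becomes a one-command affair.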

2. Defining (just enough) Work Upfront = Less Code Writing and Rewriting

Much like TDD tends to limit the amount of production code you write, so does defining your work. In fact, what is TDD, if not the definition of a small amount of work that needs to be done to the system, along with a definition of how to know when you’re done? The same holds true for defining a development task, a user story (a.k.a. a product backlog item, or PBI) or a feature; each is an elaboration of the level above it: feature to user story, to task, to test. When you know what needs to be done, and you know when you’re done, you know exactly what the remaining work is. That saves time.

And let’s be clear: just enough is key here. Too little, and you’ll rewrite anything that you haven’t properly thought out and planned. Too much, and changes to the reality surrounding the project – and to your perception of that reality – will force you to rewrite your plans.
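
To make the TDD analogy concrete, here’s a hedged sketch: the test below is the definition of a small unit of work and of its done-criteria, written before the production code exists (the function and its behavior are invented for illustration):

```python
import unittest

# Invented example: the test below *is* the definition of the work
# ("parse a version string") and of done (it returns a tuple of ints),
# written before any production code exists.
class ParseVersionTests(unittest.TestCase):
    def test_parses_major_minor_patch(self):
        self.assertEqual(parse_version("1.4.2"), (1, 4, 2))

# The simplest production code that makes the test pass - and not a line more:
def parse_version(text):
    return tuple(int(part) for part in text.split("."))

if __name__ == "__main__":
    unittest.main()
```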

3. Project Planning Makes it Possible to Plan Releases

Unless you are truly working on a hobby for yourself, you still need to be able to answer at least these two questions:

  1. When will it be done?
  2. What will be done by a certain date?

Without estimating the work you have to do – more so if you don’t even know how much work you have – you simply will not be able to give more than a POOMA estimate.
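
The alternative to a POOMA estimate is a back-of-the-envelope forecast based on what you have actually finished so far. A minimal sketch, with made-up numbers:

```python
# Back-of-the-envelope forecast, not a promise: divide the remaining work
# by your measured pace. All the numbers here are made up for illustration.
remaining_items = 24        # tasks or stories left in the backlog
completed_per_week = 6      # measured from your own finished weeks

weeks_left = remaining_items / completed_per_week
print(f"At the current pace, roughly {weeks_left:.0f} weeks of work remain.")
```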

Note that unless you live alone, you might still need to answer these questions, if only so your partner, roommate or parent will know when you will finally clear the garage (or dinner table) from your junk.

4. Automation Reduces Human Error as well as Manual Labor

Even if you are a solo programmer, two things remain invariably true for you:

  1. You’re only human.
  2. You’re only one human.

Being only human means that as talented as you may be, you are still prone to repetition-based errors. You’ll forget to update the version number, or forget to set the build configuration to Release, or forget to run the tests, or any number of things. When I train new developers – especially in those early days when they’re constantly making mistakes and feel that more experienced developers must be demigods or some such – I try to reassure them (and lighten the atmosphere) by telling them this:

  • There are only two kinds of developers: experienced, and inexperienced developers.
  • Inexperienced developers make only two kinds of mistakes: Mistakes born of inexperience, and mistakes born of stupidity.
  • Experienced developers make only mistakes born of stupidity.

Aside from the fact that this is inherently true (an experienced person, by definition, cannot make a mistake born of inexperience), it is important to note – experienced developers make mistakes!

Computers do not. Ever.

Another saying that I’m fond of repeating – the former statement being a key reason to do so, especially when consulting or training clients on the use of ALM tools – is to never let a human do a machine’s work (and vice versa). Anything a machine can do, a human shouldn’t.

Finally, there’s the fact that any time spent doing what someone – or more importantly, something (i.e. your computer) – could do, is time not spent on things only you can do (i.e. developing the product, which you’d probably prefer to do anyway). Ask yourself this while you’re manually doing repetitive work: who is coding? Nobody.

This holds true for building, for testing, for releasing and deploying, and basically for anything that you can (and know how to) automate.

In all but the simplest and most straightforward scenarios (usually short-term, one-shot projects), you should find ways to delegate work to a machine.
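
As a hedged illustration, a delegate-it-to-the-machine script can be as small as this; the test and build commands are placeholders for whatever your own project uses:

```python
# Sketch of a "never let a human do a machine's work" script: one command
# that runs the tests and only builds if they pass. The step commands are
# placeholders - substitute your own test runner and build tool.
import subprocess
import sys

STEPS = [
    ["pytest", "-q"],              # run the test suite (placeholder runner)
    ["python", "-m", "build"],     # produce the release artifact (placeholder)
]

for step in STEPS:
    if subprocess.run(step).returncode != 0:
        sys.exit(f"Step '{' '.join(step)}' failed - stopping, nothing to ship.")

print("All steps passed; artifact ready to ship.")
```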

5. Defining the Work Makes Change Management Easier

It’s all great when you’re hacking away at that pet project of yours and you’re in the zone. But a week, a month, or even three later, once you start delivering a second or third release of your software, keeping track of what was done and when – not to mention writing release notes – becomes extremely difficult if you don’t know what your changes were. In fact, if you use an ALM tool for tracking features, changes and defects, and associate them with automated builds, you can generate a release notes document automatically with every build, and not have to deal with it yourself.
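
For illustration, here’s a sketch of the grouping involved; the work items are invented stand-ins for whatever your ALM tool would report for a build:

```python
# Invented stand-ins for the work items an ALM tool would report for a build;
# once they're tracked, release notes are just a grouping away.
closed_items = [
    {"id": 101, "type": "Feature", "title": "Import CSV files"},
    {"id": 107, "type": "Bug",     "title": "Crash on empty input file"},
    {"id": 112, "type": "Feature", "title": "Export report as PDF"},
]

notes = {}
for item in closed_items:
    notes.setdefault(item["type"], []).append(f"#{item['id']} {item['title']}")

for kind, lines in notes.items():
    print(f"{kind}s:")
    for line in lines:
        print(f"  - {line}")
```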

The other kind of change is a change of management. Today this may be your solo project. Tomorrow, who knows? You may take on a partner, give someone else the code to maintain, or decide to open the source. If the only definition of the work is whatever lies inside that wonderful brain of yours, hand-off will be a bear. Trust me: I once wasted two weeks receiving a project, and I still don’t know everything about it. Finding things out on my own is not the best use of my time. Having a nice set of features, tasks and bugs (both past and open) shortens that time, and all but eliminates the pains of discovery.

Conclusion

In summary, I’d like to state my opinion that in most (again, all but the simplest and most straightforward one-shot) software projects, proper ALM is as useful and important when flying solo as when developing with a team. The absence of communication issues (though see point #5) is balanced by the extra work of having to do it all by yourself.

Just do it right the first time. You’ll thank me for it.

Best Regards,
Assaf Stone
Occasional Solo Developer

Tuesday, May 8, 2012

One Team’s First Sprint Planning Session

This post is a bit different than my usual ones. Instead of orating from my soapbox, I’d like to share a moment of agile bliss that I had today. One of the teams I work with at my customer’s company conducted their first “truly agile” sprint planning session. What follows is a recounting of the experience.

What Came Before

About a month ago, I began working on agile methodology training for a few teams at my client’s company. The focus of the training was the team’s work-process, mostly using Scrum as a template for how we would do things, with a few Kanban-inspired overrides. In my first few sessions, I focused on what a user story is, what a task is, and the concept of focusing the team’s energy on completing user stories as fast as possible.

What I taught them boiled down to a few key take-away points:

  • A user story is an independent increment in functionality that is valuable to a stakeholder, that can be completed by the team in one sprint
  • A task is one developer’s day of work, on one component of the system that is affected by its user story, or work that needs to be done to meet the team’s quality standards.
  • Each team member pulls an unassigned task from the most important, unfinished user story

Initially there were arguments on all points: “Our stories are too big for one sprint”, “What if a task takes more or less than a day?”, “It’s easier to assign a story to a developer”. I had to convince them that these basic ideas are important and valuable, or at least convince them to give the ideas a try.

The Man’s Too Big…

What we did was take one “too big” user story and look for some way to break it down into thinner vertical slices. We ended up with stories that use naïve implementations, or serve a smaller set of clients, or handle a smaller set of scenarios, with further stories that cover the difference in robustness, clients and scenarios.

With regard to the tasks, I suggested that we simply split tasks that feel too big, or join tasks that feel too small, and not worry about their actual length. We’ll measure the averages and variance after the end of the sprint, and look for better ways to be more predictable as needed. It was more of a “let’s simply try it” solution.

The Man’s Too Strong!

The longest argument was over whether the first story warranted having multiple developers swarm on its tasks until the story is complete. One developer felt most strongly that it’s a waste of time to have multiple people learn the story, since we’re certain to complete it in time either way. Another developer didn’t want to be pegged to just the server (or client) coding tasks of each story, and felt that owning a whole story gave her more satisfaction. She also expressed concern over the pain of complex merges. Incidentally, it was mostly the QA engineers who immediately accepted the value of working on one story’s tasks in parallel.

Here are the points I made to alleviate their concerns:

  • Having multiple developers learn a story is a great benefit, as it will allow flexibility in task assignment now, and further down the line in future sprints that may have related stories
  • Having more people understand the story will eliminate bottlenecks (and increases the team’s bus factor – a good thing)
  • Regarding the perceived focusing on one component, some other team member chimed in, suggesting that nothing would stop her from taking a client-side task in one story, and a server-related one in the next. I was really happy that other members joined the discussion; it was no longer a teacher-class relationship, but a team of peers
  • I also reminded the team, that since we define a task as the work on one component for a day, and since we focus on completing one story before moving to the next, we are actually reducing the chance of two developers having to manipulate the same object at the same time – which of course reduces the merge conflicts! Further effort can be made to remove the rest of the edge cases almost entirely
  • I further mentioned that there will be at least two members working on each story in any case: one coder, one tester.

Planning the Sprint

The rest went by rather smoothly. The team calculated how many expected work days they have in the sprint. I suggested that we simply add them up, rather than trying to split along the lines of coder / tester days and tasks.

I noticed that the backlog had some free-floating tasks, i.e. not related to any user story. I asked the team about those, stating that unless there is some value to a stakeholder, the task may be a waste of time. The team told me that it was technical debt. After some discussion, we agreed to deduct the tasks from the available work days, rather than creating fake user stories – we have to own up to the fact that we are paying back a debt to the system, not adding value to it.

The team-leader-turned-part-time-product-owner gave a rough stack ranking (there were several items high on the list with the same rank). I suggested that if he doesn’t care which of the same-rank stories we complete first, I’ll set an internal rank by the order that they appeared in the spreadsheet. He accepted.

Next we started explaining and breaking down the stories. Traditionally, the PO explains the stories in the planning session’s first half, and the team breaks them down in the second. Since the PO in this case is a team member, fully versed in the product and solution, I suggested that we simply run down the list until we have as many tasks as we have work days (see how that works out?).

The first story was easy. We created 4 coding tasks and 2 quality related tasks.

Break it Down!

The second story was a whopper. We knew it would take most of the sprint. One developer suggested that we split the story by data source – one part retrieving one source’s data, the other part retrieving the second source. The team leader shook his head. “The customer needs both”, he said. “There is no value in supplying just one. We’ll split it some other way, if we have to”.

You couldn’t erase the smile from my face with a truckload of erasers. I knew for sure that he got the idea of what a user story is!

With that second, larger story, the team leader then frowned and looked at the task list. “This doesn’t feel right,” he said. When asked what the problem was, he said that one task was too big – it would take longer than a day. The team then split it into two parts, and added a third task.

And we were done estimating and committing. Just like that.

All in all, we spent about an hour of planning, 2-3 minutes of which were spent “estimating”, which is to say deciding how to split that “too big” task.

An Agile Manager in a Brave New World

At the end of the meeting, the team leader expressed his regret that we don’t have a full-time technical product manager (TPM), which is the closest role they have to a Product Owner. The group manager, who was present for the Sprint Planning, picked up the impediment and answered: “Let’s take it offline; I’ll try to get one to join the team ASAP”.

Yes sir! The manager became the Scrum Master’s Scrum Master, as per my suggestion in my previous post.

Here’s What We Didn’t Do

There are several things the team didn’t do:

  • Define proper user stories – I didn’t bother following the ceremony of form with the stories, declaring the reasoning and the interested party. I skipped this because I wanted to avoid too much change-shock at once. Next sprint we’ll focus on that. For now, I felt that understanding the concept of value was more – well – valuable
  • Estimate the effort involved in the stories – Personally? I hate estimations. I think they’re useless most of the time. I’ve noticed that most of the time spent in a sprint planning session is wasted on planning poker, story points and ideal hours, and in the end they don’t help the project’s predictability in any noticeable way. Instead, we’ll measure, and seek a low-variance way to split the tasks. That will be better. And cheaper

In the End

All in all, it was a great first “agile day” for the team. It was fast, to the point, and everybody participated in every part of the session. Tomorrow I will work with them on building their Scrum-board, and walk them through their first “real” daily meeting. I can’t wait!

So, what do you think? Want to hear more such experiences? Is there anything I did that you like? Something I could have done better? Please drop a comment. Don’t be a stranger!

P.S. A special thank you goes to Mark Knopfler, lead vocalist and guitar legend of Dire Straits, for inspiring the titles of two sections with his wonderful song.

Friday, March 9, 2012

Why Should I INVEST in User Stories?

INVEST is another great acronym from the Computer Sciences industry, Department of Agile (yes, DOA, and I’m well aware – thank you very much); it stands for Independent, Negotiable, Valuable, Estimable, Sized (or Small), and Testable. Those are indeed valuable attributes of user stories – of any kind of requirements at all – and the acronym itself implies some effort on behalf of the creator of user stories (i.e. the Product Owner), with a potential return on investment.

But that isn’t good enough. A catchy name may be enough for most developers to try something new (seriously, when does a developer ever need a reason to try something new?), but not so for management. To many managers, however, this can sound like Impossible, Not-gonna-deliver, Vague, Everything-is, So-what, and Test-the-whole-goddamn-project!

In this post, inspired by a Twitter conversation I had a while ago, I hope to explain what INVESTing in user stories really means, and why it is worth doing.

I is for INDEPENDENT

In my opinion, writing independent user stories will be your single greatest return on investment (no pun intended – this time). In one of my recent posts, I described how work items (stories, tasks, features, etc.) are far more likely to complete late rather than early if they have more than one dependency, while having just a single dependency (i.e. starting the new item simply depends on completing the previous one) will increase the likelihood that they complete on time.

Of course, most of the work in writing good user stories is in this part as well. You are required to reimagine how you think of your work. You need to do away with layered one-shot thinking that is probably necessary in every industry except software. Since the actual construction of the software is for all intents and purposes near-instantaneous, you can (and should) think about vertical increments of value, when you come up with your requirements. Each such story should be worded so as to be independent of any previous (or forthcoming) stories.

In some cases, you may find it impossible to break a story’s dependencies, no matter how you try. In those cases, try to look at the story together with its dependencies as a single story. Find what they have in common, and reimagine them as one.

To summarize – Independence in your stories will translate directly into increased probability of delivering on time!

N is for NEGOTIABLE

Let’s get one thing clear: this attribute is not a copout for developers! A story’s negotiability does not give developers the right to decide not to do it. The developers do not decide what the story will be, what its scope is, when it will be delivered, or what is required to make it complete. Oh, the developers can and should discuss the story and its impact with the PO, and the PO may decide to change the story based on developer input, but it is ultimately the PO’s right (and responsibility) to make any changes to the story.

The full meaning of negotiability is that the contents of the story (i.e. scope, priority, requirements) can be changed at any time, from the moment they are introduced up until the developers commit to it (i.e. the beginning of its sprint).

This is perhaps both the easiest attribute to add to your user stories, and the hardest. It is easy, because it requires no extra work from the PO or team. It is hard, because it may require you to let go of any preconceptions you may have of the story (i.e. those you had when you first wrote it). It requires you to allow people to move your cheese.

The return on this investment is quite valuable, however: your stories will be rejuvenated, and will be more valuable to the customers and users, because they have been recently revalued, re-evaluated and updated. This attribute is particularly synergistic with the next one:

V is for VALUABLE

You’d think that given the high premium on developers’ time, and the fact that most projects are running late as it is, it would go without saying that everything we develop is valuable. Unfortunately, that is not the case. According to the CHAOS Report, by the Standish Group, fully 64% of developed functionality is rarely or never used!

All of these features have something in common: They were added without any consideration of their value or necessity to the users or the customers!

The valuable attribute of user stories helps reduce these numbers, by making sure that anything and everything the team develops has value to the users. When the PO comes up with a story, and cannot find a way to describe its value, it is possible that there is no value. Our tendency to add these valueless stories to the product comes from the traditional idea that you have to define everything upfront, or it won’t be in the project. Thanks to negotiability, it is possible to focus only on what is valuable.

This attribute also protects the system from developers’ desires to build a framework (we all love developing frameworks – it’s a conditioning or something), or do any work that doesn’t translate into value to the customer. If some technical work is required for some story, add the work as a task to that story, not as a technical story; this will more clearly radiate to management what is required to complete a story, and will give the PO the ability to consider this extra work and its impact when evaluating the story.

E is for ESTIMABLE

There is a lot of good literature out there on how to estimate properly. While there are several equally viable techniques for estimating, everyone pretty much agrees that relative estimates are easier than absolute ones, and that smaller items are easier to estimate than larger ones. For more info on how to estimate properly, you can read my previous post on the subject, here.

There is more to it, though, than just the techniques. The story should be clearly scoped. There should be a beginning and an end.

Imagine a story such as “As a user, I want a text editor, so that I can take notes”. How long will it take to develop that? I have no idea. If the user only needs a multi-line textbox with persistence capabilities, then I can probably have it done in a few hours (yes – a few hours, because nothing ever takes just 5 minutes!). If the user is expecting RTF capabilities with copy and paste, it’ll take longer, and if he’s looking to give MS Word some serious competition, it’ll probably take a few man-years (just guessing here; it’s too big for me to give a good estimate).

I see estimability as the culmination and grounding of the next two attributes, Size and Testability (see below).

S is for SIZED

“Sized Appropriately”, or “Small”, as some call it, means that the story is small enough to estimate. In Scrum, and other time-boxed methodologies, it is expected that the story will be scoped in a way that makes it possible for the team to complete a few stories in one sprint – at the very least, they should be able to complete one.

This attribute requires the PO to define in detail what it is that he wants the team to deliver. In many cases, the actual user stories that the team works on are the elaboration of the broad scopes defined in the “Epics” or “over-stories” or “super-stories”.

What the PO gains is the ability to release early and often. The team gains the ability to create less complex solutions, because the problems are smaller.

Remember, stories should be defined in terms of their business value, not the implementation details! The details are to be left to the developer tasks, and are of no interest to the users or customers.

T is for TESTABLE

Testability basically means that it is possible to test the product to see if it fulfills the requirements of the user story. A testable user story has a finite and measurable set of acceptance criteria, by which it is possible to prove the validity of the product.

This last, but certainly not least, of the attributes serves everybody who has a stake in the product:

  • By defining the story’s acceptance criteria, the PO has the opportunity to define more specific demands of the story than what appears in the “title” (“As a… I want… so that…”).
  • Both the Developers and the QA gain an understanding of what the scope of the story is and what it is not, thus reducing a great source of contention about whether something is a bug, or a new feature (if I had a penny…)
  • Stakeholders gain an understanding of what they are going to receive by the release.

The acceptance criteria, like the rest of the story, should be worded in the product’s domain language, which means that they should be defined in the users’ terms, rather than the developer / QA terms.

The return on your INVESTment here should be obvious: The PO gains a tighter control of what will be produced, without unnecessarily elaborating on the technical solution, while the developers and QA have a better understanding of what they are required to do.
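
As a hedged illustration (the story, names and numbers are all invented), acceptance criteria worded in the users’ terms translate almost directly into automated acceptance tests:

```python
import unittest

# Invented story: "As a shopper, I want a cart total, so that I know what
# I'll pay." Each acceptance criterion, worded in the shopper's terms,
# becomes one test; the implementation is a toy stand-in for real code.
def cart_total(prices, discount=0.0):
    return round(sum(prices) * (1 - discount), 2)

class CartTotalAcceptance(unittest.TestCase):
    def test_total_is_the_sum_of_item_prices(self):
        self.assertEqual(cart_total([10.00, 5.50]), 15.50)

    def test_discount_applies_to_the_whole_cart(self):
        self.assertEqual(cart_total([100.00], discount=0.1), 90.00)

if __name__ == "__main__":
    unittest.main()
```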

In conclusion

I think that every part of the INVEST model has value. Stories that you have invested in will be:

  • More likely to complete on time
  • Current and updated
  • Valuable to the customer and / or the users, with less unnecessary technical fluff
  • Clearer, with less dissonance between POs, developers and testers
  • Less complex to develop, easier to test, and quicker to get feedback as well as quicker to deploy
  • Easier to prove and validate and quicker to gain acceptance by the customer / user

That’s all. I think it’s a worthy goal, well worth the cost. What say you?

Sunday, February 19, 2012

Do Agile Estimation Techniques Really Account for Scrum Projects’ Successes?

After several years of experience working with, training and coaching teams on Scrum, I have no doubt that agile (empirical) estimation techniques are better than classic, predictive estimation techniques. One question kept nagging me all that time, though: does using story points, T-shirt sizes, or ideal hours truly explain the increased success of Scrum-based projects?

It took me a while, but I finally figured it out. And the answer is a definite ‘NO’!

All Estimation Techniques are Merely Guesses

At the end of the day (or at least, at the end of the planning session), any estimation is just a guess: sometimes it’s more, and sometimes it’s less. This means that the distribution of the actual time versus the estimate should be pretty much normal: over large numbers, about half of the actual results will be less than the estimate, and half will be more. On average, according to the Law of Large Numbers, it evens out.

The better the estimation technique, the steeper the curve will be, i.e. the closer actual results will be to the estimates.

My experience is that agile techniques are in fact better at predicting the actual results, but the fact remains that if you make enough of these predictions, then regardless of the technique, the variance should cancel itself out and the sum should settle on the average!

This means that the quality of the technique used to estimate each task has little or no impact on the success of estimating the time to complete the entire project!

Then Why Do Estimations Fail?

If estimations behave “normally”, whole-project estimations should not fail. I don’t buy into all that “programmers are notoriously optimistic” sentiment; I’m a developer myself and there is not a single soul alive that will call me optimistic, and yet the non-agile projects I worked on overran their deadlines, like everybody else’s. Something must be out there that either changes the normal distribution, or invalidates the Law of Large Numbers, or both. And it has to be something that is present in classic waterfall-ish projects but not in agile ones.

It Is All About the Independence!

The main difference, in my opinion, between agile and non-agile projects is how they are planned and constructed. An agile project plan is basically a stack of user stories, with no* dependencies between them, whereas a waterfall project plan is a Gantt chart describing tasks and their interdependencies.

Why Is This Important?

Let’s assume a task that has two dependencies. Each was estimated, and each has a normal distribution of the actual time to complete versus the estimated time. The following figure describes such a “plan”:

[Figure: SimpleDependencies – Task C depends on both Task A and Task B]

Each task has a 50% chance of being completed early and a 50% chance of being completed late (there is, of course, no statistical chance of being completed exactly “on time”). Now, let’s say we’re interested in calculating the probability of Task C completing early, or at least not finishing late. For the sake of ease, we’ll assume all three tasks have the same estimated duration. For Task C to begin early, it needs both of its dependencies to complete early. Since the chance of each completing early is 50%, the chance of both doing so is (0.5 * 0.5) = 0.25, or 25%. The chance of this not happening, i.e. of at least one of them completing late, is 1 - 0.25, or 75%. This means that there’s a 75% chance that Task C starts late – and it still carries its own 50% probability of running long – so the chance of Task C completing late (and making the whole project run late) is more than 75%!

It is easy to see that the more complex a project is, and the more dependencies each task has, the greater the chance of being late; the distribution of the actual duration of tasks with dependencies is skewed towards lateness.

Scrum User Stories Have No Dependencies

With this in mind, Scrum user stories have no* interdependencies. If two stories do have a dependency, they should be joined and treated as one (and if the result is too big, it can and should be split again, though in a way that doesn’t create dependencies). With no dependency (or, if you wish, a single dependency, which is the completion of the prior story), we get a plan that looks like this:

[Figure: NoDependencies – stories A, B and C run back to back, with no fan-in]

The chance of B starting early, which is the same as the chance of A ending early, is 50%. The chance of B completing early is still 50%! Even better, the variance is reduced, which means that there is an increased probability of the total project duration being close to the estimate (i.e. a steep probability distribution).
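
If you’d rather not take the arithmetic on faith, here’s a small Monte Carlo sketch of the two plans above (the distribution parameters are arbitrary; only the comparison matters):

```python
# Monte Carlo sketch of the two plans above. Every task's actual duration is
# drawn from the same distribution (estimate 1.0, some spread); only the
# topology differs. In the Gantt-style plan, C starts when both A and B are
# done (budget 2.0); in the story stack, three items run back to back
# (budget 3.0).
import random

TRIALS = 100_000

def duration():
    return random.gauss(1.0, 0.2)

late_fan_in = late_stack = 0
for _ in range(TRIALS):
    if max(duration(), duration()) + duration() > 2.0:
        late_fan_in += 1
    if duration() + duration() + duration() > 3.0:
        late_stack += 1

print(f"fan-in plan late:  {late_fan_in / TRIALS:.1%}")   # well above 50%
print(f"story stack late:  {late_stack / TRIALS:.1%}")    # about 50%
```

On a typical run the fan-in plan comes out late roughly two-thirds of the time, while the story stack hovers around 50% – with lower variance to boot.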

What Does This Mean?

This means that while a project with multi-dependent tasks has an increased skew towards lateness with each added task, a project with no dependencies (beyond sequence) has an increased tendency towards the mean, i.e. the estimate.

This means that by the very nature of what we are estimating, agile projects tend to succeed (i.e. be on budget and on time) more than non-agile ones.

Moreover, the actual estimation technique doesn’t matter – it is the estimation of independent units of work that makes the difference.

Therefore, my conclusion and advice is that the very first change a team needs to make is to start working on independent features – call them user stories, features, or MMFs, it doesn’t matter; just keep them independent of each other!

Hope this helps,

Assaf.

Sunday, January 22, 2012

How to Use Agile Estimation in Scrum Projects

Like it or not, estimations are a big part of professional and commercial software development. Estimations are used in most projects regardless of methodology: waterfall or Scrum, traditional or agile, every stakeholder needs to know the numbers. Projects succeed or fail as a result of them. Unfortunately, humans, as a rule, tend to suck at estimating.

This post will compare and contrast the different estimation techniques and applications used by both agile and non-agile methodologies, and explain how to use them (and not misuse them), particularly in a Scrum project.

What is Wrong with how We Estimate?

Traditional project plans involve using calendric estimates in order to synchronize the development efforts of the team(s) by placing the work on a timeline (usually, using a Gantt chart). This means that for every part of the solution, the developers must predict when it will be ready (i.e. the delivery date). That means they need to know when they will start, how much time it will take to deliver the work, and how much time will be “wasted” on impediments (both foreseeable and unforeseeable). While knowing this makes the project planners’ work easier, obtaining this knowledge is extremely difficult; estimating the development effort is difficult enough, but predicting the total amount of time that will be spent not developing is near impossible.

What’s worse is that, contrary to popular belief, the Law of Large Numbers does not apply to project plans! Due to the nature of dependencies, a single late dependency is enough to push the delivery date of a dependent component, while for a component to be delivered early, all of its dependencies must be completed early. In short, lateness aggregates, while earliness does not.

A side effect of this is that the more complex the Gantt-based project, the more lateness is likely to accumulate as the project advances – and the further into the project we are, the greater the risk of being late.

Of course, a late project means that the software is delivered at a higher cost, with less features, and / or simply later than expected and agreed. This means that a late project often becomes a failed project.

Clearly, a different method of doing things is necessary; Agile methodologists came up with different estimation techniques to reduce the chances and ramifications of failure.

Agile Estimations Are Easier to Make

The three most common agile estimations are T-Shirt Sizes, Ideal Time, and Story Points:

  • T-Shirt Size estimations are a coarse-grained separation of requirements into buckets of small, medium, large, or very-large efforts. This estimate is extremely easy to make intuitively, and of course insufficient as a basis for project planning due to its low resolution, but it is often sufficient for the purpose of pricing single features (for example, when required to give a customer a cost estimate for some custom feature in the system).
  • Ideal Time estimates are similar to calendric estimates in that they involve high resolution predictions of the effort involved in developing a feature. The difference is that ideal estimates are made while ignoring all surprises and interferences! The question that ideal estimates answer is “how much time will it take to develop a solution, assuming you work on nothing else, and nothing else disturbs you”. Taking the unknowable impediment factor out of the equation improves the accuracy of the estimates, but you can’t simply substitute ideal time for calendric time because the unknown impediments still do exist, and the fact remains that humans aren’t good at making absolute estimates, with no point of reference. In Scrum, Ideal Hours are often used to estimate the effort of completing tasks, which should be short, simple and straightforward.
  • Story Points are a radically different method of estimating development effort. With story points you do not go out and say how much work something is, in and of itself, but rather state how the effort to develop one component compares to the effort of developing another. These estimates are much easier to make, and surprisingly accurate because we are much better at comparing things than estimating them. Of course, comparisons and relationships have no unit of measure, and thus can’t be used to establish a timeline. At least not in the normal sense. In Scrum, Story Points are usually used to estimate the effort of completing user stories (hence the name).

Agile Estimation Techniques Make Better Plans

While agile estimation techniques tend to end up more accurate than calendric estimates, the estimations are still guesses, and guesses still mean risk. Agile teams try to reduce this risk by reducing the amount of guesswork involved in the project plan.

Classic project plans involve estimating the time it will take to deliver each component, and add up all the estimates to create one big guess-of-a-plan. While agile project plans involve estimating the work on each component as well, that is where the similarity ends.

The agile project plan is not based on the prediction of work effort, but rather on empirical evidence! Only the team’s first iteration (ever) is planned based on a prediction. The rest are based on the measured evidence of previous iterations (or sprints, as they are often called in Scrum shops). The team estimates that it will successfully complete as much work in the next iteration as it did in the previous iteration(s).

For example, if a team successfully completed 120 ideal hours’ worth of work (that is, the amount of work they did added up to what they estimated as 120 ideal hours), then they will estimate that they will do the same in the next iteration. This technique works the same with story points: a team that can complete an average of 50 story points in an iteration is likely to repeat that success in the next one. A team’s velocity, or cadence, can then be measured in ideal hours or story points per sprint.

What about interruptions? Well, here’s the beauty of the technique: you can consistently account for the predictable ones (e.g. vacations, recurring meetings, scheduled events) and factor those out, and you can consistently ignore the unplanned ones, as they are likely to be a more or less equal part of every iteration.
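
A sketch of that calculation, with invented numbers:

```python
# Sketch of the empirical plan, with invented numbers: next sprint's
# commitment is what the team actually finished before, minus the
# interruptions you can see coming. Unplanned noise is already baked in.
completed_last_sprints = [118, 124, 120]   # ideal hours finished per sprint
planned_interruptions = 16                 # ideal hours lost to known vacations, meetings

velocity = sum(completed_last_sprints) / len(completed_last_sprints)
commitment = velocity - planned_interruptions

print(f"Measured velocity:  {velocity:.0f} ideal hours per sprint")
print(f"Next sprint commit: {commitment:.0f} ideal hours")
```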

Where’s the Difference?

The differences between predictive plans (classic, waterfall-ish, Gantt-based) and empirical plans (agile, Scrum) are:

  1. Predictive plans are based on more estimation (i.e. guesses, i.e. risk) than empirical ones
  2. Empirical plans have an increasingly sound statistical ground as the project progresses
  3. Predictive plans have an increasing chance of running late, as the project progresses
  4. With predictive plans the unknown factors aggregate, with empirical plans they cancel each other out.

Finally, as stated in the previous post, because agile plans are based on stories that are designed to be discrete units of business value with as few interdependencies as possible, they further mitigate the risk of estimation failure: what you risk failing to deliver on time are only the least valuable features!

In Conclusion

To me, the agile choice is a no-brainer. Agile plans are simply safer to execute, the chance of failure is smaller, and a failure is less harmful. I would even go as far as to say that companies promising to deliver a non-trivial project without using an agile plan of some sort are being less than honest – with themselves as well as with the customer.

Moreover, I think it is high time that customers demand that their contractors use agile methods in order to win contracts.

Thursday, January 5, 2012

Demystifying Agile: How to Plan Releases with Scrum

The agile release process might sometimes seem like voodoo (or even dark voodoo) to some managers and developers, particularly those more accustomed to the traditional planning processes used in waterfall projects. This post will explain the inherent problems with waterfall projects, how agile methodologies such as Scrum overcome these problems, and of course, the reasons that the agile release planning method works.

You Can’t Control Everything

The PMBOK calls it the project management triangle; the agile community calls it the Iron Triangle. Regardless of the name, it is a real and hard constraint: of the three aspects of the project – scope, resources (or cost), and time – you may pick and control any two. The third cannot be controlled without harming the quality, and is calculated (roughly) by the following formula:

scope ≈ resources × time

This means that if you have a project where the budget is fixed, and there is a hard deadline (i.e. release date), then you cannot control the scope that will be delivered by the team by the time of the release.

Moreover, Brooks’s Law states that adding people to a late (i.e. already started) project will only make it later. Accordingly, since the cost of a software project is mostly the cost of the team members, the resources must remain fixed for the project (or release).

Nevertheless, traditional project managers seem to try to control all three: they define the entire scope of a project and its releases and milestones (i.e. the time), with a fixed, preset budget (i.e. the cost).

This is simply doomed to fail. According to the Standish Group’s CHAOS Report, most projects (68 percent) do.

Why do Projects Fail?

According to Martin Fowler, it is not the projects that fail, but rather the estimates: our attempts to calculate how much time it will take to deliver a certain scope with a certain amount of resources.

I would argue that the failure to estimate is the reason that projects fail: we really don’t know how to estimate, and projects fail because they rely too heavily on the estimates.

How do Agile Methods Reconcile with the Iron Triangle?

Given values such as courage and honesty, it is no surprise that agile methodologists simply acknowledge that estimation is not a good basis for a release plan, and search for alternatives.

Given the Iron Triangle, Brook’s Law, and the fact that most customers demand a hard release date, an agile PM accepts that he cannot control the scope that will be delivered; that is, while the PM may make guesses (i.e. estimates) as to the content that will be delivered on the deadline, he cannot be sure that every last task will be completed.

What he can promise, and make certain of, is that he will deliver the best product possible on the release date, with the rest of the desired features in consecutive releases.

How do Agile Methods Deliver the Best Product on Time?

By slicing the entire scope into vertical slices – small increments of valuable functionality (often called user stories) – the agile PM now has a stack of user stories to be released: a product backlog. By defining the stories in such a way that they have as few interdependencies as possible, the PM may order them as he desires. The PM will, of course, choose to order them by business value to the customer.

The development team, in turn, will swarm on each story, by order of value (priority), and focus on completely delivering one story at a time (completely means coding, testing, integrating and anything else required).

In this way, at any given moment, including the release date, the team has a product that is shippable and is the most valuable product they could possibly develop.
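
Here’s a hedged sketch of those mechanics; the stories, values and costs are all invented, and real planning would of course use measured velocity rather than a fixed cost per story:

```python
# Hedged sketch of the release mechanics described above: order the backlog
# by business value, complete stories strictly in that order, and whatever
# is done on the release date ships. Stories, values and costs are invented.
backlog = [
    ("Export report as PDF", 8),
    ("Search by customer name", 21),
    ("Login with SSO", 13),
]

plan = sorted(backlog, key=lambda story: story[1], reverse=True)

days_left = 10
shipped = []
for title, value in plan:
    cost_in_days = 4                  # pretend every story takes four days
    if days_left < cost_in_days:
        break                         # the deadline cuts the plan here...
    days_left -= cost_in_days
    shipped.append(title)

print("Shipping on the deadline:", shipped)  # ...and the most valuable work ships
```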

Is the Best Product Possible Good Enough?

While the best product might not be everything that was envisioned – because, again, that is simply not possible: not with waterfall, not with agile, not at all – the “Best Product” will have the capabilities most valued by the customer.

The findings of the Standish Group show that only 20% of the features in any given product are frequently used, while 64% are rarely or never used. Therefore, statistically speaking, even if you only complete 36% of your scope, you may still have a good enough product!

Of course, assuming that you aren’t slacking, and the project’s deadline isn’t insanely short, it is more likely that you will complete a significantly larger portion of your scope. It is even possible that you will have a “good enough” product that you can ship even before the deadline! There is not a single customer that will say no to reducing time to market!

So I’d say, yes – the Best Product is good enough.

And that’s how it’s done. I hope.

How about your products? Do you work this way? Are your experiences similar? Share!

Thanks,

Assaf.

Saturday, December 10, 2011

How to Measure Technical Debt

Or: An Executive’s Guide to the Cost of Software Architecture Shortcuts

In my experience, in any software project of sufficient complexity, it is necessary to cut corners in the software architecture. These shortcuts are decisions that you make now so that you will be able to release your next version of the software that much earlier; these shortcuts are the result of not having enough time to do things “right”.

This is called your project’s Technical Debt.

Like financial debt, which is the metaphor used to explain it, technical debt happens when you “buy” more software features than you have resources (developers and time) for. Like financial debt, it accrues interest over time. And like financial debt, if you don’t pay it back, you may eventually get foreclosed and go bankrupt. That last part is not a metaphor. Your software project simply ends up costing more to maintain than the value you derive from it, and either the project or the company gets shut down, or both.

There are essentially two ways to deal with debt (technical, like financial):

  1. You pay it all at once – This means that you do not “withdraw” any more features, until your debt has been fully paid. It has the benefit of being able to make a fresh start. It has the disadvantage of risking “starvation” while you try to pay the “bank”.
  2. You pay it in installments – This means that you pay off your debt, a small part at a time, still “withdrawing” features, but less than you normally would, cutting down to essentials, for as long as you are still paying off the debt. It takes longer to finish, but you can still “eat” while paying.

Of course, there’s the third option of continuing to withdraw as before, until you go broke or luckily win the “lottery” (investment, buy-out, etc.).

There is no one clear way to deal with debt (financial or technical). It will depend on the situation, and the players. And most of all it will depend on the amount of debt.

Quantifying the Debt

With financial debt, it is easy to determine how much you owe. There’s the amount of money you borrowed, and the interest rate. Both are usually well known, and it is just a matter of keeping track of it, to make sure you can withstand it.

With technical debt, you still have the amount you borrowed, and the interest rate, but you usually don’t know the numbers.

Let’s start with figuring out the derived value of taking the loan. Whenever you demand that a feature or set of features be completed faster than it would normally take if the team were to maintain an appropriate quality level, you are “borrowing” from the architecture. Figure out how much time you saved, and multiply it by the hourly cost of the developers. That is the amount of money you borrowed from the architecture. Figure out what the reduction in the Time to Market will earn you, and that is the derived value of the loan.

The interest rate is not so easily calculated a priori, but it can be estimated (with about as much accuracy as the development cost estimates your team makes – which may be good or bad, depending on your experience):

  1. Map out the areas impacted by the shortcut
  2. Estimate how much it would cost (developers x time) to develop a feature in the impacted areas, before the shortcut(s)
  3. Now, estimate the cost to develop the same feature after introducing the technical debt

The difference in the cost is the interest you are paying. Keep track of the interest as it will change with every change you make to the system. A good practice is to calculate the interest every iteration or sprint. You should consider doing this as part of the sprint planning. This will have the added benefit of giving the product owner the opportunity to pay off some of the debt.
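
A sketch of that bookkeeping, with invented figures (your tracker and spreadsheet will have the real ones):

```python
# Sketch of the technical-debt bookkeeping, with invented figures.
hours_saved = 40          # time the shortcut saved up front
hourly_cost = 100         # blended developer cost, per hour
principal = hours_saved * hourly_cost        # what you "borrowed"

cost_before_shortcut = 3 * 8 * hourly_cost   # feature estimate before the debt
cost_after_shortcut = 5 * 8 * hourly_cost    # same feature, after the debt

interest_per_feature = cost_after_shortcut - cost_before_shortcut

print(f"Principal borrowed from the architecture: {principal}")
print(f"Interest paid per feature in the area:    {interest_per_feature}")
```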

Note that like financial debt, the interest grows over time. The rate varies, and it is difficult to guess what it will be, except by projecting based on past experience. As you begin to accumulate this data, you will be able to make educated decisions on when you can afford to go into technical debt, and when you should avoid it. You should reassess your technical debt’s interest rate every sprint planning.

Armed with this knowledge, developers should now be able to collaborate with management in order to decide how to develop and whether or not it is okay to cut a corner on that next feature.

Finally, remember: your architecture is a cold-blooded loan shark, who will just as soon bust your project’s kneecaps over a missed payment as take your installments.

What do you think? Do you have a preferred method of calculating / estimating technical debt, or a way to handle it?

Please share!

Wednesday, December 7, 2011

10 Reasons to Avoid Test Driven Development

Test Driven Development (TDD) and all of its derivatives (BDD, ATDD) are, in my opinion, great methods to drive a team’s development efforts and raise the quality of the product. But TDD is not a silver bullet. It doesn’t fit every project. The following post lists the top ten reasons not to write automated tests for your code.

If one or more of these conditions apply to you, you should consider forgoing TDD, and in fact any agile techniques, as they will be a waste of effort:

10. There is no client

Sometimes you are developing a product that will not be used by anyone. In this case, any effort expended on raising the quality is a complete waste. Nobody will care.

9. The client is an avid tester

Some people love nothing more than to beta-test new software. The joy of finding new bugs and trying to figure out what went wrong is what they live for. Others are scientists at heart, and love trying to go over stack traces, in order to reverse-engineer the code. If your client happens to be one of those, then writing automated tests will take all the fun out of using your software. Don’t do it!

8. The project is short, simple and straight-forward

If your team can complete the project in a short period of time (no more than a few weeks), and will never, ever have to reopen it for maintenance, then the benefits of maintainability, reusability and extensibility will be lost on you. Spending time and effort on these values is wasteful.

7. Your architecture is perfect

If there is no way to improve your architecture, then there is no need for it to be extensible. TDD, by the nature of its incremental development flow, forces the architecture to be extensible, simply by making you extend it as you go – and this, like lipstick on a pig, is something you just don’t need.

6. Your documentation is perfect

You never miss an API, and any change you make to your software gets documented instantaneously. The tests you create with TDD serve as a form of documentation, an example of how to use the API; properly named and written tests actually explain the context of the test, so it is easy to find the test that shows you what you need to understand. Your documentation is so complete that writing tests would be a clear violation of the DRY principle, so you should clearly avoid tests.

5. Your team never changes and all members’ memories are perfect

The collective memory never forgets a single line of code it wrote, nor what the context was when writing it. You, therefore, do not need the tests to remind you what the code does, why it does it, or how to use it. This also means that your team members never leave, and no new members are ever recruited, because if that were to happen, you’d lose memories, or have members who don’t remember the code (having not been there when it was written). If this is the case, don’t bother with tests; they will just interfere with your incredible velocity.

4. “Done” means the code is checked in

Many teams have a definition of done (DoD) which says that a feature is “done” when it is in a state that the end user can receive and run (coded, tested, deployed, documented, etc.). Many others, however – your team included – prefer a simpler and more easily achieved definition that accepts “checked in” as “done”. For you it is sufficient that developers declare that they have completed their part; anything else is someone else’s responsibility. If you don’t need the code to be tested for the product owner / manager / user to accept it, then you are better served moving on to the next feature as soon as you can, instead of dragging on your relationship with this one.

3. You’re paid to code, not test

Ignoring the fact that unit tests are code (sophistry), testing is what we have testers for. Perhaps your team’s testers are fast enough that they can run tests on your code and give you feedback within mere moments, pinpointing the areas where you broke the code so you can fix it while the changes are fresh in your mind, as well as run a complete regression suite on the product every night, in case you broke something in a different component (they don’t mind working nights; they love the peaceful quiet). Good for you; cherish those testers, and make sure they have enough work so they won’t get bored and move on to a more challenging company.

2. Debugging doesn’t count, and testing takes too long

Like any competitive company, your team must deliver on time, which means you must estimate the time it will take to deliver. Since your DoD doesn’t include testing, and you probably can’t guess how long it will take to debug the feature – what with all the cycling back and forth from development to QA – you estimate only how long it will take to code it. If you want to meet your commitment, you can’t be adding a 20% overhead to your delivery time, or you’ll miss the deadline. Worse, if you add 20% to your estimates, your manager might call you out on padding the estimates, which is his job. If that happens, who knows what might happen? Better play it safe.

1. It’s just a theory

Like Evolution (and Gravity), it’s just a theory. Even if all of the above reasons weren’t valid, nobody has ever successfully proven that this product could be completed faster and with better quality using new-age development methodologies like TDD. It’s just a matter of opinion.

Test yourself

Now, to test whether or not you should use test driven development, go over the above list. Count how many reasons apply to you. If you scored ten points, don’t use TDD. In fact, if you scored more than one (reason #8 might actually be legitimate), don’t write any code at all. Perhaps you’d be better served choosing a career that has fewer unknowns and moving parts. Perhaps paving roads?

Disclaimer: This post was written… Aw, just figure it out yourself!

Tuesday, August 9, 2011

Another Stab at Work Estimation

The Story

A couple of days ago, I was at a client, helping with his company’s migration from one version control system to another. As part of the work, we had to get the latest version of the sources (i.e. download them to a local folder on the PC), copy them to the target system’s workspace (i.e. perform a standard Windows copy to another folder on the PC), and then check in the sources.

I want to discuss the copying process. Yes, the copy.


Not much to copying, you say? Well, essentially, you’re right. You select the folder that you are going to copy from (I selected a folder that was just over 1GB in size, and just over 50,000 files altogether). Then you copy it to the target folder. That’s it. Oh, and you wait.

That’s what I really want to discuss. The Wait.

This was my experience:

  1. First I had to wait for a few seconds (I think it was about 10 seconds) while Windows calculated the time the copy would take.
  2. Next, it started copying, and said it would take roughly 3 minutes.
  3. After a few more seconds of copying, it re-evaluated the work at 4 minutes or so.
  4. After that, Windows progressively updated the estimate until it reached about 5:30 minutes.
  5. At that point, the estimate seemed to be on track, as the remaining time steadily went down.
  6. About 5 seconds or so from the end of the copy, each remaining second began taking longer than a second to tick off (it took well over 5 seconds to complete from that point).
  7. Finally, Windows reported that it was done: 0 seconds remaining. That means it’s done, doesn’t it? I thought so. Windows, apparently, has a different definition of done. It took 10 more seconds till it was really done.

Sound familiar to anyone? It should. Anyone who has ever used a computer (I’ll hazard a guess that other PCs and operating systems offer similar experiences) probably doesn’t even notice it anymore, seeing as it is such a common occurrence.

That’s not what I’m talking about, though. I’m talking to all of you corporate developers, team leaders and managers.

The Moral

In my experience many (perhaps most?) teams go through a frighteningly similar cycle whenever they estimate the cost of development on a feature, or a project.

This is (all too often) my experience:

  1. The team spends some effort “calculating” the effort required to develop each feature. Usually, it seems like the same 10 seconds are spent, and the results are equally optimistic, and just as “accurate”.
  2. The team makes their estimate (optimistic and inaccurate).
  3. Next, they start working on the problem. Soon enough, the team encounters the first problem they didn’t think of. The estimate goes up.
  4. This goes on several times, pushing the deadline into the slack assigned to development.
  5. Eventually, the team reaches some rhythm, and it seems like the team is on track. At least they report that they are.
  6. Towards the end, when the deadline is just around the corner – in what is sometimes called “crunch time”, or the “robustness” phase – the team keeps announcing “we’re almost done”, or “we’ll finish tomorrow”. This usually takes quite a bit more than a day, of course.
  7. Then they say that they’re done. Only they’re not. They completed part of the work – coding. “Testing? What’s that – that’s what QA are for. We meant done coding, not ready to deploy. What’s the matter with you?!?” It will take some more time before it actually is fit to let out in public.

And everybody is surprised.

That is what I don’t get. A computer, performing a copy – one of the simplest and most straightforward of tasks – can’t get the estimates right. And I was copying locally! To another folder on the same disk (the same logical drive, even).

How do you expect a human being, performing an act of research & development, doing something nobody did before (or at the very least not done by this human being, in this environment), to out-perform a computer?

I know managers and customers depend on these numbers. I suggest a 12 step program to deal with that. There are alternatives.

Welcome to the world of Agile development.

See you in the next post.

Assaf.

P.S. For the next folder I had to copy, I used FastCopy. It took me less than half the time. Didn’t waste any of it on estimations.