
Sunday, December 25, 2011

A Software Developer’s New Year’s Resolution

Or: 5 Things You Can Do Today to Improve Your Software Product

New Year’s Eve is around the corner. This year is ending, and a whole new year full of potential to be better is just waiting to begin. So how can you make it better? If you, like me, are a professional developer, then you can improve your life (and your boss’s, and your team’s) by reducing software-delivery-related stress. Here are 5 things you can do, effective immediately, to make your product better:

1. Make a Version Management Plan

If your version control system is a mess, the first thing you should do is straighten it out. If you are not using one, then you should begin doing so immediately; there really is no excuse not to. For starters, there are plenty of high-quality tools out there that are open source and free (Git, Mercurial, and Subversion are the most common), as well as commercial tools (I happen to use and consult on the use of Microsoft’s TFS). Either download and start using a free one, or get one that offers a trial period.

If you are using a VCS, but it’s a mess, and you find yourself mired deep in merge-hell every time you want to deliver your software (internally or externally; it doesn’t matter), then you should straighten it out. Unfortunately, if you have a considerably sized source repository, this may seem like an impossible endeavor. Here’s one way to alleviate at least some of the pains of software delivery, in a few easy steps:

  1. Create a new root-level folder named sources. This folder will contain your new source code repository.
  2. Create a branch from your current repository, and call it main (e.g. /sources/main). This will represent your latest and greatest version, and should be kept as stable as possible, at any moment. No development will be done on it; you will only merge completed changes into it.
  3. Create a branch from main, named dev (e.g. /sources/dev). This branch will be the branch that developers work on to create new features, checking code in and out from there. If there are multiple independent teams, you may have multiple dev branches.
  4. Create a releases folder, which will contain release branches from main. Every time you wish to release your software (internally or externally), create a release branch from main (e.g. /sources/releases/1.2).
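
With the example names above, the resulting repository layout would look something like this (the release number is just an illustration):

    /sources
        /main
        /dev
        /releases
            /1.2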

This is just one possible branching plan. It works particularly well for agile development, where effort is made towards developing as many features as possible before a release, merging them into main whenever they are done, and branching when the iteration is over. Note the following division of work:

  • New features are developed only in dev. They are merged back into main when completed.
  • Fixes are developed on release branches and reverse-integrated into main, and then back to dev, so that they won’t get lost.

This structure allows you to develop new software and stabilize a release at the same time without one effort polluting another. It is also a step towards being able to release a new version anytime.

If your development and release mechanism is well established, but not suited to this branch plan, make one that does work for you. Just make sure it supports separating features from fixes, as well as the ability to release anytime you need to.

2. Automate Your Build

Even if all you do is copy sources from the repository to some drop folder, you have some sort of a build process. Automate it. Even just a part of it. Even just the compilation phase. Every step you let a computer do rather than doing yourself is one less step to worry about. If you aren’t already using an automated build tool, start now. There are many open source tools as well as commercial ones, and most are powerful enough to suit your needs. Remember – if you can run it from the command line, you can automate it as well.
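
To illustrate just how little it takes, here is a minimal C# sketch that shells out to the compiler and reports the result – the msbuild path and solution name are hypothetical, and any real build tool will do this (and much more) for you:

    using System;
    using System.Diagnostics;

    class BuildRunner
    {
        static int Main()
        {
            // Hypothetical paths - point these at your own framework and solution.
            var msbuild = new ProcessStartInfo(
                @"C:\Windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe",
                @"C:\src\MyProduct\MyProduct.sln /p:Configuration=Release");
            msbuild.UseShellExecute = false;

            using (Process build = Process.Start(msbuild))
            {
                build.WaitForExit();
                Console.WriteLine(build.ExitCode == 0 ? "Build succeeded." : "Build FAILED.");
                return build.ExitCode; // a non-zero exit code fails the automation step
            }
        }
    }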

Every time you run the build, you will not only create a deployment package of your system, but also get a good indication of the health of your product (if the product doesn’t build, you will want to know sooner rather than later). Additionally, you will have the start of a platform for testing the system.

For a synergetic benefit with your versioning plan, schedule the following automated builds:

  • Build the software every time you check in changes to your code (on dev or a release branch); most build and VCS tools support this – it is called continuous integration.
  • Build the software every time you wish to merge dev to main. It is even better if you require a successful build as a prerequisite for the merge. This is called a gated check-in in many tools.
  • Schedule a nightly build on main. This will let you know what the state of your system is. Fix any problems here first. Keeping a healthy build will allow you to deliver software at the drop of a hat. Amaze your PM!

3. Add a Health Report

Even a simple email message to the team when a build fails will greatly improve your ability to fix problems with your software. Shorter feedback cycles mean that you can fix problems while their causes are still fresh in your mind! Let your build system email you if it fails; almost every system supports it. Conversely, you may wish to post successful build reports to a company wiki. Let everyone know of your successes.
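
If your build tool’s built-in notifications don’t fit your needs, rolling your own is a few lines of code. A minimal C# sketch (the SMTP server and addresses are made up):

    using System.Net.Mail;

    class BuildNotifier
    {
        // Emails the team when a build fails; server and addresses are hypothetical.
        public static void NotifyFailure(string buildName, string logTail)
        {
            using (var client = new SmtpClient("smtp.mycompany.local"))
            using (var message = new MailMessage(
                "build@mycompany.local",       // from
                "dev-team@mycompany.local",    // to
                "Build FAILED: " + buildName,  // subject
                logTail))                      // body - e.g. the last lines of the build log
            {
                client.Send(message);
            }
        }
    }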

4. Add an Automated Test

While the benefits of TDD as a design mechanism go far beyond just having a suite of tests, we should not belittle the benefit of an automated regression test. Even a simple record & replay smoke test, created with a black-box test tool, will greatly improve your system. Start with a single test that runs through as many parts of your system as you can easily manage (note – this might mean you need to add a deployment action to your automated build). Report if it fails. Run it every time you build. The more tests you add, the greater confidence you’ll have in your build system! Maintain your tests: fix any that fail due to legitimate changes, and remove those that are no longer relevant. You don’t want your system to cry wolf!
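
For example, if your product is a web application, a first smoke test could be as small as this NUnit sketch (the URL is hypothetical – point it at wherever your automated build deploys the system):

    using System.Net;
    using NUnit.Framework;

    [TestFixture]
    public class SmokeTests
    {
        // Hypothetical address of the freshly deployed system.
        private const string HomePageUrl = "http://test-server/myproduct/";

        [Test]
        public void HomePage_ShouldRespondWithOk()
        {
            var request = (HttpWebRequest)WebRequest.Create(HomePageUrl);

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
            }
        }
    }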

5. No Broken Windows!

The most important thing you can do is to make a New Year’s resolution that you will not suffer any problem in your software delivery lifecycle to remain unfixed! Once you fall off the wagon, your system will start to decay back into uncontrollable chaos. Think of maintaining your ALM like brushing your teeth: you may not like it, but you do it every day (more than once), or your teeth will rot until ultimately you won’t be able to eat!

In conclusion,

My new year’s resolution is to vigilantly keep my teams’ products “on the wagon”. What is yours?

Happy Holidays, and Happy New Year!

See you all next year,

Assaf.

Saturday, December 10, 2011

How to Measure Technical Debt

Or: An Executive’s Guide to the Cost of Software Architecture Shortcuts

In my experience, in any software project of sufficient complexity, it is necessary to cut corners in the software architecture. These shortcuts are decisions that you make now so that you will be able to release your next version of the software that much earlier; they are the result of not having enough time to do things “right”.

This is called your project’s Technical Debt.

Like financial debt, which is the metaphor used to explain it, technical debt happens when you “buy” more software features than you have resources (developers and time) for. Like financial debt, it accrues interest over time. And like financial debt, if you don’t pay it back, you may eventually get foreclosed and go bankrupt. That last part is not a metaphor. Your software project simply ends up costing more to maintain than the value you derive from it, and either the project or the company gets shut down, or both.

There are essentially two ways to deal with debt (technical or financial):

  1. You pay it all at once – This means that you do not “withdraw” any more features, until your debt has been fully paid. It has the benefit of being able to make a fresh start. It has the disadvantage of risking “starvation” while you try to pay the “bank”.
  2. You pay it in installments – This means that you pay off your debt a small part at a time, still “withdrawing” features, but fewer than you normally would, cutting down to essentials for as long as you are paying off the debt. It takes longer to finish, but you can still “eat” while paying.

Of course, there’s the third option of continuing to withdraw as before, until you go broke or luckily win the “lottery” (investment, buy-out, etc.).

There is no one clear way to deal with debt (financial or technical). It will depend on the situation, and the players. And most of all it will depend on the amount of debt.

Quantifying the Debt

With financial debt, it is easy to determine how much you owe. There’s the amount of money you borrowed, and the interest rate. Both are usually well known, and it is just a matter of keeping track of it, to make sure you can withstand it.

With technical debt, you still have the amount you borrowed, and the interest rate, but you usually don’t know the numbers.

Let’s start with figuring out the derived value of taking the loan. Whenever you demand that a feature or set of features be completed faster than it would normally take if the team were to maintain an appropriate quality level, you are “borrowing” from the architecture. Figure out how much time you saved, and multiply it by the hourly cost of the developers: that is the amount of money you borrowed from the architecture. Then figure out what the reduction in time to market will earn you: that is the derived value of the loan.

The interest rate is not so easily calculated a priori, but it can be estimated (with about as much accuracy as the development cost estimates your team makes – which may be good or bad, depending on your experience):

  1. Map out the areas impacted by the shortcut
  2. Estimate how much it would cost (developers x time) to develop a feature in the impacted areas, before the shortcut(s)
  3. Now, estimate the cost to develop the same feature after introducing the technical debt

The difference in cost is the interest you are paying. Keep track of the interest, as it will change with every change you make to the system. A good practice is to recalculate it every iteration or sprint; consider doing this as part of sprint planning. This will have the added benefit of giving the product owner the opportunity to pay off some of the debt.
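
To make the arithmetic concrete, here is a toy calculation; every number in it is made up for illustration:

    using System;

    class TechnicalDebtExample
    {
        static void Main()
        {
            const decimal hourlyCost = 50m; // cost of one developer hour (made up)

            // The loan: cutting the corner saved 2 developers x 40 hours.
            decimal principal = 2 * 40 * hourlyCost;             // $4,000 "borrowed"

            // The interest: a representative feature in the impacted area was
            // estimated at 60 hours before the shortcut, and 75 hours after it.
            decimal interestPerFeature = (75 - 60) * hourlyCost; // $750 per feature

            Console.WriteLine("Borrowed {0:C}; paying {1:C} interest per feature touched.",
                              principal, interestPerFeature);
        }
    }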

Note that, like financial debt, the interest grows over time. The rate varies, and it is difficult to guess what it will be, except by projecting from past experience. As you accumulate this data, you will be able to make educated decisions about when you can afford to go into technical debt, and when you should avoid it.

Armed with this knowledge, developers should now be able to collaborate with management in order to decide how to develop and whether or not it is okay to cut a corner on that next feature.

Finally, remember: your architecture is a cold-blooded loan shark, who will take your installments just as readily as he will bust your project’s kneecaps over a missed payment.

What do you think? Do you have a preferred method of calculating / estimating technical debt, or a way to handle it?

Please share!

Wednesday, December 7, 2011

10 Reasons to Avoid Test Driven Development

Test Driven Development (TDD) and all of its derivatives (BDD, ATDD) are, in my opinion, great methods to drive a team’s development efforts and raise the quality of the product. But TDD is not a silver bullet. It doesn’t fit every project. The following post lists the top ten reasons not to write automated tests for your code.


If one or more of these conditions apply to you, you should consider forgoing TDD, and in fact any agile techniques, as they will be a waste of effort:

10. There is no client

Sometimes you are developing a product that will not be used by anyone. In this case, any effort expended on raising the quality is a complete waste. Nobody will care.

9. The client is an avid tester

Some people love nothing more than to beta-test new software. The joy of finding new bugs and trying to figure out what went wrong is what they live for. Others are scientists at heart, and love trying to go over stack traces, in order to reverse-engineer the code. If your client happens to be one of those, then writing automated tests will take all the fun out of using your software. Don’t do it!

8. The project is short, simple and straight-forward

If your team can complete the project in a short period of time (no more than a few weeks), and will never, ever have to reopen it for maintenance, then the benefits of maintainability, reusability and extensibility will be lost on you. Spending time and effort on these values is wasteful.

7. Your architecture is perfect

If there is no way to improve your architecture, then there is no need for it to be extensible. TDD, by the nature of its incremental development flow, forces the architecture to be extensible, simply by making you extend it as you go, and this, like lipstick on a pig, is something you just don’t need.

6. Your documentation is perfect

You never miss an API, and any change you make to your software gets documented instantaneously. The tests you create with TDD serve as a form of documentation, an example of how to use the API. Properly named and written tests actually explain the context of the test, so it is easy to find the test that shows you what you need to understand. Your documentation is so complete that writing tests is a clear violation of the DRY principle, so you should clearly avoid tests.

5. Your team never changes and all members’ memories are perfect

The collective memory never forgets a single line of code it wrote, nor what the context was when writing it. You therefore do not need the tests to remind you what the code does, why it does it, or how to use it. This also means that your team members never leave, and no new members are ever recruited, because if that were to happen, you’d lose memories, or have members who don’t remember the code (having not been there when it was written). If this is the case, don’t bother with tests; they will just interfere with your incredible velocity.

4. “Done” means the code is checked in

Many teams have a definition of done (DoD) under which a feature is “done” when it is in a state that the end user can receive and run (coded, tested, deployed, documented, etc.). Many other teams, however, yours included, prefer a simpler and more easily achieved definition that accepts “checked in” as “done”. For you, it is sufficient that the developer declared that he or she completed his or her part; anything else is someone else’s responsibility. If you don’t need the code to be tested for the product owner / manager / user to accept it, then you are better served moving on to the next feature as soon as you can, instead of dragging out your relationship with this one.

3. You’re paid to code, not test

Ignoring the fact that unit tests are code (sophistry), testing is what we have testers for. Perhaps your team’s testers are so fast that they can run tests on your code and give you feedback within mere moments, pinpointing the areas where you broke the code so you can fix it while the changes are fresh in your mind – and run a complete regression suite on the product every night, in case you broke something in a different component (they don’t mind working nights; they love the peaceful quiet). Good for you; cherish those testers, and make sure they have enough work so they won’t get bored and move on to a more challenging company.

2. Debugging doesn’t count, and testing takes too long

Like any competitive company, yours must deliver on time, which means the team must estimate how long delivery will take. Since your DoD doesn’t include testing, and you probably can’t guess how long it will take to debug the feature, what with all the cycling back and forth between development and QA, you estimate only how long it will take to code it. If you want to meet your commitment, you can’t be adding a 20% overhead to your delivery time, or you’ll miss the deadline. Worse, if you add 20% to your estimates, your manager might call you out for padding them – which is his job. And then who knows what might happen? Better play it safe.

1. It’s just a theory

Like Evolution (and Gravity), it’s just a theory. Even if all of the above reasons weren’t valid, nobody has ever successfully proven that this product could be completed faster and with better quality using new-age development methodologies like TDD. It’s just a matter of opinion.

Test yourself

Now, to test whether or not you should use test driven development, go over the above list. Count how many reasons apply to you. If you scored ten points, don’t use TDD. In fact, if you scored more than one (reason #8 might actually be legitimate), don’t write any code at all. Perhaps you’d be better served choosing a career that has fewer unknowns and moving parts. Perhaps paving roads?

Disclaimer: This post was written… Aw, just figure it out yourself!

Friday, November 25, 2011

How to Break a User Story into Tasks

One of the vaguest aspects of Scrum is breaking stories into tasks, which is done in (the second half of) the Sprint Planning meeting. Most of the questions I get when training or coaching boil down to one: “How do I break a user story into tasks?”

Defining The Problem

My favorite definition for a user story is that it is a thin, vertical slice of functionality, describing a small increment in value to the user (or customer).

A task, on the other hand, is a step taken by the development team, required to fulfill the requirements of the user story. As a rule of thumb, I further like to define that a task should take between half a day and two days to complete.

The question, therefore, should be: what are the steps the team needs to take for the story to be considered complete?

Look at Your Architecture

Explicitly or implicitly, your project has an architecture. It might be anything from a masterpiece drawn out in painstaking elaboration by the architecture team, to a high-level conceptual architecture diagram. You should look at the components in your system, and ask yourself which components you need to modify (or add) for the story.

For example, let’s say that we have a product that is a social networking web site of some sort. Given a story such as In order to broaden my network, as a user, I want to be able to import contacts from Gmail, and looking at a typical architecture for such a product, my tasks could be something like:

  • Add a new Import Contacts page
  • Add a service to get contacts from Gmail
  • Modify the contact class to accommodate Gmail specific data (let’s assume such exists)
  • Modify the contacts data schema

Assuming these tasks are right-sized (4-16 hours), that’s it. If one of these tasks is bigger, break it down. If a task is smaller, merge it with a related one (if one exists; if not, don’t sweat it – my 4-16 rule is just a guideline). We now have all the tasks needed to complete the coding of the story.

Look at the Definition of Done

Of course, just coding your story isn’t sufficient for it to be truly done. In Scrum, “done” means ready to ship, or as I like to define it: the story is as it will be received by the customer / user. Agile teams have a list of requirements that should be true for any story they complete. Look at your Definition of Done (DoD) to see what tasks remain.

For example, let’s say your DoD includes the following:

  • Code Complete
  • Automated UAT (User Acceptance Tests)
  • Documentation is updated
  • Added to shipping package (e.g. InstallShield)

In this case, you should add tasks that help you achieve the story’s DoD. For the above example you might do the following:

  • Write UATs for the story
  • Document the Contacts Import feature

Assuming you didn’t have to add any new assemblies (e.g. libraries, packages, DLLs, executables) to your installation, you probably wouldn’t need to modify your packaging tool, so there’s no need to add a task for that.

That’s It!

You now have 6 tasks for the story:

  • Add a new Import Contacts page
  • Add a service to get contacts from Gmail
  • Modify the contact class to accommodate Gmail specific data (let’s assume such exists)
  • Modify the contacts data schema
  • Write UATs for the story
  • Document the Contacts Import feature

You can now estimate how much time each one will take, and are ready to move on, and break down the next story.

Tuesday, November 22, 2011

How to Refactor Your Code

If you develop your products using TDD or any of its derivatives (ATDD, BDD, etc.), you are following the Red → Green → Refactor loop. If you are just getting started with TDD, you might find that while the red and green phases (write a failing test, then make the test pass) are simple enough to understand and demonstrate, the refactor phase is much less clear, and I’ve found few sources that explain it properly. In this post I will attempt to thoroughly explain what I do, and what I consider to be a best practice.

Refactor to Remove Redundancy

I usually use the refactor phase to reconstruct my code by removing duplication. I do this mostly in my production code (the code that will ship to my customer). I often leave duplications in my test code (the code that tests the production code), because in my tests I value readability over adhering to the DRY principle.

You should always start with your failing test (or unsatisfied specification, if you're into BDD). Production classes and methods that you pull from your tests should be public.

  • Note: If you're developing in .NET, you may wish to consider using the internal keyword (and adding the InternalsVisibleTo assembly attribute to your production assembly, so that your test project can use the code) instead. Then you would make it public only if another production assembly depends on it.
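
For example, in the production assembly’s AssemblyInfo.cs (the test assembly name here is hypothetical):

    using System.Runtime.CompilerServices;

    // Lets the test project see the production assembly's internal members.
    [assembly: InternalsVisibleTo("MyProduct.Tests")]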

When you reach the refactoring phase (after making a test pass, or before writing a failing one), you should look at the code and walk through the following steps:

  1. Any substantial or complex code that you need in two (or more) methods should be extracted into a private method and called from both (all) of the original locations (see the sketch after this list).
    • Note: Any method that you create as part of the refactoring phase of your TDD should be made private.
  2. Make helper methods protected only when a derived production class needs them.
  3. Any method created in the refactoring phase (now private or protected) that you need in another production class should be extracted to a helper class and made internal (or public, in any language that doesn’t support the concept).
  4. If the helper class is needed in another assembly, then:
    • If the other assembly already depends on the helper class' assembly, make the helper class public (if it isn't already).
    • If the other assembly does not depend on the helper class' assembly, extract the helper class to a helper assembly (the best fit, or a new one) and make it public. Be sure to make the original assembly reference the new helper assembly, if it doesn't already.
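
Here is a minimal sketch of step 1, with hypothetical names: two public methods share validation logic, which gets pulled out into a private helper.

    using System;
    using System.Collections.Generic;

    public class Order
    {
        public List<string> Items = new List<string>();
    }

    public class OrderService
    {
        public void PlaceOrder(Order order)
        {
            ValidateOrder(order);
            // ... place the order ...
        }

        public void UpdateOrder(Order order)
        {
            ValidateOrder(order);
            // ... update the order ...
        }

        // Extracted during the refactor phase; stays private until
        // another class actually needs it (steps 2-4).
        private void ValidateOrder(Order order)
        {
            if (order == null)
                throw new ArgumentNullException("order");
            if (order.Items.Count == 0)
                throw new ArgumentException("An order must contain at least one item.");
        }
    }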

Refactor for Readability

There are no real steadfast rules for this. What I usually do is go over the code I just wrote (or modified) and look for any violations of team coding standards, or poorly named identifiers (classes, methods, variables, etc.). I usually don’t have very many of those, as I tend to name them properly (which is to say, to my own satisfaction, but then again I’m often considered the strictest coder on my teams).

One trick I do use is to go over each method and see if it reads easily. By easily I mean: is it fluent? I don’t go all-out trying to develop a fluent API, but I do check whether the classes, methods and variables lend themselves towards some semblance of the English language. The following is an example of a block of “fluent” code:

Code Snippet

    var user = new User(usernameInput.Text, passwordInput.Text);

    if (userManagementService.Authenticates(user))
    {
        Navigate.To(requestedWebPage);
    }

Fluency isn’t a must-have, but I find it better than commenting. Comments can lie. Code doesn’t.

Refactor for Complexity Reduction

Cyclomatic complexity is a code metric that measures how many distinct paths exist through a program’s code. Code with a high value is more difficult to read and maintain. I find that reducing complexity sometimes makes the code easier to maintain, so I advise doing this kind of refactoring especially when you need to improve your understanding of the codebase, rather than after making every test pass.

Here’s a list of techniques that can reduce the complexity of your code:

  • Invert conditional statements – If your code is nested inside an if statement, where one branch does something and the other does nothing, you can invert the condition (check that the condition does not hold) and return from the method when the inverted condition is true. The code that follows this guard clause runs exactly when the original condition held, but without the extra nesting (see the sketch after this list).
  • Extract blocks of code into private methods – the complexity is moved into a separate, smaller method.
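
A before-and-after sketch of the inverted conditional, reusing the hypothetical Order class from the earlier sketch:

    public class BeforeInverting
    {
        public void Process(Order order)
        {
            if (order != null)
            {
                // ... all the real work, indented one extra level ...
            }
        }
    }

    public class AfterInverting
    {
        public void Process(Order order)
        {
            if (order == null)
                return; // guard clause: bail out early

            // ... all the real work, now at the top level of the method ...
        }
    }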


Refactor for Extensibility

This refactoring technique ties in with complexity reduction, and requires the use of some design patterns. I usually use the Strategy Pattern for this kind of refactoring. The idea is to remove multi-branch conditionals (such as chained if statements or switch-case statements) and extract the different cases into concrete strategy classes.
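
As a minimal sketch (all names hypothetical), here is a switch over an export format replaced by one strategy class per format:

    // Replaces: switch (format) { case "csv": ...; case "xml": ...; }
    public interface IExportStrategy
    {
        string Export(string data);
    }

    public class CsvExportStrategy : IExportStrategy
    {
        public string Export(string data) { return "\"" + data + "\""; }
    }

    public class XmlExportStrategy : IExportStrategy
    {
        public string Export(string data) { return "<data>" + data + "</data>"; }
    }

    public class Exporter
    {
        private readonly IExportStrategy strategy;

        // The caller picks the strategy; supporting a new format means
        // adding a new class, not modifying this one.
        public Exporter(IExportStrategy strategy)
        {
            this.strategy = strategy;
        }

        public string Export(string data)
        {
            return strategy.Export(data);
        }
    }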

There are several benefits to this refactoring technique:

  1. New cases can be added without modifying (or with minimal modification to) existing code. This reduces the chance of breaking existing code.
  2. Cyclomatic complexity is reduced.
  3. The strategies can be tested separately if you feel the need (remember that they are already covered by the original tests).

Other Refactorings

There are many more reasons to refactor code, but the above cover most of the cases I come across. If you are interested in reading more on refactoring software, I suggest starting with the classic books on the subject, such as Martin Fowler’s Refactoring: Improving the Design of Existing Code.

Hope this helps.

Assaf.

Monday, August 29, 2011

What Will Microsoft Give Away at the Build Conference?

In just over two weeks, on September 13th, the Build Windows conference will begin in Anaheim, CA. Microsoft is expected to unveil Windows 8 and show software and hardware developers how to use it. The good ol’ guys in Redmond have kept a really tight lid on what exactly is going on there, and even now, at the time of writing, they haven’t revealed any information about the agenda and the sessions.

There have been several posts – some seemingly well informed, others looking like baseless gossip – regarding what is going to happen with WPF, .NET and the rest of the existing development tools Microsoft has supported for the last decade or so, given some unfortunate remark about HTML5, CSS & JavaScript being the new tools of choice for developing Windows 8 “immersive” applications.

One thing that they have kept extremely quiet about, however, is what they will give away as gifts to those who come to the conference.

Giveaways

The year 2011 is, without doubt, the year of the tablet. The iPad 2, several Android Honeycomb tablets, and now the demos Microsoft gave at Computex and on Channel 9 (all of which showcase Windows 8 on tablets) all point towards the tablet being the main focus of development this year.

Moreover, at Google I/O 2011, Google handed out 5,000 Samsung Galaxy Tab 10.1 tablets, one to each registered guest.

With all the hype created around Windows 8 and tablets this year, Microsoft can’t afford to do any less.

It is my best educated guess that Microsoft will be handing out tablets with the latest build of Windows 8!

So, What’ll it be, exactly?

In the aforementioned Computex demo, in the 19th minute (18:30, to be exact), Mike Angiulo, Microsoft’s Vice President of Windows Planning, demoed the new ARM-based developers’ tablet (designed especially for software and hardware developers, with extra ports and all), and said, and I quote:

“Now we have these developer reference systems that are all up and running… these are systems that are built for hardware and software developers… what they do represent are integrated, working PCs”

Mike went into some detail about the developer tablet’s specs:

  • ARM-based 1.2GHz Snapdragon chip
  • 13mm thick
  • 11.6” Screen
  • Extensive USB support (seamless connection to a disk-on-key was demonstrated, and worked as expected)
  • 720p rear camera
  • All of the common sensors (proximity, orientation, etc.)

It looks like it runs (“flies” even) beautifully, instantly loading and playing an H.264 full HD video. Real-world usage will probably not be as good as demoed, but still…

Based on this, I believe that we (yes, I’m registered, and going to be there) will receive the developer reference tablet running Windows 8.

Remember: if I turn out to be right, you read it here first!

See you at “Build”,

Assaf.

Friday, August 12, 2011

Alternatives to Planning Poker

This blog post was inspired by a tweet I saw.


The best thing about Planning Poker – and in my opinion the one thing it brings to the table that no other system does – is the escape it offers from herd mentality. Planning Poker allows each member of the team to form his own opinion uninfluenced by others, and then allows the team as a whole to use the Wisdom of Crowds to reach a good estimate.

The downside is that it is silly.

At least that’s how it looks from the outside. I mean, come on! A bunch of grownups spending precious hours of the work day, at the office, taking up a conference room while playing cards, until someone pulls a Joker (or whichever card signals the need for a coffee break)? I know the value of games and having fun at work, but I’m pretty sure that most adults are capable of enjoying good solid productive work without having to look goofy.

Here’s what I suggest:

  1. Discuss the PBI (Product Backlog Item) for a short while (2 minutes seems like a good average).
  2. Everyone takes a quiet moment to make a gut estimate and writes it down. It doesn’t matter how it is made – relative (e.g. story points), absolute (ideal days) or rough (T-shirt sizes).
  3. Alongside the estimate, everybody lists the top reason (or a few reasons, if you must – but that’s all!) why he believes his estimate to be right. Examples could be:
    1. “We’ve done this before. Just extract and modify the ACME algorithm”
    2. “Nobody’s done it here before”
    3. “The data access layer is a mess. It will take a week to refactor”
    4. “Only George knows how to do it, and he’s off, snorkeling in Aruba…”.
    5. “A simple state machine and we’re done! I’m the state machine guru!”.
    6. “Haven’t got a clue. Just want to play it safe.”
  4. When everyone is done, merge the reasons, and decide on the best estimate.
  5. Rinse and repeat: If consensus is reached, repeat with the next item. Otherwise, repeat with the same.

Does it look familiar? It should. But shhhh! Don’t tell anyone. Let’s keep it our secret.

Oh, and you can call it team estimation. Now that’s a good, serious, meaningful name, isn’t it?

Now run along, have some fun (the good productive, serious kind, not the goofy kind).

Assaf.

Tuesday, August 9, 2011

Another Stab at Work Estimation

The Story

A couple of days ago, I was at a client, helping with his company’s migration from one version control system to another. As part of the work, we had to get the latest version of the sources (i.e. download them to a local folder on the PC), copy them to the target system’s workspace (i.e. perform a standard Windows copy to another folder on the PC), and then check in the sources.

I want to discuss the copying process. Yes, the copy.


Not much to copying, you say? Well, essentially, you’re right. You select the folder that you are going to copy from (I selected a folder that was just over 1GB in size, containing just over 50,000 files altogether). Then you copy it to the target folder. That’s it. Oh, and you wait.

That’s what I really want to discuss. The Wait.

This was my experience:

  1. First I had to wait for a few seconds (I think it was about 10) while Windows calculated the size of the job.
  2. Next, it started copying, and said it would take roughly 3 minutes.
  3. After a few more seconds of copying, it re-evaluated the work at 4 minutes or so.
  4. After that, Windows progressively updated the estimate until it reached about 5:30 minutes.
  5. At that point, the estimate seemed to be on track, as the remaining time steadily went down.
  6. About 5 seconds or so from the end of the copy, each remaining second began taking longer than a second to tick off (i.e. it took well over 5 seconds to complete from that point).
  7. Finally, Windows reported that it was done. 0 seconds remaining. That means it’s done, doesn’t it? I thought so. Windows, apparently, has a different definition of done. It took 10 more seconds until it was really done.

Sounds familiar to anyone? It should. Anyone who has ever used a computer (and I’ll hazard a guess that other PCs and operating systems offer similar experiences) probably doesn’t even notice it anymore, seeing as it is such a common occurrence.

That’s not what I’m talking about, though. I’m talking to all of you corporate developers, team leaders and managers.

The Moral

In my experience, many (perhaps most?) teams go through a frighteningly similar cycle whenever they estimate the cost of developing a feature or a project.

This is (all too often) my experience:

  1. The team spends some effort “calculating” the effort required to develop each feature. Usually, it seems like the same 10 seconds are spent, and the results are equally optimistic and just as “accurate”.
  2. The team makes their estimate (optimistic and inaccurate).
  3. Next, they start working on the problem. Soon enough, the team encounters the first problem they didn’t think of. The estimate goes up.
  4. This goes on several times, pushing the deadline into the slack assigned to development.
  5. Eventually, the team reaches some rhythm, and it seems like the team is on track. At least they report that they are.
  6. Towards the end, when the deadline is just around the corner – in what is sometimes called “crunch time”, or the “robustness” phase – the team keeps announcing “we’re almost done” or “we’ll finish tomorrow”. This usually takes quite a bit more than a day, of course.
  7. Then they say that they’re done. Only they’re not. They completed part of the work – coding. “Testing? What’s that – that’s what QA are for. We meant done coding, not ready to deploy. What’s the matter with you?!?” It will take some more time before it actually is fit to let out in public.

And everybody is surprised.

That is what I don’t get. A computer, performing a copy – one of the simplest, most straightforward tasks there are – can’t get the estimates right. And I was copying locally! To another folder on the same disk (the same logical drive, even).

How do you expect a human being, performing an act of research & development – doing something nobody has done before (or at the very least, not done by this human being, in this environment) – to outperform a computer?

I know managers and customers depend on these numbers. I suggest a 12-step program to deal with that dependency. There are alternatives.

Welcome to the world of Agile development.

See you in the next post.

Assaf.

P.S. For the next folder I had to copy, I used FastCopy. Took me less than half the time. Didn’t waste any of it on estimations.

Tuesday, June 28, 2011

Our Talk at Sela Dev-Days – ALM Best Practices with TFS 2010

Today Yuval Mazor and I gave the above-titled talk in front of ~20 people, as part of the ALM day of Sela Group’s Dev Days 2011. People were quite receptive – and I believe we got them to understand not only how you do things in TFS 2010, but also why. Some of the things we talked about:

  • Best practices for managing software development projects with TFS 2010
  • What are work items and how to use them (including customizations and links)
  • How to properly build a branching plan
  • Where and how to apply automated builds and CI

Unfortunately, we didn’t have enough time to cover TFS 2010 reports in detail – which means another talk is in order!

The presentation is available online here.

Thanks to Shai Raiten for the photography.

A big thank-you to our audience – it was great having you!

Software & I

Hello all,

It took me a considerable amount of time to come up with a name for this blog. It was particularly difficult, because I didn’t want to pigeon-hole myself and limit the blog to just one subject. My other blog, Nightmare on ALM Street, specializes in gems about my current primary job – implementing application lifecycle processes with Microsoft Team Foundation Server.

But there’s more to me – even professionally – than just that. I’m a software developer, and while I mostly develop on the Microsoft technology stack (e.g. .NET and C#), I also dabble in other languages, such as Java for Android development, in methodology beyond TFS (Agile / Scrum), and even in things that are not software related.

So I needed a blog that would talk about all of that – software being a major part of it, but also the other things that interest me.

In the end I came up with a simple and straightforward title – Software and I.

I promise to talk about anything that interests me – and hopefully, it will be of interest to you as well.

Lastly, I would like to give a very special thank you, to my good friend, Maor Linn, for designing the blog’s title banner. If you like it, give a holler – I’ll be sure to let him know.

Thanks,

Assaf.