Tuesday, August 21, 2012

Windows 8 RTM is Available for Evaluation

If you haven’t been living under a rock, then you’ve probably heard about Windows 8. It’s new, it’s different, it moved your cheese to a different time zone.

And it’s freely available for a 90-day evaluation!

You can download the Enterprise edition here.

A few things to note before you try it:

  • This evaluation cannot be upgraded!
  • You will have to install a retail version of Windows when the trial period is over! Don’t say I didn’t warn you…
  • To revert to a previous version of Windows (say, Win-7), you’ll have to do a clean install.

You may wish to try installing it on a virtual machine for evaluation. Personally, I’m partial to VirtualBox, though you can of course create the VM with any virtualization platform.

I’ve already installed it. So far, so good. Everything but my Dell’s fingerprint reader works. Grrrr….

Saturday, June 9, 2012

How Does the LinkedIn Hack Affect Me?

June 10th Update: I added a new GUI tool to make it easier to check your password’s hash.

If you have a LinkedIn account, you may be the owner of one of the 6.4 million accounts that were recently hacked. You should check to be sure. If you have been hacked, your password is compromised in every account where you use it, now and in the future! In this post I will try to help you check whether or not you’re safe, explain what LinkedIn did wrong, and show how and why all developers should be more careful.

First Things First: Was I Hacked?

I wrote a simple command-line tool that will help you check whether you were hacked. You enter your password, and the tool checks the SHA-1 hash of your password against the published list of hacked hashes, and tells you whether you’re safe. Don’t understand what I’m talking about? Never mind. You soon will.

  • Is it safe? Yes. It is an offline tool that never connects to the internet – your password is not published anywhere, nor is it saved to disk or added to the list.
  • Don’t believe me? No problem – disconnect your internet connection while using it, then delete the application from your machine before reconnecting.
  • Still worried? Still no problem. I’ve published the sources. You can check what I do, and compile the checker yourself.

The hash-checker project is stored at http://hashchecker.codeplex.com. You can get both the binary and the source code there.

The list of compromised hashes can be found here.


To check whether you were affected by the aforementioned LinkedIn attack, download the tool and the published list of stolen hashes. Make sure the text file and executable are in the same location.

  1. Run the tool (Icon added to your desktop – you can’t miss it).
  2. To check against the LinkedIn hacked hash list, just type your password and select the hash list file.
  3. Optionally, you may unhide the characters, decide not to pad, or change the number of characters padded.
  4. Press “Search”, and wait a bit. Hopefully, the result will be in green, like mine…

In case you’re wondering about the padding: the published list has the first 5 characters of each hash zeroed, so as not to expose the hashes completely – much like credit-card leaks, where several of the digits are masked.
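The padded comparison is simple to sketch. This is not the actual hash-checker’s source – just a minimal Python illustration of the idea; the function name and the file format (one hex hash per line) are my assumptions:

```python
import hashlib

def is_compromised(password, hash_file, padded_chars=5):
    """Check a password's SHA-1 hash against a leaked-hash list.

    The LinkedIn dump zeroed the first few hex characters of many
    hashes, so we compare only the unmasked suffix of each entry.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest()
    suffix = digest[padded_chars:]
    with open(hash_file) as f:
        for line in f:
            if line.strip().lower().endswith(suffix):
                return True
    return False
```

Note that the check is entirely offline: the password never leaves the function, which is exactly the safety property claimed above.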

Here’s a screen shot of my self-testing:


I sincerely hope you’re in the clear. And that this is the entire list of compromised hashes…

Either way, take a deep breath, and move on. Time to discuss what happened.

What is a Hash, and What Happened to LinkedIn?

Okay, here’s a crash course in password management. In order to authenticate users, most sites store the unique user identifier (username, email address, etc.) in a database, along with some form of password.

Plain Text Passwords

The most naïve (and dangerous – never do it this way) solution is to store the password in plain text, i.e. unencrypted. When you log in, these sites send the username and password (often unencrypted) over the internet, and then compare the given password with the stored one. If they’re the same, the user is authenticated.

The problem is that if someone manages to hack into the site’s database, that person now has your credentials and can impersonate you on the site.

That might not seem very harmful, especially if it is not an important site, and you didn’t give it any sensitive information like your Social Security Number, or banking credentials. Unfortunately, that is not true! Statistically speaking, most users reuse their usernames and passwords in multiple sites, including ones where sensitive information is stored!

This means that if you register with www.someunimportantsite.com using your email address and the password 12345678, it is extremely likely that you used the same password when you registered with your email provider, with www.paypal.com, and with many other sites as well.

Knowing this, a malicious hacker may crack someunimportantsite.com, and with the credentials he got there, try to log in and impersonate you on paypal.com! Congratulations – you have now lost money!

Hashes and Hashed Passwords

In order to protect the passwords, most sites that care about their users (and about malpractice lawsuits) store their users’ passwords in a form from which the original password cannot be derived. These sites use a one-way hash function: an algorithm that easily and deterministically transforms some plain-text input into a fixed-size output, but for which no feasible algorithm exists that can reverse the operation and derive the input from the output. This output is called a hash. Two commonly used algorithms are SHA-1 and MD5. LinkedIn, by the way, uses SHA-1.

What these sites do, when you log in, is hash your input and compare it to the stored hash. If the hash of your input matches the stored hash, you’ve authenticated.
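The whole hashed-password scheme fits in a few lines. A minimal Python sketch (SHA-1 is shown only because that’s what LinkedIn used; the function names are mine):

```python
import hashlib

def hash_password(password):
    # Deterministic one-way transformation: same input, same hash.
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# At registration, the site stores only the hash, never the password:
stored_hash = hash_password("s3cret")

# At login, the site hashes the submitted password and compares:
def login(submitted):
    return hash_password(submitted) == stored_hash
```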

Problem is, this isn’t good enough.

Rainbow Tables and Rainbow Attacks

As previously mentioned, while it is infeasible to reverse a hash and derive the password, it is easy to generate a hash from some input – and the same input will always generate the same hash. A hacker uses a rainbow table, a precomputed table of known passwords and their hashes, to attack a site that stores hashed passwords.

What the hacker does is take a stolen hash, look it up in the rainbow table, and, if he finds it, read off the password that creates it. Congratulations, you’ve now been compromised.
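A toy version of the lookup makes the attack obvious. Real rainbow tables hold billions of entries (and use chain compression to save space), but the principle is just this dictionary lookup:

```python
import hashlib

# A toy "rainbow table": precomputed hashes of common passwords.
common_passwords = ["123456", "password", "qwerty", "p@ssw0rd"]
rainbow = {hashlib.sha1(p.encode()).hexdigest(): p
           for p in common_passwords}

def crack(stolen_hash):
    # "Reversing" the hash is a simple lookup - no math required.
    return rainbow.get(stolen_hash)
```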

This is what happened with LinkedIn. This is the danger you face.

What Should LinkedIn’s Developers Have Done?

I’ve got no problem with how LinkedIn dealt with the attack after it happened. My gripe is that this fiasco could have been prevented!

What Is a Salted Hash, and How Does it Protect Users?

The problem is that many people reuse passwords, and that many passwords are commonly used by different people: 12345, 1qaz!QAZ, any word in the dictionary, “p@ssw0rd”, and 1337-speak (you pseudo-hacker-cool-boys know who you are). They all appear in every respectable rainbow table.

If a site’s developers added some random, unique characters (a salt) to each password before hashing, the resulting hash would be vastly different from the hash of the same unsalted password – and vastly different from the same password hashed with a different salt.

In short, a salted hash defeats rainbow attacks: the uniquely salted hash would never appear in a precomputed table, no two users would have the same hash, and no user would have the same hash on two different sites.

The great thing is that the salt doesn’t need to be secret! Since an attacker would have to build a new rainbow table for every individual salt, the site can safely store the salt alongside the salted hash without compromising security.

I repeat – all the site’s developers need to do is create a salt (256 bits – 32 random bytes – will suffice) and then hash the password concatenated with the salt:

saltedHash = Algorithm.ComputeHash(password+randomBytes);

Finally, save the salted hash and the salt in the same record in the database.
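Here is a minimal Python sketch of the salt-then-hash recipe described above; the function names and storage details are my assumptions:

```python
import hashlib
import os

def make_salted_hash(password):
    # 32 random bytes = the 256 bits of salt suggested above.
    salt = os.urandom(32)
    digest = hashlib.sha1(password.encode("utf-8") + salt).hexdigest()
    return digest, salt  # store both in the same user record

def verify(password, digest, salt):
    # Re-hash the attempt with the stored salt and compare.
    candidate = hashlib.sha1(password.encode("utf-8") + salt).hexdigest()
    return candidate == digest
```

Because every user gets a fresh random salt, even two users with the same password end up with different hashes in the database.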

That is what LinkedIn should have done!

What Can You Do?

Unfortunately, beyond writing your congressman, brow-beating your site’s developers, and boycotting unsafe sites, you can’t do anything about how the passwords are saved. You probably won’t even know how they manage your password until something bad happens and it gets published.

You can, however, make sure not to use commonly used passwords, and make sure that your accounts on sensitive sites have unique, strong passwords that you don’t reuse.

A password manager tool (I paid for, and love, AI RoboForm) can help you keep passwords without having to remember them, and can generate strong random passwords for you.
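The “generate strong random passwords” part is easy to illustrate. This is not RoboForm’s algorithm – just a minimal Python sketch using a cryptographically secure random source:

```python
import secrets
import string

def generate_password(length=16):
    # Draw each character from letters, digits and symbols using a
    # cryptographically secure generator (not the plain random module).
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))
```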

If this seems like too much work, use levels of security – have one password for unimportant sites, and another safer one for important sites, and a unique one for each extremely sensitive or money-related site (email service, bank, PayPal, etc.).

I hope this helped clear the air around LinkedIn, hashes, salts, and your own safety.


Tuesday, May 8, 2012

One Team’s First Sprint Planning Session

This post is a bit different than my usual ones. Instead of orating off of my soapbox, I’d like to share a moment of agile bliss that I had today. One of the teams I work with at my customer’s company conducted their first “truly agile” sprint planning session. This post is a recount of the experience.

What Came Before

About a month ago, I began working on agile methodology training for a few teams at my client. The focus of the training was team work-process, mostly using Scrum as a template for how we will do things, with a few Kanban inspired overrides. In my first few sessions, I focused on what a user story is, what a task is, and the concept of focusing the team’s energy on completing user stories as fast as possible.

What I taught them boiled down to a few key take-away points:

  • A user story is an independent increment of functionality that is valuable to a stakeholder and can be completed by the team in one sprint
  • A task is one developer’s day of work on one component of the system that is affected by its user story, or work that needs to be done to meet the team’s quality standards
  • Each team member pulls an unassigned task from the most important unfinished user story

Initially there were arguments on all points: “Our stories are too big for one sprint”, “What if a task takes more or less than a day?”, “It’s easier to assign a story to a developer”. I had to convince them that these basic ideas are important and valuable, or at least to give them a try.

The Man’s Too Big…

What we did was take one “too big” user story and look for some way to break it down into thinner vertical slices. We ended up with stories that use naïve implementations, or serve a smaller set of clients, or handle a smaller set of scenarios, with further stories that cover the difference in robustness, clients and scenarios.

With regard to the tasks, I suggested that we simply split tasks that feel too big, or join tasks that feel too small, and not worry about their actual length. We’ll measure the averages and variance after the end of the sprint, and look for better ways to be more predictable as needed. It was more of a “let’s simply try it” solution.

The Man’s Too Strong!

The longest argument was over whether the first story warranted having multiple developers swarm on its tasks until the story is complete. One developer felt strongly that it’s a waste of time to have multiple people learn the story, since we’re certain to complete it in time either way. Another developer didn’t want to be pigeonholed into just the server (or client) coding tasks of each story, and felt that owning a whole story gave her more satisfaction. She also expressed concern over the pain of complex merges. Incidentally, it was mostly the QA engineers who immediately accepted the value of working on one story’s tasks in parallel.

Here are the points I made to alleviate their concerns:

  • Having multiple developers learn a story is a great benefit, as it will allow flexibility in task assignment now, and further down the line in future sprints that may have related stories
  • Having more people understand the story will eliminate bottlenecks (and increases the team’s bus factor – a good thing)
  • Regarding the perceived focusing on one component, some other team member chimed in, suggesting that nothing would stop her from taking a client-side task in one story, and a server-related one in the next. I was really happy that other members joined the discussion; it was no longer a teacher-class relationship, but a team of peers
  • I also reminded the team, that since we define a task as the work on one component for a day, and since we focus on completing one story before moving to the next, we are actually reducing the chance of two developers having to manipulate the same object at the same time – which of course reduces the merge conflicts! Further effort can be made to remove the rest of the edge cases almost entirely
  • I further mentioned that there will be at least two members working on each story in any case: one coder, one tester.

Planning the Sprint

The rest went by rather smoothly. The team calculated how many expected work days they have in the sprint. I suggested that we simply add them up, rather than trying to split along the lines of coder / tester days & tasks.

I noticed that the backlog had some free-floating tasks, i.e. not related to any user story. I asked the team about those, stating that unless there is some value to a stakeholder, the task may be a waste of time. The team told me that it was technical debt. After some discussion, we agreed to deduct the tasks from the available work days, rather than creating fake user stories – we have to own up to the fact that we are paying back a debt to the system, not adding value to it.

The team-leader-turned-part-time-product-owner gave a rough stack ranking (there were several items high on the list with the same rank). I suggested that if he doesn’t care which of the same-rank stories we complete first, I’ll set an internal rank by the order that they appeared in the spreadsheet. He accepted.

Next we started explaining and breaking the stories. Traditionally, the PO explains the stories in the planning session’s first half, and the team breaks them down in the second. Since the PO in this case is a team member, and fully savvy of the product and solution, I suggested that we simply run down the list until we have as many tasks as we have work days (see how that works out?).

The first story was easy. We created 4 coding tasks and 2 quality related tasks.

Break it Down!

The second story was a whopper. We knew it would take most of the sprint. One developer suggested that we split the story by data source – one part retrieving one source’s data, the other part retrieving the second source’s. The team leader shook his head. “The customer needs both,” he said. “There is no value in supplying just one. We’ll split it some other way, if we have to.”

You couldn’t erase the smile from my face with a truckload of erasers. I knew for sure that he got the idea of what a user story is!

With the second, larger story, the team leader frowned and looked at the task list. “This doesn’t feel right,” he said. When asked what the problem was, he said that one task was too big – it would take longer than a day. The team then split it into two parts, and added a third task.

And we were done estimating and committing. Just like that.

All in all, we spent about an hour of planning, 2-3 minutes of which were spent “estimating”, which is to say deciding how to split that “too big” task.

An Agile Manager in a Brave New World

At the end of the meeting, the team leader expressed his regret that we don’t have a full-time technical product manager (TPM), which is the closest role they have to a Product Owner. The group manager, who was present for the Sprint Planning, picked up the impediment and answered, “Let’s take it offline; I’ll try to get one to join the team ASAP”.

Yes sir! The manager became the Scrum Master’s Scrum Master, as per my suggestion in my previous post.

Here’s What We Didn’t Do

There are several things the team didn’t do:

  • Defining proper user stories – I didn’t bother following the ceremony of form with the stories, declaring the reasoning and the interested party. I skipped this because I wanted to avoid too much change-shock at once. Next sprint we’ll focus on that. For now, I felt that understanding the concept of value was more – well – valuable
  • Estimating the effort involved in the stories. Personally? I hate estimations. I think they’re useless most of the time. I’ve noticed that most of the time spent in a sprint planning session is wasted on planning poker, story points and ideal hours, and in the end they don’t help the project’s predictability in any noticeable way. Instead, we’ll measure, and seek a low-variance way to split the tasks. That will be better – and cheaper

In the End

All in all, it was a great first “agile day” for the team. It was fast, to the point, and everybody participated in every part of the session. Tomorrow I will work with them on building their Scrum-board, and walk them through their first “real” daily meeting. I can’t wait!

So, what do you think? Want to hear more such experiences? Is there anything I did that you like? Something I could have done better? Please drop a comment. Don’t be a stranger!

P.S. A special thank you goes to Mark Knopfler, lead vocals and guitar legend from the Dire Straits, for inspiring the titles of two sections, in his wonderful song.

Saturday, April 28, 2012

How to Manage a Self-Managing Team?

As we all know, there’s no shortage of confusing and counter-intuitive ideas and practices in the Agile world, but the idea of a self-managing team has to be one of the best. Indeed, the idea that a manager can manage a self-managing team sounds oxymoronic at best, and at worst – plain moronic. In this post I will try to clear the air and explain what it means for a team to be self-managing, and what the role of a manager of such a team is.
By the way, this post is the fourth in my series of Demystifying Agile. If you like this one, you may want to read the others.

What Does It Mean to Self-Manage?

A self-managing team, or as it is sometimes called, a self-organizing team, is, above all, a mature team. It doesn’t even have to follow the precepts of any agile methodology per se, though in fact most do. Probably something to do with maturity.
The first thing such a team does is to take charge of how they do their work, which is to say, they define the tasks they will have to complete in order to deliver the solution (requirement / user story / feature).
They will also be in charge of estimating the cost of their work, decide what work they can complete until the next milestone (or iteration, or sprint), and commit to it.
They will take control over their development process and the responsibility of improving it in order to develop a better product, while reducing the cost, as much as they can.
They will (to some extent) be in charge of identifying anything that may impede their progress, any issue that may stop them from completing a task or meeting a deadline, and either resolve what they can, or report to their superiors what is beyond them to solve.
Depending on whether or not the Product Owner is considered part of the team, they will define what needs to be done: user stories and priorities.
In short, one must merely point them at a goal, and they will do everything needed to achieve it!

So What Do I need a Manager for?

As a manager, you may worry that you have become obsolete. Indeed, all but one manager I’ve worked with have expressed some form of anxiety over this change and subconsciously hindered my efforts to help the team become self-managing. Incidentally, that one manager simply asked me to tell him what his new role would be in this new agile world. All I had to do was convince him that it would work (if you’re reading this, and you know who you are – thanks for making my job enjoyable!)
At any rate, you are not obsolete – in fact, you are needed more than ever; your team(s) need you to do those things that nobody else can do:
First off, there’s the whole matter of human resources and financing (some might say it is the same). No agile methodology that I know of, suggests that the team is responsible for the following:
  • Hiring new employees
  • Negotiating compensation and benefits
  • Purchasing equipment, software and other tools
  • Authorizing personal absences (see below however)
  • Anything else that is not directly related to the team’s work
  • Yes - Firing a team member that cannot or will not do his job properly

Regarding the matter of personal absences, a mature team could handle the matter of absences by themselves, accounting for planned absences when planning the iteration, and dealing with the consequences when they are unforeseen. The absences would still be accounted and authorized within what was agreed in the compensation package, but the manager might be relieved of having to deal with the day-to-day issues.
What else? In a traditional hierarchy, a manager is in charge of team leaders, who in turn are in charge of the team. In an agile world, the team leader is often replaced by the Scrum Master (or Agile Coach), who serves the team by removing impediments from its path.
An agile manager could serve as the Scrum Master of Scrum Masters. A Scrum Master who encounters an impediment that he cannot resolve needs someone to turn to. Enter the Agile Manager, the servant leader for the teams. With the backing of his position, the manager can do more to resolve the issues than the Scrum Masters might on their own.

All This Sounds Great, But How Do I Get There?

All of this may sound great in theory, but you know your team; they are simply not ready for all this responsibility. So you may wish to “thank” me for wasting your time, right?
Look back at the list of things that the self-managing team does. Now back at your tasks. Now back at the team’s. Look familiar? They are – or were – your tasks. Now look at the list again. These are their tasks; you were just handling their work for them!
Work with the team; relinquish these tasks one by one, and suggest that the team take them over. This can (and should) be part of their Kaizen, their continuous improvement.
Sooner or later – sooner, actually – you will find yourself the proud agile manager of self managing teams!
Now you will finally have the time to do all of those things that you never had the time to do.


The Agile Manager is the financier and builder of the self-managing teams he is in charge of. He is also the Scrum Master of Scrum Masters, and solves the issues that they cannot resolve on their own.
The path to self-management is not a short one, but neither is it a very difficult one. You can get there, one step at a time.
Try it, I believe that you will find it rewarding and liberating.
If you have any experience managing self managing teams, why don’t you leave a comment with a pearl of wisdom of your own? Be a sport.

Thursday, April 19, 2012

How to Reinstall the Fingerprint Reader on a Dell Laptop

I love my Dell Mobile Workstation. It’s a Precision M6600. Yeah. The big one. With (almost all) the works (no touch screen or SSD drive). Recently I reformatted it. I don’t mind, really – I was thinking of doing so anyway. A digital spring cleaning, if you will. Spring cleaning and decrapification.

The Story of My Woes

It took me about an hour or so to get Windows 7 up and running, and (most of) the drivers working. Just extracted the big giant drivers’ CAB for my machine, and pointed every unknown driver in the device manager at the root folder, and got it all up and running.

All but the finger print reader.

I couldn’t get it to work. There was no unidentified driver left, but there was no biometric device defined either.

Dell’s support site was no help either; I went through the site, having entered my Service Code, and got a list of drivers and applications for my machine. Nothing there pointed me in the right direction.

After a day or so, I found the fingerprint reader app from AuthenTec, and tried to get it to work, but I couldn’t. While I got Windows to identify my fingerprint reader module, and it read my fingers, I couldn’t get it to “enroll” my fingerprints.

And for some reason the interface really didn’t look familiar.

Took another day scouring the web, looking for a solution. I started worrying that I somehow damaged my device. And that really drove me nuts. I love using the FP reader to authenticate.

And then I found -

The Solution

Important note: This download is for the Dell Precision M-series Mobile Workstations for 64 bit Windows. Before installing any driver, firmware or application, verify that your machine make is supported!

In the end, no single blog had the solution for me. I had to mix parts of this and that, but here is the result:

  1. If you’ve already futzed with the FP reader, uninstall ControlVault and any fingerprint-related software
  2. Install the Dell Data Protection | Access -- Driver Package. You can get it here.
  3. Install the Dell Data Protection | Access -- Middleware Package. You can get it here.
  4. Install the Dell ControlVault Windows Biometric Framework Integrated. You can get it here.
  5. Install the Dell Data Protection | Access – Application Package. You can get it here.

You will be told to restart once or twice. Make sure you do so! It’s important.

That’s all there is to it. Unfortunately, these apps did not appear in my list of suggested apps. The installation instructions on Dell’s support site are misleading.

I hope you have an easier time of it than I had.


Friday, March 9, 2012

Why Should I INVEST in User Stories?

INVEST is another great acronym from the Computer Sciences industry, Department of Agile (yes, DOA, and I’m well aware – thank you very much). It stands for Independent, Negotiable, Valuable, Estimable, Sized (or Small), and Testable. Those are indeed valuable attributes of user stories – of any kind of requirements at all – and the acronym itself implies some effort on behalf of the creator of user stories (i.e. the Product Owner), with a potential return on investment.

But that isn’t good enough. A catchy name may be enough for most developers to try something new (seriously, when does a developer ever need a reason to try something new?), but not so for management. To many managers, however, this can sound like Impossible, Not-gonna-deliver, Vague, Everything-is, So-what, and Test-the-whole-goddamn-project!

In this post, inspired by a twitter-conversation I had a while ago, I hope to explain what this INVESTing in user stories really means, and why it is worth doing.


I is for Independent

In my opinion, writing independent user stories will give you your single greatest return on investment (no pun intended – this time). In one of my recent posts, I described how work items (stories, tasks, features, etc.) are far more likely to complete late rather than early if they have more than one dependency, while having just a single dependency (i.e. starting the new item depends only on completing the previous one) will increase the likelihood that they complete on time.

Of course, most of the work in writing good user stories is in this part as well. You are required to reimagine how you think of your work. You need to do away with layered one-shot thinking that is probably necessary in every industry except software. Since the actual construction of the software is for all intents and purposes near-instantaneous, you can (and should) think about vertical increments of value, when you come up with your requirements. Each such story should be worded so as to be independent of any previous (or forthcoming) stories.

In some cases, you may find it impossible to break a story’s dependencies, no matter how you try. In those cases, try to look at the story with its dependencies as a single story. Find the commonality of them and reimagine it.

To summarize – Independence in your stories will translate directly into increased probability of delivering on time!


N is for Negotiable

Let’s get one thing clear: this attribute is not a copout for developers! A story’s negotiability does not give developers the right to decide not to do it. The developers do not decide what the story will be, what its scope is, when it will be delivered, or what is required to make it complete. Oh, the developers can and should discuss the story and its impact with the PO, and the PO may decide to change the story based on developer input, but it is ultimately the PO’s right (and responsibility) to make any changes to the story.

The full meaning of negotiability is that the contents of the story (i.e. scope, priority, requirements) can be changed at any time, from the moment it is introduced up until the developers commit to it (i.e. the beginning of its sprint).

This is perhaps both the easiest attribute to add to your user stories, and the hardest. It is easy, because it requires no extra work from the PO or team. It is hard, because it may require you to let go of any preconceptions you may have of the story (i.e. those you had when you first wrote it). It requires you to allow people to move your cheese.

The return on this investment is quite valuable, however: your stories will be rejuvenated, and will be more valuable to the customers and users, because they have been recently re-evaluated and updated. This attribute is particularly synergetic with the next one:


V is for Valuable

You’d think that given the high premium on developers’ time, and the fact that most projects are running late as it is, it would go without saying that everything we develop is valuable. Unfortunately, that is not the case. According to the CHAOS Report by the Standish Group, fully 64% of developed functionality is rarely or never used!

All of these features have something in common: They were added without any consideration of their value or necessity to the users or the customers!

The valuable attribute of user stories helps reduce these numbers, by making sure that anything and everything the team develops has value to the users. When the PO comes up with a story, and cannot find a way to describe its value, it is possible that there is no value. Our tendency to add these valueless stories to the product comes from the traditional idea that you have to define everything upfront, or it won’t be in the project. Thanks to negotiability, it is possible to focus only on what is valuable.

This attribute also protects the system from developers’ desires to build a framework (we all love developing frameworks – it’s a conditioning or something), or do any work that doesn’t translate into value to the customer. If some technical work is required for some story, add the work as a task to that story, not as a technical story; this will more clearly radiate to management what is required to complete a story, and will give the PO the ability to consider this extra work and its impact when evaluating the story.


E is for Estimable

There is a lot of good literature out there on how to estimate properly. While there are several equally viable estimation techniques, everyone pretty much agrees that relative estimates are easier than absolute ones, and that smaller items are easier to estimate than larger ones. For more info on how to estimate properly, you can read my previous post on the subject, here.

There is more to it, though, than just the techniques. The story should be clearly scoped. There should be a beginning and an end.

Imagine a story such as “As a user, I want a text editor, so that I can take notes”. How long will it take to develop? I have no idea. If the user only needs a multi-line textbox with persistence capabilities, then I can probably have it done in a few hours (yes – a few hours, because nothing ever takes just 5 minutes!). If the user is expecting RTF capabilities with copy and paste, it’ll take longer, and if he’s looking to give MS Word some serious competition, it’ll probably take a few man-years (just guessing here; it’s too big for me to give a good estimate).

I see estimability as the culmination and grounding of the next two attributes, Size and Testability (see below).

S is for Size

“Sized Appropriately”, or “Small”, as some call it, means that the story is small enough to estimate. In Scrum and other time-boxed methodologies, it is expected that the story will be scoped in a way that makes it possible for the team to complete a few of them in one sprint – at the very least, they should be able to complete one story.

This attribute requires the PO to define in detail what it is that he wants the team to deliver. In many cases, the actual user stories that the team works on are the elaboration of the broad scopes defined in the “Epics” or “over-stories” or “super-stories”.

What the PO gains is the ability to release early and often. The team gets the ability to create less complex solutions, because the problems are smaller.

Remember, stories should be defined in terms of their business value, not the implementation details! The details are to be left to the developer tasks, and are of no interest to the users or customers.

T is for Testable

Testability basically means that it is possible to test the product to see if it fulfills the requirements of the user story. A testable user story has a finite and measurable set of acceptance criteria, by which it is possible to prove the validity of the product.

This last, but most certainly not least of attributes serves everybody who has a stake in the product:

  • By defining the story’s acceptance criteria, the PO has the opportunity to define more specific demands of the story than what appears in the “title” (“As a… I want… so that…”).
  • Both the developers and the QA gain an understanding of what is and isn’t in the story’s scope, thus reducing a great source of contention about whether something is a bug or a new feature (if I had a penny…)
  • Stakeholders gain an understanding of what they are going to receive by the release.

The acceptance criteria, like the rest of the story, should be worded in the product’s domain language, which means that they should be defined in the users’ terms, rather than the developer / QA terms.

The return on your INVESTment here should be obvious: The PO gains a tighter control of what will be produced, without unnecessarily elaborating on the technical solution, while the developers and QA have a better understanding of what they are required to do.

In conclusion

I think that every part of the INVEST model has value. Stories that you have invested in will be:

  • More likely to complete on time
  • Current and updated
  • Valuable to the customer and / or the users, with less unnecessary technical fluff
  • Clearer, with less dissonance between POs, developers and testers
  • Less complex to develop, easier to test, and quicker to get feedback as well as quicker to deploy
  • Easier to prove and validate and quicker to gain acceptance by the customer / user

That’s all. I think it’s a worthy goal, well worth the cost. What say you?

Sunday, February 26, 2012

Proving Pair Programming: How and Why it Works

In the first two posts in the Demystifying Agile series, I wrote about how to plan a release with Scrum and how to use agile estimation techniques in a Scrum project. Delving deeper into the agile software development process, there is one practice that is by far the least popular with traditional managers, and even with many developers: pairing on development tasks. In this post I will attempt to clear the air a bit, explaining why it works and how to make it work for you.

First, a story…

Roughly five years ago I had to rewrite a project from scratch. The details that led to that point are of lesser import, but what is important is that it was a codebase that took three developers eight months to develop and it had four months worth of ugly patches that sorely needed to be refactored. And a few breaking changes in the requirements. And I had 30 days. And two developers (myself included).

Rather than splitting the work, we decided to pair. We paired for 6-7 hours out of eight working hours every day.

The result was more robust, extensible and better performing than before.

And we did it in 20 days.

Why did it work?

From a strictly mathematical standpoint, it shouldn’t have been possible. I’ve rewritten projects before, and while a rewrite often doesn’t take as much time as the original development did, it doesn’t take a lot less time either. We took the work of 24 man-months, and completed it in 40 man-days.

Here are what I believe to be the reasons we achieved the results:

  1. With two developers participating equally on the project, we had immediate code review as we developed. It is (or should be) a well-known fact that the shorter the feedback cycle, the easier it is to act on the feedback. With near-instantaneous feedback, the improved quality is simply baked in.
  2. Two developers have two different sets of experiences, bringing in different points of view. Any iffy idea is immediately examined. You don’t get to sip your own Kool-Aid when you pair.
  3. With another developer near you all day (or most of the day), you are more focused on your work. You don’t go off on a tangent, reading up on a new technique or gizmo, and you don’t get lost in the quagmire of email: You explicitly decide which distractions to acknowledge and which to postpone. And if you don’t, your pair will help you.
  4. It’s much more difficult to cut corners with someone else (your pair) looking over your shoulder; your guilt will stop you, or his will. Any corners cut (or conversely, taking too long to complete a task) will only be the result of an explicit decision by both of you. Explicitness tends to cut down on occurrences.
  5. Two-legged-distractions (a.k.a corporate-drive-by incidents, a.k.a. managers and coworkers) tend to bother you less when you’re sitting with someone else. They often go look for someone easier to single out if possible. There’s strength in numbers. If that fails, the fact that you’re already sitting with someone will give you a good valid excuse to ignore the distraction.

As you can see, these observations, my experiences, are not easily measurable. They are, however, no less real.

How to make it work for you

First, forgive me for the following copout: YMMV (Your Mileage May Vary); nothing I can write is guaranteed to work for everybody. But I do expect the following ideas to work for most developers who are willing to give it a shot:

  1. Give it a shot. Seriously, it won’t bite. And you can always quit (though winners don’t quit, and quitters don’t win).
  2. Don’t do it the whole day, and don’t work a long day. Pairing is very draining because it demands a focus far greater than you are likely to achieve on your own. Work for no more than 8-9 hours a day. Pair for no more than 6-7 of them. Leave an hour or so for email, meetings, easy no-brainer tasks, and other nuisances.
  3. Pair with someone who could review your work. Offer to pair with that developer on his task after that.
  4. Don’t hog the keyboard. Switch every now and then. Otherwise you have one developer and a bored observer who’s wasting his or her time.
  5. Don’t declare it – just do it. Pairing is a hard sell. Results aren’t. Besides, it is usually easier to get forgiveness than it is to get permission.

I hope this post helps you add a trick to your agile bag. It worked wonders for me.



Sunday, February 19, 2012

Do Agile Estimation Techniques Really Account for Scrum Projects’ Successes?

After several years of experience working with, training, and coaching teams on Scrum, I have no doubt that agile (empirical) estimation techniques are better than classic, predictive estimation techniques. One question kept nagging me over time, though: does using story points, T-shirt sizes, or ideal hours truly explain the increased success of Scrum-based projects?

It took me a while, but I finally figured it out. And the answer is a definite ‘NO’!

All Estimation Techniques are Merely Guesses

At the end of the day (or at least, at the end of the planning session), any estimation is just a guess: sometimes it’s more, and sometimes it’s less. This means that the distribution of the actual time versus the estimation should be pretty much normal: over a large number of tasks, about half of the actual results will be less than the estimation, and half will be more. On average, according to the Law of Large Numbers, it evens out.

The better the estimation technique is, the steeper the curve will be, i.e. the closer actual results will be to the estimation.

My experience is that agile techniques are in fact better at predicting the actual results, but the fact remains that if you make enough of these predictions, then regardless of the technique, the errors should cancel each other out and the total should settle on the sum of the estimates!

This means that the quality of the technique used to estimate each task has little or no impact on the success of estimating the time to complete the entire project!

Then Why Do Estimations Fail?

If estimations behave “normally”, whole-project estimations should not fail. I don’t buy into all that “programmers are notoriously optimistic” sentiment; I’m a developer myself and there is not a single soul alive who will call me optimistic, and yet the non-agile projects I worked on overran their deadlines, like everybody else’s. Something must be out there that either changes the normal distribution, invalidates the law of large numbers, or both. And it has to be something that is present in classic waterfall-ish projects but not in agile ones.

It Is All About the Independence!

The main difference, in my opinion, between agile and non-agile projects is how they are planned and constructed. An agile project plan is basically a stack of user stories, with no* dependencies between them, whereas a waterfall project plan is a Gantt chart describing tasks and their interdependencies.

Why Is This Important?

Let’s assume a task that has two dependencies. Each was estimated, and each has a normal distribution of the actual time to complete versus the estimated time. The following figure describes such a “plan”:


Each task has a 50% chance of being completed early and a 50% chance of being completed late (there is, of course, no statistical chance of being completed exactly “on time”). Now, let’s say we’re interested in calculating the probability of Task C being completed early, or at least not finishing late. For the sake of ease, we’ll assume all three tasks have the same estimated duration. For Task C to begin early, it needs both of its dependencies to complete early. Since the chance of each completing early is 50%, the chance for both is (0.5 * 0.5) = 0.25, or 25%. The chance of this not happening, i.e. of at least one of them completing late, is 1 – 0.25, or 75%. This means that there’s a 75% chance that Task C will start late, and it still has its own probability of running late, which means that the chance of Task C completing late (and making the whole project run late) is more than 75%!
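To make the arithmetic concrete, here is a quick Monte Carlo sketch (the class and method names are mine, purely illustrative) that reproduces the 75% figure:

```java
import java.util.Random;

public class DependencyLateness {
    // Monte Carlo estimate of the chance that a task with two independent
    // predecessors starts late, when each predecessor is late with p = 0.5.
    static double lateStartProbability(int trials, long seed) {
        Random rnd = new Random(seed);
        int lateStarts = 0;
        for (int t = 0; t < trials; t++) {
            boolean aLate = rnd.nextBoolean(); // did Task A finish late?
            boolean bLate = rnd.nextBoolean(); // did Task B finish late?
            if (aLate || bLate) lateStarts++;  // C must wait for the slower one
        }
        return (double) lateStarts / trials;
    }

    public static void main(String[] args) {
        // Converges on 0.75, matching the 1 - (0.5 * 0.5) calculation above
        System.out.printf("P(Task C starts late) = %.3f%n",
                lateStartProbability(1_000_000, 42));
    }
}
```

The simulation only confirms the back-of-the-envelope math, but it makes it easy to experiment with more predecessors or different lateness probabilities.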

It is easy to see that the more complex a project is, and the more dependencies each task has, the greater the chance of it being late; the distribution of the actual duration of tasks with dependencies is skewed towards lateness.

Scrum User Stories Have No Dependencies

With this in mind, Scrum user stories have no* interdependencies. If two stories do have a dependency, they should be joined and treated as one (and if the result is too big, it can and should be split again, though in a way that doesn’t create dependencies). With no dependency (or, if you wish, a single dependency that is the completion of the prior story), we get a plan that looks like this:


The chance of B starting early, which is the same as the chance of A ending early, is 50%. The chance of B completing early is still 50%! Even better, the variance is reduced, which means that there is an increased probability of the total project duration being close to the estimate (i.e. a steep probability distribution).
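To see this concretely, here is an illustrative simulation (the 0.25 noise figure and all names are arbitrary assumptions of mine): a pure sequence of unbiased estimates stays centered on the total estimate, while a fork-join plan with the same critical-path estimate overshoots it.

```java
import java.util.Random;
import java.util.function.ToDoubleFunction;

public class PlanShapes {
    // One task: estimated at 1.0 units, actual duration has symmetric noise.
    static double task(Random rnd) {
        return 1.0 + rnd.nextGaussian() * 0.25;
    }

    // A sequence of n independent stories: durations simply add up.
    static double chain(Random rnd, int n) {
        double total = 0;
        for (int i = 0; i < n; i++) total += task(rnd);
        return total;
    }

    // n parallel predecessors feeding one successor: the successor
    // must wait for the slowest predecessor before it can start.
    static double forkJoin(Random rnd, int n) {
        double slowest = 0;
        for (int i = 0; i < n; i++) slowest = Math.max(slowest, task(rnd));
        return slowest + task(rnd);
    }

    static double meanDuration(ToDoubleFunction<Random> plan, int trials, long seed) {
        Random rnd = new Random(seed);
        double sum = 0;
        for (int t = 0; t < trials; t++) sum += plan.applyAsDouble(rnd);
        return sum / trials;
    }

    public static void main(String[] args) {
        // Both plans have a critical-path estimate of 2.0 units,
        // yet only the independent chain averages out to 2.0.
        System.out.printf("chain of 2 stories:     %.3f%n",
                meanDuration(r -> chain(r, 2), 100_000, 7));
        System.out.printf("fork-join (2 preds + 1): %.3f%n",
                meanDuration(r -> forkJoin(r, 2), 100_000, 7));
    }
}
```

Even though every individual estimate here is unbiased, the `Math.max` in the fork-join path systematically pushes the expected total past the estimate; the chain has no such mechanism, so its errors cancel.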

What Does This Mean?

This means that while a project with multi-dependent tasks has an increased skew towards lateness with each added task, a project with no dependencies (beyond sequence) has an increased tendency towards the mean, i.e. the estimate.

This means that by the very nature of what we are estimating, agile projects tend to succeed (i.e. be on budget and on time) more often than non-agile ones.

Moreover, the actual estimation technique doesn’t matter – it is the estimation of independent units of work that makes the difference.

Therefore, my conclusion and advice is that the very first change a team needs to make is to start working on independent features – call them user stories, features, or MMFs – it doesn’t matter; just keep them independent of each other!

Hope this helps,


Monday, February 6, 2012

How to Write Automated Tests for Console Applications

I don’t know about the rest of you, but I still find myself writing a lot of console applications. Most of the time these are utilities, services, or some kind of customization or process-automation scripts. Until recently, if I wanted to write automated tests for these, I would have to separate the business logic from the program itself and test the functions. While this is obviously a good practice for any moderately complex project, it sometimes might be overly complicated for what is needed. Moreover, it doesn’t work for black-box user acceptance tests.

The System.Console Class

In the .NET Framework console apps revolve around, well, the Console class. You get user input with Console.Read() (and ReadLine(), and ReadKey(), and so on). Conversely, you send output to the user with Console.Write() and Console.WriteLine().
So let’s take a look at the code inside the .NET Framework. By the way, I used Telerik’s JustDecompile to browse the code. If you don’t already have a decompiler of choice, I suggest you take a look at it. It’s simple, elegant and free (and no, I don’t own stock).

So, here's the code behind Console.ReadLine():
   public static string ReadLine()
   {
       return Console.In.ReadLine();
   }

And here’s Console.WriteLine():

   public static void WriteLine()
   {
       Console.Out.WriteLine();
   }

As it turns out, Console’s read and write methods are merely pass-through methods that call the same methods on the In and Out TextReader and TextWriter properties (again - respectively).

The text reader and writer classes, as so eloquently put in the class descriptions on MSDN, “represent a reader [and writer] that can read [and write] a sequential series of characters”. Well, that still is more helpful than some descriptions I’ve seen... Anyway, they are the abstract parents of the string and stream reader (and writer) classes. Why is this important? Well, the In and Out properties are by default set to the keyboard input and screen output streams.

Now, if only there was a way to redirect those streams…

Enter Console.SetIn() and Console.SetOut() respectively. Hmm, that’s a lot more respect than I usually show in a blog post…

With SetIn, I can redirect the In text-reader to a StringReader, and with SetOut, I redirect the output to a StringWriter.

I can now, quite easily initialize the string reader to a value that contains whichever input, or set of inputs I want to use, and the console application will read characters or lines as needed. I can then read the output of the string writer, and compare it with expected results.

And the most beautiful part of it is that there is (almost) no need to modify the console application in any way for this to work.


The following example was written in C#, using MS-Test (forgive me) to test the code. The application is a simple one that plays the game 7-Boom (a local variation of the game BizzBuzz). As you will see, except for making the Program class and Main() method public in lines 1 and 3, so that I can reach the code from another (test) assembly, I did nothing I wouldn’t do in any other console application:

   1: public class Program
   2: {
   3:     public static void Main()
   4:     {
   5:         Console.Write("Enter upper limit: ");
   6:         var range = int.Parse(Console.ReadLine());
   8:         for (var i = 0; i < range; i++)
   9:         {
   10:             var num = ((i % 7 == 0) || i.ToString().Contains("7")) ? "BOOM" : i.ToString();
  11:             Console.Write("{0}, ", num);
  12:         }
  13:     }
  14: }

And here is a test:

   1: // Arrange
   2: using (var sw = new StringWriter())
   3: {
   4:     using (var sr = new StringReader("100"))
   5:     {
   6:         Console.SetOut(sw);
   7:         Console.SetIn(sr);
   9:         // Act
  10:         SevenBoom.Program.Main();
  12:         // Assert
  13:         var result = sw.ToString();
   14:     Assert.IsFalse(result.Contains("7"));
  15:     }
  16: }

That’s it! I set the input that the user would have entered in line #4, and redirect the I/O in lines 6-7. All that remains is to assert, in line #14, that the output meets the expectations.

By the way, this works exactly the same in Java as it does in C#. The difference is that in Java you will use System.setIn() and System.setOut() to replace the standard input and output streams (the equivalents of the In and Out properties used by Console.Read and Console.Write).
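For illustration, here is a minimal sketch of that Java version (the tiny "application" and the helper below are my own hypothetical example, not from any library):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.PrintStream;
import java.util.Scanner;

public class ConsoleRedirect {
    // A trivial console "application": reads a number, prints its square.
    static void app() {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        System.out.println(n * n);
    }

    // Black-box test helper: feed canned input, capture everything printed.
    static String run(String input) {
        InputStream oldIn = System.in;
        PrintStream oldOut = System.out;
        ByteArrayOutputStream captured = new ByteArrayOutputStream();
        try {
            System.setIn(new ByteArrayInputStream(input.getBytes()));
            System.setOut(new PrintStream(captured, true)); // autoflush
            app();
        } finally {
            System.setIn(oldIn);   // always restore the real streams
            System.setOut(oldOut);
        }
        return captured.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(run("12")); // the app squared our canned input
    }
}
```

The try/finally restore matters: test runners share the JVM’s System streams across tests, so a forgotten restore can swallow the output of everything that runs afterwards.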

So now you can write test-driven (and behavior driven) console applications with greater ease.

Hope this helps,

