Thursday, April 19, 2007

The Nano-Date

In our software engineering class, we've been reading Waltzing with Bears (by DeMarco and Lister), a book about risk management. It's geared towards software development, but a lot of the ideas would be the same no matter what kind of project you're running.

The basic idea of risk management is that the odds of nothing "unexpected" occurring to delay your (non-trivial) project are very small; by not acknowledging that, you are lying to yourself and other stakeholders and unjustifiably relying on luck to let you finish within the time and budget you've allotted.

When most people are given a project to accomplish, and asked how long it will take, they either guess (this is bad) or estimate (this is better) using an assumption that nothing will go wrong. In the software industry, there are tools like COCOMO for estimating how long a project should take. In your own life of doing projects, you probably have a sense of how long things should take you under normal circumstances. This is the date most people report as their answer.

DeMarco and Lister call this "the nano-date": it's basically the earliest date you might possibly finish, so the odds of finishing before it are nil, and the odds of finishing exactly on it are very, very small. According to their research, the distribution of completion dates looks like this:

[Graph: probability of finishing on each possible date - a right-skewed curve with the nano-date N at its far left edge]
I wasn't able to label the vertical axis, but it represents the probability of finishing on a particular date. The area under the curve adds up to 100% - you're certain to finish overall, but the exact date is uncertain. N is the nano-date, and from the graph you can see why the odds of finishing on that date are so small. The right skew (the way it's higher on the left side and then slopes more gently on the right) is because projects that finish before their "most likely date" (the top of the hump) are likely to finish only a little before, while late projects can drag out forever.

According to their research, the range of very likely completion dates goes from about 250% to 300% of N. That is to say, if your "best case" says your project will take 10 months, it will probably in reality take 25 to 30 months.
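To make that shape concrete, here's a small Python sketch. The lognormal shape and every number in it are my own illustrative assumptions, not data from the book; they're just tuned so that most simulated outcomes land roughly in that 250%-300% window:

```python
import numpy as np

# Toy model of the completion-date distribution described above.
# The lognormal shape and all constants are illustrative assumptions,
# tuned so most outcomes land around 250%-300% of the nano-date N.
rng = np.random.default_rng(42)

N = 10.0  # nano-date: best-case estimate, in months
overrun = rng.lognormal(mean=np.log(1.75), sigma=0.2, size=100_000)
completion = N * (1.0 + overrun)  # always later than N, with a long right tail

print(f"chance of finishing by N ({N:.0f} months): "
      f"{np.mean(completion <= N):.2%}")  # essentially zero
print(f"median completion: {np.median(completion):.1f} months")
print(f"80% of outcomes fall between {np.quantile(completion, 0.10):.1f} "
      f"and {np.quantile(completion, 0.90):.1f} months")
```

Histogram the `completion` array and you get the same steep-on-the-left, long-tail-on-the-right hump as the graph.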

That sounds terrible! But if you turn it around, it means that, given a realistic estimate of when your project will probably be done, it might be done in as little as a third to a half of that time. Your project could come in way early if you do a good job estimating the probable effects of things that might go wrong and use those estimates to pad your original distribution curve properly. And if you apply proper risk management, you can also avoid, mitigate, or contain your risks, but that's for another post.

Meanwhile, here are the top five risks DeMarco and Lister have identified for software projects, in order from worst to uh...best? (They studied how much harm these factors usually do to a project, and this is based on that average damage.)

schedule flaw: the original estimate of the schedule (from which N and the rest of the unrisked distribution should derive) was totally bankrupt to begin with - not because other things went wrong later, but because it was just an incompetent estimate of how long such a project should take in your organization.

requirements creep: the client (internal or external) keeps adding new things they want the product to do. The U.S. Department of Defense did a study in which they estimated that the size of a software project grows by about 1% per month.

turnover: important people leave your organization, and this messes up your schedule.

specification breakdown: negotiations with your client totally break down, and the whole project is cancelled. DeMarco and Lister have found this happens to about 1 in 7 projects. Obviously this isn't an incremental risk like the others - it's just a flat risk. Once you get past a certain point - where everything important has been absolutely nailed down and signed by all parties - this risk disappears.

underperformance: the people working on your project don't work as effectively as they reasonably would be expected to. This risk actually breaks even - sometimes you get overperformance instead (as you'd expect, given the nature of estimation).

The two takeaway ideas I have from this so far are...

1. Don't give the nano-date as your estimate of when a project will finish, no matter what the size of the project is. Recognize that there is a distribution of when you might finish, and use a more likely point in that distribution as your commit date. (A useful heuristic: given your estimate, is there a good chance you'll finish early? If not, your estimate is a lie.)

2. You can get a list of risks (perhaps from problems encountered on previous projects) and use estimates of their average effects to pad your own estimates - see the sketch below for one rough way to do that.
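Here's a minimal sketch of what that padding could look like as a quick Monte Carlo pass. This isn't the book's own method or tool - the base estimate and every per-risk rate and distribution below are made-up placeholders you'd replace with numbers from your own past projects:

```python
import random

# Minimal Monte Carlo sketch of takeaway #2.  The base estimate and every
# per-risk rate/distribution below are illustrative assumptions, not the
# authors' figures; replace them with data from your own past projects.
random.seed(1)

BASE_MONTHS = 10.0  # unrisked estimate, close to the nano-date
TRIALS = 500

def one_run():
    """Simulate one project; return months to finish, or None if cancelled."""
    months = BASE_MONTHS

    # schedule flaw: the base estimate itself may simply be too optimistic
    months *= random.uniform(1.0, 1.6)

    # requirements creep: scope grows roughly 1% per month (the DoD figure above)
    months *= 1.01 ** months

    # turnover: each key departure costs some extra calendar time
    months += random.randint(0, 2) * random.uniform(0.5, 2.0)

    # under/overperformance: roughly breaks even on average
    months *= random.gauss(1.0, 0.1)

    # specification breakdown: a flat ~1-in-7 chance the project is cancelled
    if random.random() < 1 / 7:
        return None
    return months

results = [one_run() for _ in range(TRIALS)]
finished = sorted(m for m in results if m is not None)

print(f"cancelled outright: {results.count(None) / TRIALS:.0%} of runs")
print(f"median completion (finished runs): {finished[len(finished) // 2]:.1f} months")
print(f"finished within 15 months: {sum(m <= 15 for m in finished) / TRIALS:.0%} of runs")
```

The point isn't the single median number it prints; it's that you commit to a date drawn from a comfortable point in the resulting distribution rather than from the nano-date.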

Soon I'll try to post about strategies for dealing with the risks themselves.

9 comments:

rvman said...

My immediate thought is that college 'all-nighters' stem from a) procrastination until b) N days/hours before the due date. And then there is the extension, requested when "N hours" includes time normally used for sleep.

I generally managed to do good estimates, but I'd start my project at about the point where finishing on time was only certain if I skipped sleep, meaning a probable 'half-nighter' at least. I'd say the average time of completion for a paper of less than 10 pages in my undergraduate tenure was about 3AM. (My written papers got pretty decent grades - low A/high-mid B. It was the math stuff that I never managed to do well. So I chose to major in, at various times...Engineering, Statistics, and Economics. Not only was I a lousy student, I was a masochistic or stupid one as well.)

Sally said...

rvman, wouldn't you agree that a primary problem with your undergraduate quant classes was lack of effort rather than ability? After all, you did somehow manage to get the top grade in half a dozen PhD level econometrics classes. I won't tolerate you defining the standards downward in this way. If you have "never managed to do well" at "math stuff," my own quant future is dire indeed.

Sally said...

How do the 100% completion rate in the graph and the risk of specification breakdown interact? Or is the graph just definitionally showing dates for projects that are completed?

Right now, requirements creep, turnover, and underperformance are plaguing a major (non-software-development, obviously) project of mine. I am basically praying for nothing to go seriously wrong from here on out and for performance to pick up as we come closer to the deadline (when the Ps in the team finally start contributing a lot just as the Js are flagging?), which is scary but maybe not inappropriate. I think of this phenomenon as "making up time in the air."

I know this may not be applicable to my project, but do they have any data/insight on where the divergence between schedules and actual accomplishments tends to be greatest? For instance, when you are 50% through the work on the project, have you typically experienced 50% of the problematical aspects, or is a larger or smaller amount of crap still waiting for you in the last half?

Tam said...

The graph I mimicked is one from pretty early in the book, where they're just trying to tell you about risk management in general. The graphs later in the book have a rectangular area to the right of the main curve, taking up about 1/7th of the total area, that represents total project failure due to specification breakdown.

I don't remember reading in the book about where stuff shows up (except that it's better if project killers show up early so you can kill the project and move on), but in class I got the impression that risks start to really manifest about midway through the project, which is often where ambiguities, shortcuts, things swept under the rug, etc., start biting you.

Their own risk assessment tool Riskology (freely available) uses Monte Carlo simulation to...do something or other. Their recommended way of presenting these results to management, recognizing that management may not be tech-savvy, is to say something like, "We ran this project 500 times in the simulator, and it was finished by March 7% of the time."

That's not germane to what you asked, I just thought it was interesting.

Tam said...

One problem I have run into with math is that, at the point where it gets hard, you really can't just power your way through it the way you can with a paper.

Don't get me wrong - papers are much improved if you start on them early, so that when you get 3/4 done and in the middle of the next night realize how you need to totally redo them, you still have time to do that.

But you can power your way through a paper and end up with something to turn in that, given sufficient talent, will get you a decent grade. If someone told me I was required to turn in a 5-page paper on the Battle of Waterloo in 12 hours, I bet I could produce at least a B paper in that time.

But in the math courses I'm taking lately, if I don't start early I'm just screwed, because I need a certain amount of time to just process the problems, and that time only starts running after I have made attempts at solving them. Writing tricky code is the same way - you have to start early to get results. A small coding project might only take 12 hours, but you can't do it in 12 consecutive hours - you need 2 hours every other day for 2 weeks, perhaps.

Anonymous said...

Boo !
< hug >

rvman said...

The problem in my undergrad quant classes was that I started too late and gave up too early. I could have done them had I given them the attention they needed, but I didn't.

rvman said...

My first comment was a bit unclear. It was estimating needed time I never got right. So I'd start too late, or I'd get frustrated and give up when I couldn't process the problem right away. I had - literally - zero experience with intellectual endeavors which I didn't 'get' instantly, so the need to struggle and find the 'trick' was foreign to me, and I reacted poorly. The 'second time through', in grad school, I had a better understanding of what was needed.

Tam said...

I totally get what you mean about being used to getting things instantly and then giving up when you don't. I'm finally getting over that these past couple of years. I've now had at least half a dozen academic situations where I needed to apply effort - mental effort over time, not just physical "busywork" effort - to grasp something.