In our software engineering class, we've been reading Waltzing with Bears (by DeMarco and Lister), a book about risk management. It's geared towards software development, but a lot of the ideas would be the same no matter what kind of project you're running.
The basic idea of risk management is this: the odds that nothing "unexpected" will come along to delay your (non-trivial) project are very small. By not acknowledging that, you are lying to yourself and to other stakeholders, and unjustifiably relying on luck to finish within the time and budget you've allotted.
When most people are given a project and asked how long it will take, they either guess (bad) or estimate (better) under the assumption that nothing will go wrong. In the software industry, there are tools like COCOMO for estimating how long a project should take. In your own projects, you probably have a sense of how long things should take under normal circumstances. This is the date most people report as their answer.
DeMarco and Lister call this "the nano-date": since it's essentially the earliest date you could possibly finish, the odds of finishing before it are nil, and the odds of finishing exactly on it are vanishingly small. According to their research, the distribution of completion dates looks like this:
I wasn't able to label the vertical axis, but it represents the probability of finishing on a particular date. The area under the curve adds up to 100% - you're certain to finish eventually, but the exact date is uncertain. N is the nano-date, and from the graph you can see why the odds of finishing on that date are so small. The right skew (the hump sits near the left edge and the curve slopes off gently to the right) reflects the fact that projects finishing before their most likely date (the top of the hump) tend to finish only a little before it, while late projects can drag out forever.
According to their research, the range of very likely completion dates goes from about 250% to 300% of N. That is to say, if your "best case" says your project will take 10 months, it will probably in reality take 25 to 30 months.
That sounds terrible! But turn it around: given a realistic estimate of when your project will probably be done, it might finish in as little as a third to two-fifths of that time. Your project could come in way early if you do a good job of estimating the probable effects of things that might go wrong and use those estimates to pad your original distribution curve properly. And if you apply proper risk management, you can also avoid, mitigate, or contain your risks, but that's for another post.
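To make that shape concrete, here's a minimal sketch (my own illustration, not from the book) that treats completion time as a right-skewed, log-normal multiple of the nano-date and reads off a few percentiles. The distribution shape and its parameters are assumptions, chosen only so the median lands in the 250%-300%-of-N range:

```python
import numpy as np

# Illustration of a right-skewed completion-time distribution.
# The log-normal shape and parameters are my assumptions, not figures
# from DeMarco and Lister; they're tuned so the median is ~2.7x N.
rng = np.random.default_rng(0)

nano_months = 10  # the "nano-date" N: the earliest you could plausibly finish
durations = nano_months * rng.lognormal(mean=1.0, sigma=0.35, size=100_000)

print(f"Chance of finishing by N ({nano_months} months): "
      f"{(durations <= nano_months).mean():.2%}")
print(f"Median completion:           {np.percentile(durations, 50):.1f} months")
print(f"85th-percentile commit date: {np.percentile(durations, 85):.1f} months")
```

Under these made-up numbers, the chance of hitting the nano-date is a fraction of a percent, while a commit date drawn from a high percentile of the distribution is one you can actually stand behind.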
Meanwhile, here are the top five risks DeMarco and Lister have identified for software projects, in order from worst to, uh...best? (They studied how much harm each factor typically does to a project, and the ordering is based on that average damage.)
schedule flaw: the original schedule estimate (whence N and the rest of the unrisked distribution should derive) was bankrupt to begin with - not because other things went wrong later, but because it was an incompetent estimate of how long such a project should take in your organization.
requirements creep: the client (internal or external) keeps adding new things they want the product to do. The U.S. Department of Defense did a study estimating that the size of a software project grows by about 1% per month (see the quick calculation after this list).
turnover: important people leave your organization, and that messes up your schedule.
specification breakdown: negotiations with your client totally break down, and the whole project is cancelled. DeMarco and Lister have found this happens to about 1 in 7 projects. Obviously this isn't an incremental risk like the others - it's just a flat risk. Once you get past a certain point - where everything important has been absolutely nailed down and signed by all parties - this risk disappears.
underperformance: the people working on your project don't work as effectively as they could reasonably be expected to. This risk actually breaks even - sometimes you get overperformance instead (as you'd expect, given the nature of estimation).
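Back to that requirements-creep figure for a moment: 1% per month sounds tiny, but it compounds over the life of a project. A quick back-of-the-envelope calculation (my arithmetic, not part of the DoD study):

```python
# Compound growth of requirements at roughly 1% per month (illustrative only).
monthly_growth = 0.01
for months in (12, 24, 30):
    growth = (1 + monthly_growth) ** months - 1
    print(f"After {months} months: requirements up about {growth:.0%}")
```

So a project that runs 30 months ends up roughly a third bigger than what was originally specified.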
The two takeaway ideas I have from this so far are...
1. Don't give the nano-date as your estimate of when a project will finish, no matter the size of the project. Recognize that there is a distribution of when you might finish, and use a more likely point in that distribution as your commit date. (A useful heuristic: given your estimate, is there a good chance you'll finish early? If not, your estimate is a lie.)
2. You can build a list of risks (perhaps from problems encountered on previous projects) and use estimates of their average effects to pad your estimates - see the sketch below.
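As a rough illustration of takeaway 2, here's a sketch that pads a nano-date with an expected-value risk reserve. The risk list, probabilities, and impact numbers are invented for the example; in practice you'd pull them from your own organization's history, as the book suggests:

```python
# Hypothetical example of padding an estimate with a risk reserve.
nano_months = 10.0

# (risk name, probability it materializes, average delay in months if it does)
risks = [
    ("schedule flaw",      0.5, 6.0),
    ("requirements creep", 0.8, 3.0),
    ("turnover",           0.3, 4.0),
    ("underperformance",   0.5, 2.0),
]

# Expected delay = sum of probability-weighted impacts.
reserve = sum(p * impact for _, p, impact in risks)

print(f"Unrisked estimate: {nano_months:.1f} months")
print(f"Risk reserve:      {reserve:.1f} months")
print(f"Padded estimate:   {nano_months + reserve:.1f} months")
```

This only captures the incremental risks; an all-or-nothing risk like specification breakdown is better handled by deciding whether the project should proceed at all.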
Soon I'll try to post about strategies for dealing with the risks themselves.