Phillip G. Armour's June 2007 installment of his quarterly column The Business of Software in Communications of the ACM was "Twenty Percent: Planning to Fail on Software Projects" (full text for a fee). I flagged the article when I first read it, but am only now making time to write about it.
I love his tone and his take on the industry's inability to plan for success. He even does some back-of-the-envelope analysis of how the average probability of "success" drops so consistently to 20%, factoring in plenty of optimism on the part of management, and reaches the following conclusion:
So the probability of success for the project starts out at a nominal 50%. Each optimistic assumption that is postulated removes resources from the project and hides the risk in the assumption. This has the effect of reducing the overall probability of completing the project using the assigned resources. When [the probability] approaches a 20% likelihood of success, the estimators and team members can finally muster a sufficiently forceful argument that any further reduction is simply too unlikely and that, despite further pressure, they cannot be persuaded to buy into any further reduction.
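Armour doesn't give an explicit formula, but the dynamic he describes can be sketched as a toy model: start at a nominal 50% chance of success, and treat each optimistic assumption as shaving a fixed fraction off that probability until the team finally pushes back near 20%. The 17%-per-assumption reduction below is my own illustrative number, not Armour's.

```python
# Toy model (my assumption, not Armour's actual math): each optimistic
# assumption multiplies the project's success probability by a constant
# factor, until it falls to the point where the team refuses further cuts.
def assumptions_until_pushback(p_start=0.5, factor=0.83, floor=0.2):
    """Count how many optimistic assumptions get absorbed before the
    estimated probability of success drops to the pushback threshold."""
    p, count = p_start, 0
    while p > floor:
        p *= factor
        count += 1
    return count, p

n, p = assumptions_until_pushback()
print(f"{n} optimistic assumptions bring success down to {p:.0%}")
```

With these made-up numbers, about five rounds of optimism are enough to erode a coin-flip project down to Armour's 20% floor; the point is less the specific factor than how quickly small, individually reasonable assumptions compound.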
My further question is this: if the industry "knows" about this problem from so many sources, why does it persist? Does each company think it doesn't have the problem, or that it is someone else's issue? Are the surveys that report such high failure rates simply wrong?