History is the Key to Estimation Success

Posted: March 14, 2016 | By: Kate Armel

Expert Judgement vs. Empiricism

Regardless of which estimation methods are used in your organization, uncertainty and risk cannot be eliminated and should never be ignored. Recognizing and explicitly accounting for the uncertainties inherent in early software estimates is critical to ensure sound commitments and achievable project plans.

Measures of estimation accuracy that penalize estimators for being “wrong” when dealing with uncertain inputs cloud this fundamental truth and create powerful disincentives to honest measurement. Recording the difference between planned and actual outcomes is better suited to quantifying estimation uncertainty and feeding that information back into future estimates than it is to measuring estimation accuracy.
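
To make that concrete, here is a minimal sketch of the feedback loop, assuming nothing more than a plain list of completed projects with planned and actual effort. The project names, figures, and quartile-based range are illustrative assumptions, not QSM data or a prescribed QSM method.

    # Minimal sketch: log planned vs. actual effort for completed projects,
    # summarize the historical ratio of actual to planned, and apply that
    # distribution to a new point estimate to get a range instead of a number.
    # Project names and figures are made up for illustration.
    from statistics import median, quantiles

    # (project, planned effort in person-months, actual effort in person-months)
    completed_projects = [
        ("billing rewrite",  40, 58),
        ("mobile app v2",    25, 31),
        ("data migration",   60, 66),
        ("reporting module", 18, 27),
        ("API gateway",      30, 42),
    ]

    # A ratio above 1.0 means the project took more effort than planned.
    ratios = [actual / planned for _, planned, actual in completed_projects]
    p25, p50, p75 = quantiles(ratios, n=4)

    def estimate_range(point_estimate):
        """Turn a single-number estimate into a range using historical overruns."""
        return point_estimate * p25, point_estimate * p50, point_estimate * p75

    low, likely, high = estimate_range(35)  # new project estimated at 35 person-months
    print(f"Median actual/planned ratio: {median(ratios):.2f}")
    print(f"35 person-month estimate -> roughly {low:.0f} to {high:.0f}, most likely {likely:.0f}")

The same approach extends naturally to schedule and size growth, and it treats the gap between plan and actual as information to reuse rather than an error to punish.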

So how can development organizations counter optimism bias and deliver estimates that are consistent with their proven ability to deliver software? Collecting and analyzing completed project data is one way to demonstrate both an organization’s present capability and the complex relationships between various management metrics. Access to historical data provides empirical support for expert judgments and allows managers to leverage tradeoffs between staffing and cost, quality, schedule and productivity instead of being sandbagged by them.

The ideal historical database will contain your own projects, collected using your organization’s data definitions, standards, and methods. But if you haven’t started collecting your own data, industry data offers another way to leverage the experiences of other software professionals. Industry databases typically exhibit more variability than projects collected within a single organization with uniform standards and data definitions, but QSM’s three-plus decades of collecting and analyzing software project metrics have shown that the fundamental relationships between software schedule, effort, size, productivity and reliability unite projects developed and measured across an astonishingly diverse set of methodologies, programming languages, complexity domains and industries.
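
One concrete example of such a relationship is the Putnam software equation on which QSM’s SLIM model is built: it ties delivered size to effort, schedule, and a productivity parameter, and it implies that compressing the schedule drives effort up nonlinearly. The short sketch below uses a hypothetical productivity parameter and project size chosen only to show the shape of that tradeoff; it is not QSM’s calibrated model.

    # A simplified form of the Putnam software equation that underlies QSM's
    # SLIM model: size = C * effort**(1/3) * schedule**(4/3), rearranged here
    # to solve for effort.  The productivity parameter C and the project
    # figures are assumed values used only to illustrate the tradeoff.

    def implied_effort(size_sloc, productivity_c, schedule_years):
        """Total effort (person-years) implied by size, productivity and schedule."""
        return size_sloc ** 3 / (productivity_c ** 3 * schedule_years ** 4)

    SIZE = 75_000            # new and modified source lines of code (assumed)
    PRODUCTIVITY_C = 10_000  # assumed productivity parameter

    for months in (26, 23, 20):
        effort = implied_effort(SIZE, PRODUCTIVITY_C, months / 12)
        print(f"{months}-month schedule -> about {effort:.0f} person-years of effort")

Under these assumed inputs, trimming six months off a 26-month schedule roughly triples the implied effort. That is exactly the kind of tradeoff historical data makes visible before commitments are made.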

Software estimators will always have uncertainty to contend with, but having solid data at your fingertips can help you challenge unrealistic expectations, negotiate more effectively, and avoid costly surprises. Effective measurement puts managers in the driver’s seat. It provides the information they need to negotiate achievable schedules based on their proven ability to deliver software, find the optimal team size for new projects, plan for requirements growth, track progress, and make timely mid-course corrections. The best way to avoid a repeat of history is to harness it.

