KEY: Data-Driven Software Effort Estimation and Tracking
One of the primary root causes of many software project failures is that the effort was poorly estimated, or that the effort was accurately estimated but the program's senior leaders drove the development organizations to "accept the challenge" of significantly reduced cost and an accelerated schedule without reducing the planned capabilities or requirements. Lack of control over requirements and interface volatility aggravates and amplifies the negative impacts of poor estimation and tracking.
The key to success in estimating software effort is to establish and maintain detailed historical data on cost, schedule, and technical performance. Ideally, a project should use historical data from its own programs for effort estimation and validation rather than borrowing data from other programs. Cross-program comparisons are rarely successful because too many variables (different tools, team experience, development processes, requirements, constraints, etc.) prevent an apples-to-apples comparison.
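As a minimal sketch of estimation from a program's own history, the following derives an average productivity rate (development hours per work unit) from prior releases and applies it to new planned work. The release figures and work-unit counts are illustrative assumptions, not data from any real program.

```python
from statistics import mean

# Historical releases from this program: (work units completed, development hours spent)
# Figures are illustrative assumptions only.
history = [
    (120, 5400),   # release 1
    (95,  4500),   # release 2
    (140, 6100),   # release 3
]

# Average development hours per work unit, derived from this program's own data
hours_per_unit = mean(hours / units for units, hours in history)

# Project the effort for a new release of 110 planned work units
planned_units = 110
estimated_hours = planned_units * hours_per_unit
print(f"{hours_per_unit:.1f} hours/unit -> estimate {estimated_hours:.0f} hours")
```

In practice the historical records would also capture the context behind each data point (team, tools, process) so that only genuinely comparable releases feed the estimate.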
Software efforts are often "hidden" under "systems engineering" efforts when a project is planned and controlled. Project teams must take the time and make the investment to establish a well-defined work breakdown structure (WBS) that breaks out the individual software development activities (requirements, architecture, design, code, integration, test phases, etc.) and supports developing and tracking the associated productivity factors (development hours required per work unit). Teams must establish the process, tools, and discipline to accurately collect and use estimated-versus-actual data. It is NOT recommended to estimate and track coding effort by source lines of code (SLOC); higher-level work-unit abstractions such as objects, files, or function points are much better than SLOC. Note that although SLOC-based estimates are a poor method for estimating and tracking, it is still important to know the system size as measured in SLOC for normalizing software quality (e.g., calculating defect ratios).
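A minimal sketch of the tracking described above: each WBS activity carries an estimated and an actual productivity rate in hours per work unit, and SLOC is kept only to normalize quality metrics. Activity names, hours, counts, and the defect figures are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class WbsActivity:
    """One software activity broken out in the WBS (illustrative)."""
    name: str
    work_units: int          # e.g., objects, files, or function points
    estimated_hours: float
    actual_hours: float

    @property
    def estimated_rate(self) -> float:
        """Planned development hours per work unit."""
        return self.estimated_hours / self.work_units

    @property
    def actual_rate(self) -> float:
        """Actual development hours per work unit."""
        return self.actual_hours / self.work_units

# Assumed activity data for illustration
activities = [
    WbsActivity("requirements", 80, 1600, 1750),
    WbsActivity("design",       60, 2400, 2300),
    WbsActivity("code",        150, 6000, 6900),
]

for a in activities:
    print(f"{a.name:>12}: planned {a.estimated_rate:.1f} h/unit, "
          f"actual {a.actual_rate:.1f} h/unit")

# SLOC is still tracked, but only to normalize quality measures
sloc = 185_000
defects_found = 370
defects_per_ksloc = defects_found / (sloc / 1000)
print(f"defect density: {defects_per_ksloc:.1f} defects/KSLOC")
```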
There should be detailed planned-versus-actual cost and schedule plans for each Computer Software Configuration Item (CSCI). Caution must be exercised when combining cost and schedule performance indicators across multiple CSCIs, because over-performance by one CSCI may mask high-risk under-performance by another. The software cost and schedule plans should be traceable and linked directly into the higher-level integrated system cost and schedule plans.
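The masking effect can be sketched numerically. The example below uses the standard earned-value cost performance index (CPI = earned value BCWP divided by actual cost ACWP); the CSCI names and hour figures are illustrative assumptions.

```python
# Per-CSCI earned value (BCWP) and actual cost (ACWP), in hours (assumed values)
cscis = {
    "CSCI-A": (5000, 4000),   # over-performing:  CPI = 1.25
    "CSCI-B": (3000, 4000),   # under-performing: CPI = 0.75
}

cpi = {}
for name, (bcwp, acwp) in cscis.items():
    cpi[name] = bcwp / acwp
    print(f"{name}: CPI = {cpi[name]:.2f}")

# Combining the indicators hides the problem in CSCI-B
total_bcwp = sum(b for b, _ in cscis.values())
total_acwp = sum(a for _, a in cscis.values())
combined_cpi = total_bcwp / total_acwp
print(f"Combined CPI = {combined_cpi:.2f}")  # 1.00: looks healthy, masks CSCI-B
```

The combined index of 1.00 suggests the software effort is on cost, while CSCI-B is actually running 25% inefficient, which is exactly why per-CSCI indicators must be reviewed individually.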
Program leaders who drive volatility but refuse to trade off other capabilities or extend cost and schedule are destined to fail. A formal Change Management (CM) process must be institutionalized and strictly followed. All requirements and their associated quantifiable, verifiable Key Performance Parameters (KPPs) must be allocated and traced to the architecture, design, code, and test organizations and artifacts. Program leaders must resist requirements creep, volatility, and changes late in the cycle. Every content change must be accompanied by a revised cost, schedule, and technical performance impact assessment.
Project leaders and software team lead(s) must review planned content, cost and schedule performance indicators, and technical performance indicators on a frequent, regular, and structured basis. Variance thresholds must be defined, and formal performance risks and mitigation plans must be documented and tracked to closure.
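A variance-threshold check of the kind described above can be sketched as follows. The 10% threshold, the CSCI names, and the hour figures are illustrative assumptions; a real program would set thresholds in its management plan.

```python
THRESHOLD = 0.10   # flag variances beyond +/-10% for formal risk tracking (assumed)

def cost_variance_ratio(planned: float, actual: float) -> float:
    """Signed variance as a fraction of plan (positive = overrun)."""
    return (actual - planned) / planned

# Assumed review data: CSCI -> (planned hours, actual hours)
reviews = {"CSCI-A": (4000, 4100), "CSCI-B": (4000, 4700)}

flagged = {}
for name, (planned, actual) in reviews.items():
    v = cost_variance_ratio(planned, actual)
    flagged[name] = abs(v) > THRESHOLD
    status = "OPEN RISK: mitigation plan required" if flagged[name] else "within threshold"
    print(f"{name}: variance {v:+.1%} -> {status}")
```

Any flagged item would then enter the formal risk register and be tracked to closure at subsequent reviews.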