Cost Controls

How Government Can Get More Bang for Its Buck

Base Budgets on Realistic Cost Estimates

The U.S. Department of Defense and the military services have historically underestimated the costs of new weapon programs. The result is cost growth, which is the increase from established baseline estimates to actual cost. In a study of 68 programs from the past 30 years, we found that, after adjusting for changes in the quantity of systems produced, costs grew by 46 percent on average over what had been estimated upon development approval (known as milestone B). We found no significant difference in cost growth from one decade to the next or from one military branch to the next. Only helicopters and space systems showed higher rates of cost growth than the overall average. This cost growth has remained high despite many acquisition reforms.
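The quantity adjustment mentioned above can be illustrated with a minimal sketch. The per-unit normalization used here is an assumed simplification for illustration only, not the study's actual adjustment methodology:

```python
# Hypothetical illustration of quantity-adjusted cost growth.
# Comparing per-unit costs is an assumed simplification, not the
# study's actual adjustment methodology.

def quantity_adjusted_growth(baseline_cost, baseline_qty,
                             actual_cost, actual_qty):
    """Percent growth in per-unit cost from baseline to actual."""
    baseline_unit = baseline_cost / baseline_qty
    actual_unit = actual_cost / actual_qty
    return (actual_unit / baseline_unit - 1) * 100

# Example: a program estimated at $4 billion for 100 units that ends
# up costing $6 billion for 120 units grew 50 percent in raw terms,
# but only 25 percent on a per-unit basis.
print(round(quantity_adjusted_growth(4.0, 100, 6.0, 120), 1))  # 25.0
```

The point of the adjustment is to separate cost growth caused by buying more (or fewer) systems from growth in the cost of each system itself.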

To examine why cost estimates consistently undershoot actual costs, we narrowed our focus to 35 major weapon programs involving aircraft, missiles, electronics systems, launch vehicles, munitions, armored vehicles, and satellites. We focused on this smaller set of programs because of the labor-intensive nature of the analysis.

Figure 2 — Government Decisions and Estimation Errors Caused Nearly All of the Cost Growth Beyond Cost Estimates

SOURCE: Sources of Weapon System Cost Growth, 2008.
NOTES: Miscellaneous causes include reporting errors, unidentified variances, and external events. Financial matters have to do with the differences between predicted and actual inflation or between predicted and actual exchange rates.

We found that total costs for the 35 programs grew an average of 60 percent beyond what had been estimated at milestone B. This higher average represents the growth when no adjustment is made for changes in the quantity of systems produced. But more telling than the rate of cost growth is its composition. We found that government decisions accounted for more than two-thirds of the cost growth, while estimation errors accounted for nearly a quarter (see Figure 2).

The most costly government decisions involved changes in requirements (usually for added performance and functionality), changes in the quantity of weapons ordered, and changes in schedules. The estimation errors included not just inaccurate cost and schedule estimates but also unforeseen problems that arose from technical difficulties.

Estimation errors can be made by the government, contractors, or subcontractors. Government analysts base their estimates on inputs from the defense department and contractors. Enormous pressure can be placed on the analysts to generate optimistic estimates so that certain programs might be approved. Estimation errors can thus result from incorrect cost data or models, inaccurate engineering estimates, or overoptimistic assumptions regarding the work, time, and resources required to develop the weapon systems.

We caution that isolating the causes of cost growth in major weapon programs is not an exact science. Our cost data are drawn from Selected Acquisition Reports, which are documents prepared at least annually for Congress by the defense department. Allocating the cost variances in these reports to one specific cause can be especially difficult. In many programs, a technical issue forces development to fall behind schedule. The contractor and a government official might then decide to change the weapon design to incorporate a different, more costly technical solution. The solution takes longer to implement, which extends the schedule further. Here, a technical issue results in cost growth and program delays that could be attributed to government decisions, because the decisions made did not necessarily have to be made. In such cases, it is not always possible to conclusively identify the source of a cost variance.

Given this ambiguity, government managers, military leaders, and Congress should balance their efforts to reduce cost growth between error-related causes and decision-related ones. A higher baseline cost estimate would align expectations with reality and indirectly reduce costs by diminishing the churn of program changes triggered by the common mismatch between plans and actual activities.

Ensure Rigorous Oversight

To deepen our understanding of what drives defective cost estimates, we conducted in-depth studies of two major space programs for the Space and Missile Systems Center of the U.S. Air Force. In both cases, we found that the government did not adequately guide and oversee the contractors. We also found the cost-estimation processes to be too closely tied to bureaucratic interests that held advocacy positions. We concluded that rigorous oversight, monitoring, and assessment of contractor costs, cost data, and technical designs throughout all phases of the proposal process and program execution are critical for developing credible cost estimates.

The first program we studied is the nation’s next-generation missile warning system, known as Space Based Infrared System — High (SBIRS High). The program links satellites in low earth orbits and geosynchronous earth orbits with infrared sensors on satellites in highly elliptical earth orbits. Lockheed Martin is the prime contractor; Northrop Grumman is the major subcontractor. The second program we studied is the Global Positioning System (GPS). The GPS prime contractor was originally Rockwell International, which was later acquired by Boeing.

Figure 3 — Estimation Errors Drove the Cost Variances for One Space Program; Government Decisions, for Another

SOURCE: Improving the Cost Estimation of Space Systems, 2008.
NOTES: SBIRS High = Space Based Infrared System — High. GPS = Global Positioning System.

Figure 3 summarizes the cost variances in both programs. Although SBIRS High had remarkably stable requirements from 1996 through 2005, it experienced cost growth of 300 to 350 percent over that time, when adjusted for quantity changes. Total cost variances amounted to more than $9.5 billion, with a net increase of $6.5 billion over the estimated cost at milestone B. All but $1 billion of that increase was attributable to estimation errors that had been made by the contractor and accepted by the government. The cost estimate has grown from $3.7 billion for five satellites to $10.2 billion for just three of those satellites, which are still in production.
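The scale of the SBIRS High growth can be seen in rough per-satellite arithmetic. Dividing total program cost by satellite count is an assumed simplification, not the study's actual quantity adjustment, and the figures come from the estimates cited above:

```python
# Rough per-satellite arithmetic for the SBIRS High figures above.
# Dividing program cost by satellite count is an assumed
# simplification, not the study's actual quantity adjustment.

baseline_total, baseline_sats = 3.7, 5   # $ billions, milestone B estimate
current_total, current_sats = 10.2, 3    # $ billions, revised estimate

baseline_per_sat = baseline_total / baseline_sats   # ~$0.74 billion
current_per_sat = current_total / current_sats      # $3.4 billion
growth_pct = (current_per_sat / baseline_per_sat - 1) * 100

print(f"per-satellite cost growth: {growth_pct:.0f}%")  # ~359%
```

This crude figure comes out somewhat above the quantity-adjusted 300-to-350-percent range cited for 1996 through 2005, plausibly because the $10.2 billion estimate extends beyond that window, but it illustrates the magnitude involved.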

GPS had a reputation as a well-managed program, but we found areas of concern. In 1996, the $7-billion program was intended to fund 78 satellites. In 2002, the requirement fell to 33 satellites, thanks to predecessor satellites that have remained operational longer than expected. The GPS program had an aggregate cost underrun, but most of that had to do with the sharply reduced quantity of satellites required. Meanwhile, significant components of the program experienced substantial cost growth, stemming in large part from cost-estimating errors.

Both programs suffered from an acquisition reform initiative ironically called Total System Performance Responsibility (TSPR). Begun in 1996 at the height of the Clinton administration’s implementation of several acquisition reform measures, TSPR required that a contractor propose its own technical solution to meet high-level performance requirements; that the contractor, with minimal government oversight, be responsible for implementing the solution; and that the contractor be relieved of what were assumed to be costly and cumbersome reporting requirements.

The goal of TSPR and other acquisition reforms throughout the 1990s was to radically reduce the regulatory and oversight burden placed on industry, under the assumption that this would save money. The catchphrase at the time was “insight, not oversight.” This was construed as invalidating the need for much of the documentation and reporting requirements of contractors, including their technical and cost data. Some government program managers interpreted these initiatives as orders, in effect, to abdicate their responsibility to monitor contractors.

Prior to TSPR, the level of oversight had been much more scrupulous. Government personnel typically attended tests and visited the plants of contractors and subcontractors to evaluate their facilities, personnel, and process controls. Design reviews were carried out in a formal structured way involving data packages, design analysis reports, risk analyses, and mitigation plans. The TSPR environment not only lacked this level of rigor for monitoring and assessing contractor capabilities but also served to rationalize the reduction of the federal workforce and its expertise.

The TSPR approach was bound to impair the ability of government officers to analyze the estimated costs and technical risks. The practical effects of such reforms also translated into heavy pressures on contractors to meet demanding performance requirements at much lower — often unrealistically low — cost, compared with what would have typically been thought possible in the past. The reformed acquisition process fueled overoptimistic estimates and eroded the government’s ability to oversee contractor activities in both the SBIRS High and GPS programs.

Simultaneously, a shift within the Space and Missile Systems Center to decentralize management of cost analysts and to place them in program offices had the effect of stripping many analysts of their independence from the program offices in which they worked. Because program offices are natural advocates for the systems that they are charged with developing, the cost analysts became advocates as well, introducing further conflicts of interest into the process. To make matters worse, there was an inadequate number of experienced analysts, a lack of relevant data to deal with space system complexities, insufficient coordination among cost analysts and engineers, and a lack of independent risk-assessment processes and methods.

All these factors contributed to an overreliance on contractors. We even found that a large portion of the government’s cost analysis itself had been outsourced to contractors, who appeared to carry out much of the day-to-day work of the Space and Missile Systems Center.

We offered the U.S. Air Force detailed recommendations, most of which have been adopted by the Space and Missile Systems Center. Such changes might also be needed for other federal programs that have been subjected to the acquisition reforms of the 1990s. Our key recommendations are as follows:

  • Institute independent program reviews, with independent teams of experts working alongside cost estimators. The government agency’s chief engineer should review all assumptions.
  • Focus the independent reviews especially on technical risk assessment, collecting more-relevant data and increasing visibility into contractor capabilities.
  • Adopt an organizational structure that maximizes the independence of cost analysts who perform major cost estimates while retaining the specialized skills of decentralized analysts who understand the complexities of individual systems.
  • Ensure that major estimates are led by experienced and qualified government analysts. The government’s approach to hiring, assigning, and promoting civil servants and military officers should be redesigned to attract and to retain first-rate cost analysts.

In the meantime, TSPR and other acquisition reforms of the 1990s that postulate savings without proof should be either abandoned or amended. They have inhibited government oversight of contractor performance and prevented the collection of needed cost and technical risk data.
