Download eBook for Free

Full Document

Format: PDF file
File size: 4 MB

Use Adobe Acrobat Reader version 10 or higher for the best experience.

Research Summary

Format: PDF file
File size: 0.1 MB



Purchase Print Copy

Format: Paperback, 80 pages
List price: $35.50
Price: $28.40 (20% web discount)

Research Questions

  1. What quantitative methods can make the unified certification process for nuclear systems more rigorous and efficient? Specifically, which numerical methods are faster than traditional Monte Carlo simulation and can be applied efficiently to very large graphs?
  2. How does the critical path for program execution depend on the overall structure of how tasks are scheduled?
  3. How can the reallocation of resources change the risk of meeting expected schedules?

During program execution, managers must orchestrate the planning and performance of up to tens of thousands of interdependent tasks. Because many of these tasks depend on one another, failure to meet scheduling expectations in one task can affect the execution of many others. As programs grow larger and the interdependencies among program activities become more complex, tools are needed to introduce more rigor into program management and reduce schedule risk.
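
The schedule-dependency structure described above is a directed acyclic graph of tasks, and the program's minimum completion time is the longest (critical) path through it. As an illustrative sketch only (the task names, durations, and dependencies below are hypothetical, not taken from the report), the standard forward pass of the critical path method can be written as:

```python
import collections

def critical_path_length(durations, deps):
    """Length of the longest (critical) path through a task DAG.
    durations: {task: duration}; deps: {task: [prerequisite tasks]}."""
    # Build successor lists and in-degrees for Kahn's topological sort.
    indeg = {t: 0 for t in durations}
    succ = collections.defaultdict(list)
    for task, prereqs in deps.items():
        for p in prereqs:
            succ[p].append(task)
            indeg[task] += 1
    queue = collections.deque(t for t in durations if indeg[t] == 0)
    finish = {}
    while queue:
        t = queue.popleft()
        # A task starts when its last prerequisite finishes.
        start = max((finish[p] for p in deps.get(t, [])), default=0.0)
        finish[t] = start + durations[t]
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    return max(finish.values())

# Hypothetical five-task program: A precedes B and C; B and C precede D; C precedes E.
durations = {"A": 3, "B": 2, "C": 4, "D": 1, "E": 2}
deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["C"]}
print(critical_path_length(durations, deps))  # prints 9.0 (path A -> C -> E)
```

When durations are uncertain rather than fixed, this deterministic pass becomes the inner loop of the Monte Carlo and analytic methods the report compares.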

The Ground Based Strategic Deterrent (GBSD) currently under development, a complete replacement for the aging LGM-30 Minuteman III intercontinental ballistic missile system, is a good example of a large, complex program with numerous interdependent tasks. Given the tight schedule by which GBSD is bound and the numerous organizations participating, the required certification process for nuclear systems must proceed efficiently. In this report, the authors present the rationale behind algorithms already delivered to the GBSD program office that were designed to reduce the likelihood of rework in program execution, provide better insight into schedule risk, and suggest how to restructure task dependencies to manage that risk. Although the methods described in this report were developed for the GBSD program, they are general and can be applied to any program.

Key Findings

The authors examined three classes of graph topology: a simple topology of parallel chains, random graphs, and scale-free graphs

  • For the simple chain topology, although performing tasks in parallel can theoretically reduce the overall time, it does not do so as much as expected; the likelihood of having at least one bad chain increases with the number of chains.
  • Parallelizing development reduces variance, but it also increases end-to-end time more than expected.
  • Although working tasks in parallel remains desirable, the project manager's expectations for time savings should be tempered.
  • For Erdős-Rényi topology (random graphs), analysis shows the same trade-off between variance and average completion time as seen in the simple chain topology, though it is more subtle. In this case, interlinking paths make it less likely that a single chain will dominate.
  • For the scale-free graph topology, analysis shows that graphs with a single, dominant longest path are much more likely to complete on time than graphs with several parallel paths that may each be critical paths. This result means that the project manager can reduce the overall completion time with only a marginal increase in the corresponding risk by placing important tasks in series with one another.
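The first finding, that the chance of at least one "bad" chain grows with the number of parallel chains, can be illustrated with a small simulation. This sketch is not the report's model: the uniform task-duration distribution, chain lengths, and deadline below are illustrative assumptions chosen only to show the qualitative effect.

```python
import random

def p_late(n_chains, tasks_per_chain, deadline, trials=20000, seed=0):
    """Estimate P(program finishes after the deadline) when the program is
    n_chains independent serial chains executed in parallel. Each task
    duration is Uniform(0.8, 1.2) around a nominal length of 1 (illustrative)."""
    rng = random.Random(seed)
    late = 0
    for _ in range(trials):
        # The program finishes when its slowest chain finishes.
        finish = max(
            sum(rng.uniform(0.8, 1.2) for _ in range(tasks_per_chain))
            for _ in range(n_chains)
        )
        if finish > deadline:
            late += 1
    return late / trials

# Same per-chain deadline, more parallel chains: the probability that at
# least one chain runs late rises with the number of chains.
for k in (1, 4, 16):
    print(k, p_late(k, tasks_per_chain=10, deadline=10.5))
```

Under these assumptions the lateness probability climbs steeply with the chain count, which is why the report cautions that a manager's expectations for parallelization savings should be tempered.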

The authors present a novel numerical approach to find the critical path in a large graph

  • The fastest, and preferred, method for finding the critical path uses Chebyshev polynomials as a basis. An alternative method uses trapezoidal integration and is preferred only when software packages limit the ability to manipulate Chebyshev polynomials.
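The trapezoidal-integration alternative can be sketched in a simplified setting. This is not the report's algorithm, only an illustration of the underlying idea: if independent candidate paths have completion-time CDFs F_i(t), the program's completion time is their maximum, with CDF F_max(t) = prod F_i(t), and its mean follows from E[T] = integral of (1 - F_max(t)) dt for nonnegative T, evaluated here with the trapezoidal rule. The normal path distributions and their parameters are hypothetical stand-ins for the report's task-duration distributions.

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a Normal(mu, sigma) variable via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def expected_max_completion(paths, t_max=30.0, n=2000):
    """E[max of independent path completion times], each path modeled
    (illustratively) as Normal(mu, sigma) with negligible mass below 0.
    Uses F_max(t) = prod_i F_i(t) and E[T] = integral_0^inf (1 - F_max(t)) dt,
    evaluated with the trapezoidal rule on [0, t_max]."""
    h = t_max / n
    total = 0.0
    for k in range(n + 1):
        t = k * h
        surv = 1.0 - math.prod(normal_cdf(t, mu, s) for mu, s in paths)
        w = 0.5 if k in (0, n) else 1.0  # trapezoidal end-point weights
        total += w * surv * h
    return total

# One dominant path vs. three comparable parallel paths (hypothetical numbers):
# the maximum over several near-critical paths drifts past each path's own mean.
print(expected_max_completion([(12.0, 1.0)]))
print(expected_max_completion([(10.0, 1.0), (10.0, 1.0), (10.0, 1.0)]))
```

This also illustrates the scale-free-topology finding above: several parallel near-critical paths push the expected finish beyond any single path's mean, whereas one dominant path does not.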

Table of Contents

  • Chapter One


  • Chapter Two


  • Chapter Three


  • Chapter Four

    Resource Constraints

  • Chapter Five

    Analytic Approach for Calculating the Criticality Index

  • Chapter Six


  • Appendix A

    Equivalence of Representations

  • Appendix B

    The PERT and CPM Algorithms

  • Appendix C

    Proof That Convolution of Independent, Identical Beta Distributions Converges to a Normal Distribution

  • Appendix D

    Derivation of Maximum Order Statistic F

Research conducted by

The research described in this report was commissioned by the Commander of the Nuclear Weapons Center and Program Executive Officer for Strategic Systems and conducted by the Force Modernization and Employment Program within RAND Project AIR FORCE.

This report is part of the RAND Corporation Research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.