Jan 1, 1998
The Department of Defense (DoD) has long recognized the importance of quality personnel in carrying out the country's national security policy and devotes considerable resources to attracting and retaining the best people. A substantial portion of these resources—several billion dollars annually—goes toward Quality of Life programs. However, many of these programs were designed when the military was very different from what it is today; bases were isolated, most members were single, and salaries were lower than those of civilian peers. Are yesterday's programs the right ones for today's military? Are they properly funded? RAND researcher Richard Buddin proposes a way to answer these questions in Building a Personnel Support Agenda: Goals, Analysis Framework, and Data Requirements. He evaluates program goals and funding mechanisms, a set of methodological tools, and available data sources. His analysis reveals that DoD goals for its Quality of Life programs need to be more specific; that no single tool is adequate to evaluate these programs but a combination of tools would be; and that DoD needs more and better data to assess service-member problems and the programs that address them.
DoD has stated goals for its Quality of Life programs, but these tend to be too general to support evaluation. Contribution to readiness is a traditional goal, but readiness is extraordinarily difficult to measure. Defining how personnel programs advance readiness is even more difficult. Some outcomes, such as retention, are easily measured, but charting the relationship between outcomes and programs is problematic. For example, the Army spent about $700 per member on personnel support programs in FY96. Increasing expenditures by 10 percent would amount to only $70, and it would be difficult to assess how such a small change affected retention.
Intermediate goals are needed. For example, if long separations reduce retention, a program that eases problems caused by separation could be established. Still needed—and still elusive—are objective standards. However, good comparison groups could be defined (e.g., families whose members do not deploy compared with those whose members do), and more focused goals would enable better analysis.
Funding for Quality of Life programs comes from congressional appropriations and from income generated by the programs themselves. Probably the best-known program is the military post exchange. In recent years, revenue from the exchanges has fallen because of competition from large discounters such as Wal-Mart. The military has responded by cutting costs and seeking to use revenues from profitable programs to offset costs for unprofitable ones. However, this tax-and-subsidy approach is neither efficient nor fair; people tend to overuse the subsidized programs and underuse those with inflated prices. Similarly, some pay a disproportionate share of the tax, and others receive an unfair portion of the benefit.
Buddin examined five research methodologies for assessing personnel support programs: nonwage benefits, compensating wage differential, individual well-being, community environment, and program use and retention. All have strengths. For example, nonwage-benefit analysis is a familiar economic tool that offers a rigorous and systematic method for determining whether a program ought to be part of a compensation package. Compensating wage differentials are good at accounting for workplace differences, and several aspects of military service—danger, separations, frequent moves—argue for such differentials. And the community environment approach facilitates comparisons across different military communities.
However, each tool has drawbacks. For example, nonwage-benefit analysis requires careful calculation of costs and outcomes—difficult information to gather for personnel support programs. Compensating wage differentials, while effective at identifying good or bad aspects of the workplace, are not effective at identifying programs. Constructing a community environment index could involve substantial data collection.
Drawing on a large, multipurpose survey periodically conducted by DoD, Buddin analyzed three types of available data: well-being, program use, and resource. The well-being data indicate that most military members are satisfied with their service life, but satisfaction varies according to demographics, service, and rank. For example, married service members accompanied by their spouses are the most satisfied, and older members show slightly more satisfaction than do younger ones. Marines are the most satisfied, and Army members the least. Junior enlisted personnel are the least satisfied; satisfaction level increases with rank. However, these data could be improved by consulting other surveys that have more numerous and effective indicators; these could provide more-precise measurements as well as enable comparisons with the civilian community.
Program use data reflect two types of programs: community and family support programs and morale, welfare, and recreation (MWR) programs. In the first category, the most used programs are housing, legal assistance, family support centers, and chaplains—but many members use none, and others only a few. The median number of programs used is two. Those who use the programs report satisfaction with them. A number of factors affect use, including age, sex, marital status, housing location, employment status, and service.
MWR program use varies widely. The programs most used are the exchange, the commissary, the shoppette, and the fitness center. Military members use these much more than the community and family support programs that are designed to address specific problems such as financial management difficulties. More than 99 percent of the members surveyed had used an MWR program, and the median number of programs used is 11. Officers use programs more than enlisted members, and senior ranks more than junior. Women bowl and ride horses more frequently than men but play less golf. Geographic bachelors (members temporarily separated from their families for job-related reasons) are the heaviest program users of all.
Determining the resources spent on programs is central to any evaluation, but because of the design of current accounting systems, this information is virtually impossible to collect. Equally important is the ability to compare costs for the same program across locations to determine whether the added benefits of a larger program are worth the increased costs. A unified base accounting system is needed, along with better information about program availability and use at different bases.
Given the high cost and potential benefit of these programs, a sound methodology for assessing them appears to be critical. Knowing which programs to fund and at what level is increasingly important in an era of tight budgets and concerns about recruiting and retention.