As the national suicide rate continues to rise, an increasing number of stakeholders—including local governments, educators, and employers—are looking within their own communities and asking: “Do we have a suicide problem?” It's a difficult question to answer. Some institutions don't have adequate systems for tracking suicides. A company, for example, may keep track of the number of employees who died, but not the cause of death.
If they do have the data, however, the question they often really want answered is: “How does our suicide rate compare to others?”
The response I usually give is: “Which others?”
In a report my RAND colleagues and I produced in 2011, we compared the Army's suicide rate to that of the United States. Historically, the Army's rate had been roughly equal to that of the U.S. population, and in 2005 it started to rise above the national rate. But we knew that such a comparison, while it wasn't comparing apples to oranges, was like comparing two different types of apples: Golden Delicious to Gala.
We knew the two groups were very different, especially with respect to gender and age.
While the U.S. population is pretty evenly split between men and women, the Army is composed mostly of men. The youngest soldier is 17, and very few are over age 50. Why do these differences matter? In the United States, the rate of suicide among men is four times the rate among women, and the rate for men increases as they get older.
An epidemiologic strategy called “adjusting” helped us correct for these differences and let us make a more “Gala to Gala” comparison. In doing so, we found that the Army suicide rate was historically lower than a comparable civilian population, and that in 2005 it started to rise to meet it.
Same data, different story.
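The mechanics of this kind of “adjusting” can be illustrated with direct standardization: compute rates within gender-and-age strata, then re-weight them by a standard population's composition. The sketch below uses entirely made-up rates and population shares, not the actual Army or RAND figures, just to show why a crude rate and an adjusted rate can tell different stories.

```python
# Illustrative sketch of direct standardization ("adjusting").
# All numbers below are hypothetical, chosen only to show the mechanics.

# stratum: (rate_per_100k, share_in_study_population, share_in_standard_us_population)
strata = {
    "male_17_29":     (20.0, 0.60, 0.09),
    "male_30_plus":   (25.0, 0.25, 0.40),
    "female_17_29":   (5.0,  0.10, 0.08),
    "female_30_plus": (6.0,  0.05, 0.43),
}

# Crude rate: each stratum's rate weighted by the study population's own mix.
crude = sum(rate * study_share for rate, study_share, _ in strata.values())

# Adjusted rate: the same stratum rates re-weighted by the standard (U.S.)
# mix, so the comparison is no longer driven by gender/age composition.
adjusted = sum(rate * std_share for rate, _, std_share in strata.values())

print(f"crude rate:    {crude:.2f} per 100,000")
print(f"adjusted rate: {adjusted:.2f} per 100,000")
```

With these invented inputs, the crude and adjusted rates differ noticeably even though the stratum-specific rates are identical, which is exactly the “Gala to Gala” effect described above.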
But is the story complete? The differences between the Army and general populations go beyond soldiers' gender and age profiles. Soldiers tend to live on or near Army bases, not in cities and suburbs where most Americans reside. They may be less likely to have mental health conditions like psychosis, but more likely to have conditions like PTSD. They have high-stress jobs that may make them more similar to some occupations, like police officers, and less similar to others, like architects. All Army soldiers are trained in the use of firearms, while only a third of U.S. adults own a gun.
When comparing suicide rates, do these differences matter, just as gender and age do?
This was a question the Army recently posed to my colleagues and me at the nonpartisan RAND Corporation. We're investigating these differences now, and in doing so, we are considering three things.
First, what data are available for us to work with? While the Army has volumes of data on its soldiers, there is less data on the U.S. population, and even less data on the dead. We've learned, for example, about the severe limitations in how a person's occupation and industry are coded in death records: the codes used are inconsistent across states, and jobs are often combined into broader categories—police officers and firefighters often fall under “protective service occupations.” Sometimes, coroners or medical examiners don't record the occupation at all.
We're also looking into what statistical tools may be available to help us make the most valid comparisons. The project benefits from being co-led by a RAND statistician, Beth Ann Griffin, and from being carried out by an interdisciplinary study team that includes an economist, psychologists, psychometricians, and a computer programmer.
Finally, we're thinking about why things like geography matter. For example, we're asking: In calculating a person's risk of suicide, what matters more—where a person resides or where they are from? This is critical because if the evidence suggests that where a person is from matters more, we have to ask what data the Army has on where people lived before joining the Army.
We're learning that the evidence is too thin to answer some of the questions that need answering. It can provide insight, but not enough for us to draw firm conclusions. This means that the field of suicide research is full of gaps in knowledge that need to be filled.
We're continuing to work on this project, and who knows what we will find? Perhaps we'll show that with some statistical adjustments, the differences between the Army and a more comparable general population go away. Conversely, we may find that the factors we looked at don't matter much at all—that the patterns look the same as they do when we only account for gender and age differences.
But either way, while adjusting is a powerful statistical tool, it doesn't eclipse the fact that suicides still occur within these institutions. Institutions that ask the question, “How do we compare?” also inevitably ask, “How can we use this information to address suicide?” Understanding how rates compare may be just the first step in answering this latter question.
Rajeev Ramchand is a senior behavioral scientist at the nonprofit, nonpartisan RAND Corporation.
This commentary originally appeared on The Injury Control Research Center for Suicide Prevention on March 30, 2018. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.