Loose Clicks Sink Ships: When Social Media Meets Military Intelligence

Commentary (U.S. News & World Report)

U.S. soldiers take pictures of President Barack Obama at U.S. military base Yongsan Garrison in Seoul, South Korea, April 26, 2014

Photo by Lee Jin-man/Pool/Reuters

by Douglas Yeung and Olga Oliker

August 14, 2015

For as long as soldiers have written loved ones back home, militaries have monitored and — if deemed necessary — censored those communications. “Loose lips sink ships,” a common refrain during World War II, alludes in part to the U.S. military's desire to prevent sensitive information from slipping into soldiers' correspondence. Leaders worried that troops would reveal their location or movement, or the outcomes of battles.

Modern technology has added a new wrinkle. In the past, censors could screen letters before they went out. But stopping a soldier from posting a geotagged tweet or Instagram photo presents a far more complicated set of challenges.

Every social media update has the potential to reach many more people than the handwritten letters of old. Communication is also much more immediate. Journalists, analysts, watchdog groups and adversaries can access this information in close to real time. This may reveal strategic positions, call attention to behavior that might violate agreements (or human rights), or unlock a range of military intelligence.

Real-life examples abound. Researchers and reporters have combined geolocation analyses and Russian soldiers' social media posts to demonstrate the presence of Russian forces in eastern Ukraine. (These findings contradict the Kremlin's denials of direct military involvement in the ongoing conflict.) In early June, an ISIS militant's selfie allowed the U.S. Air Force to locate and bomb one of the extremist group's command posts. And in 2012, several U.S. Marines were prosecuted for war crimes after YouTube videos surfaced showing them urinating on Taliban corpses.

As various groups try to exploit this information, militaries and governments will increasingly seek ways to keep “loose clicks from sinking ships” — or from just causing embarrassment. So far, this has involved training and access restrictions. Military personnel are taught to self-censor and not reveal anything that adversaries could exploit. In addition, access to social media is often blocked on government networks, but this does little to thwart soldiers with smartphones or tablets. Authorities have tried banning such personal devices with mixed success.

Social media can also be employed strategically. For instance, real geolocation data can be hidden, or false data broadcast, in an attempt to fool adversaries. But this may require network control, access to the devices that deployed personnel use, or training soldiers to “spoof” their geotagged posts with fake locations.
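
What “hiding” geolocation data can look like in practice is simple to sketch. The snippet below is an illustration only, not a description of any military's actual tooling; it assumes the open-source Pillow imaging library and uses placeholder file names. It strips the GPS block from a photo's EXIF metadata before the photo is shared, so the image carries no coordinates even if the posting service preserves metadata.

    # Illustrative sketch: remove GPS coordinates from a photo's EXIF metadata.
    # Assumes the Pillow library (pip install Pillow); file names are placeholders.
    from PIL import Image

    GPS_IFD_TAG = 0x8825  # EXIF pointer to the GPS information block

    def strip_geotag(src_path, dst_path):
        """Re-save an image with its GPS EXIF block removed."""
        img = Image.open(src_path)
        exif = img.getexif()
        if GPS_IFD_TAG in exif:
            del exif[GPS_IFD_TAG]  # drops latitude, longitude, altitude, timestamp
        img.save(dst_path, exif=exif.tobytes())

    strip_geotag("patrol_photo.jpg", "patrol_photo_clean.jpg")

Spoofing, by contrast, would overwrite those same tags with a plausible but false location rather than deleting them.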

During wartime, reading personal letters and blacking out passages by hand was once a laborious, time-consuming process. And it didn't always work. But examples from today's conflicts suggest that modern attempts at controlling the sharing of personal information via the Internet will be both more difficult and less successful.

Governments or militaries may need to make hard decisions based on this new calculus. They may decide that leaks from social media are sometimes inevitable. If technology has transformed the information environment, perhaps full control — and effective censorship — won't always be possible. This would suggest that a different approach is necessary. Authorities may need to focus on what is most important (operational security) and what is feasible (limiting social media access on the front lines, but not for personnel at rest).

Technology providers increasingly realize that there is a growing market for software or hardware that gives soldiers — and civilians — more secure social media access. Militaries and governments could facilitate adoption of these secure options by making them the default, or even mandatory. At the same time, both the “good guys” (human rights monitors and a soldier's own intelligence and reconnaissance) and the “bad guys” (adversaries) will likely improve at dissecting public posts to discover sensitive information.

These changes could help shape who wins the information component of any given conflict. The sheer scale and diversity of open-source intelligence generated today is overwhelming. That may mean that the parties that maintain the tightest information control will no longer realize the greatest benefits. Rather, those who invest in tools for rigorous analysis will be better able to harness the data flood, extracting key insights to support decisions made in a new, virtual fog of war.


Douglas Yeung is a social psychologist at the nonprofit, nonpartisan RAND Corporation. Olga Oliker is director of the RAND Center for Russia and Eurasia.

This commentary originally appeared on U.S. News & World Report on August 14, 2015. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.