Some of society's brightest minds have warned that artificial intelligence (AI) may lead to dangerous unintended consequences, yet leaders of the U.S. intelligence community—with its vast budgets and profound capabilities—have yet to decide who within these organizations is responsible for the ethics of their AI creations.
When a new capability is conceived or developed, the intelligence community assigns no one responsibility for anticipating how an AI algorithm may go awry. Even if scenario-based exercises were conducted, the community provides no guidelines for deciding when a risk is too great and a system should not be built, and it gives no one the authority to make such decisions.
Intelligence agencies use advanced algorithms to interpret the meaning of intercepted communications, identify persons of interest, and anticipate major events within troves of data too large for humans to analyze. If artificial intelligence is the ability of computers to produce insights that humans alone could not have achieved, then the U.S. intelligence community is already investing in machines with such capabilities.
To understand the ethical dangers of AI, consider the speed-trading algorithms commonly used in the stock market—an example of AI employed in a highly competitive, yet non-lethal, environment. A computer algorithm floods the market with hundreds or thousands of apparently separate orders to buy the same stock. Other algorithms take note of this sudden demand and start raising their buy and sell offers, confident that the market is demanding a higher price. The first algorithm registers this response and sells its shares at the newly higher price, making a tidy profit. It then cancels all of its buy orders, which it never intended to complete in the first place.
The sequence of events takes place in less than one second, faster than any human could have observed what was occurring, let alone made a decision to buy, sell or hold. The Securities and Exchange Commission reports that only 3 to 4 percent of stock orders are filled before they are canceled, an indication of how widespread this practice has become. The first algorithm succeeded because it gamed the system: it understood how its competitors collect and analyze information in the environment, and it used the competitors' decisionmaking criteria against them....
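The spoofing sequence described above can be reduced to a simple loop of flood, react, sell, cancel. The following sketch is purely illustrative—every class name, threshold and price in it is a hypothetical stand-in, not a depiction of any real trading system:

```python
# Illustrative sketch of the spoofing sequence described above.
# All names, thresholds, and prices are hypothetical; real markets
# and trading systems are vastly more complex.

class Market:
    def __init__(self, price):
        self.price = price          # current market price per share
        self.open_buy_orders = []   # outstanding (unfilled) buy orders

    def place_buy_orders(self, n, price):
        # Step 1: flood the book with apparently separate buy orders.
        self.open_buy_orders.extend([price] * n)

    def competitors_react(self):
        # Step 2: rival algorithms read the sudden demand as genuine
        # and raise their quotes (a 2 percent drift, chosen arbitrarily).
        if len(self.open_buy_orders) > 100:
            self.price *= 1.02

    def sell(self, shares):
        # Step 3: the spoofer sells its existing holdings at the
        # inflated price.
        return shares * self.price

    def cancel_all_buys(self):
        # Step 4: withdraw the buy orders, which were never meant to fill.
        self.open_buy_orders.clear()


market = Market(price=100.0)
market.place_buy_orders(n=500, price=100.0)
market.competitors_react()
proceeds = market.sell(shares=1000)
market.cancel_all_buys()
profit = proceeds - 1000 * 100.0  # gain over the pre-spoof price
```

The entire sequence runs in a few machine instructions—consistent with the sub-second timescale noted above—and ends with zero live buy orders, mirroring the SEC's observation that the overwhelming majority of such orders are canceled before they fill.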
The remainder of this commentary is available on nationalinterest.com.
Cortney Weinbaum is a former intelligence officer in the U.S. intelligence community. She is a national security researcher with the Intelligence Policy Center at the nonprofit, nonpartisan RAND Corporation.
This commentary originally appeared on The National Interest on July 18, 2016. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.