Money, Markets, and Machine Learning: Unpacking the Risks of Adversarial AI

commentary

Aug 31, 2023

Financial technology concept with monetary symbols in a digital environment, photo by metamorworks/Adobe Stock


This commentary originally appeared on The Hill on August 30, 2023.

It is impossible to ignore the critical role that artificial intelligence (AI) and its subset, machine learning, play in the stock market today.

While AI refers to machines that can perform tasks that would normally require human intelligence, machine learning (ML) involves learning patterns from data, which enhances the machines' ability to make predictions and decisions.

One of the main ways the stock market uses machine learning is in algorithmic trading. The ML models recognize patterns in vast amounts of financial data, then make trades based on those patterns: thousands upon thousands of trades, in small fractions of a second. These models also learn continually, adjusting their predictions and actions as new data arrives. That feedback can sometimes produce phenomena like flash crashes, when certain patterns trigger a self-reinforcing loop that sends segments of the market into a sudden freefall.

Algorithmic trading, despite its occasional drawbacks, has become indispensable to our financial system. It has enormous upside, which is another way of saying that it makes some people an awful lot of money. According to the technology services company Exadel, banks stand to save $1 trillion by 2030 thanks to algorithmic trading.


Such reliance on machine learning models in finance is not without risks, however—risks beyond flash crashes, even.

One significant and underappreciated threat to these systems is what's known as adversarial attacks. These occur when malevolent actors manipulate the input data that is fed to the ML model, causing the model to make bad predictions.

One form of adversarial attack is known as "data poisoning," wherein bad actors introduce "noise" (false or mislabeled data) into the model's training input. Training on this poisoned data can then cause the model to systematically misclassify. For instance, a credit card fraud system might flag legitimate transactions as fraudulent.
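To make the mechanism concrete, here is a minimal, hypothetical sketch of how mislabeled training data can shift a model's decisions. The toy "fraud detector," the transaction amounts, and the labels are all invented for illustration; a real system would be far more complex, but the failure mode is the same.

```python
import statistics

def train_threshold(transactions):
    """Learn a toy fraud threshold: the midpoint between the mean
    amount of legitimate and fraudulent training transactions."""
    legit = [amt for amt, label in transactions if label == "legit"]
    fraud = [amt for amt, label in transactions if label == "fraud"]
    return (statistics.mean(legit) + statistics.mean(fraud)) / 2

def is_flagged(amount, threshold):
    """Flag any transaction above the learned threshold as fraud."""
    return amount > threshold

# Clean training data: small legitimate purchases, large fraudulent ones.
clean = [(20, "legit"), (35, "legit"), (50, "legit"),
         (900, "fraud"), (1100, "fraud"), (1000, "fraud")]

# Poisoned copy: an attacker injects "noise" -- ordinary amounts
# falsely labeled as fraud -- dragging the learned threshold down.
poisoned = clean + [(60, "fraud"), (70, "fraud"), (80, "fraud")]

t_clean = train_threshold(clean)        # midpoint of 35 and 1000: 517.5
t_poisoned = train_threshold(poisoned)  # fraud mean drops, so threshold drops

# A routine $400 purchase is treated very differently by the two models.
print(is_flagged(400, t_clean))     # False: correctly ignored
print(is_flagged(400, t_poisoned))  # True: wrongly flagged as fraud
```

A handful of poisoned examples is enough to move the decision boundary, which is why this class of attack is hard to catch by inspecting predictions alone.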

Such manipulations are not just a theoretical threat. The effects of data poisoning and adversarial attacks have broad implications across machine learning applications, including financial forecast models. Researchers at the University of Illinois, IBM, and other institutions demonstrated the vulnerability of financial forecast models to adversarial attacks. According to their findings (PDF), these attacks could lead to suboptimal trading decisions, resulting in losses of 23 percent to 32 percent for investors. This study highlights the potential severity of these threats and underscores the need for robust defenses against adversarial attacks.


The financial industry's reaction to these attacks has often been reactive—a game of whack-a-mole in which defenses are raised only after an attack has occurred. However, given that these threats are inherent in the very structure of ML algorithms, a more-proactive approach is the only way of meaningfully addressing this ongoing problem.

Financial institutions need to implement robust and efficient testing and evaluation methods that can detect potential weaknesses and safeguard against these attacks. Such implementation could involve rigorous testing procedures, employing “red teams” to simulate attacks, and continually updating the models to ensure they're not compromised by malicious actors or poor data.
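One piece of such proactive testing can be sketched in code. The example below is a hypothetical "red team" probe, invented for illustration: it perturbs a model's inputs by a small budget and measures how often the prediction flips. The toy `forecast` function stands in for a real ML model, and the price series are made up.

```python
import random

def forecast(prices):
    """Toy forecaster: predicts 'up' if the recent average price beats
    the older average, else 'down' (a stand-in for a real ML model)."""
    older, recent = prices[: len(prices) // 2], prices[len(prices) // 2:]
    return "up" if sum(recent) / len(recent) > sum(older) / len(older) else "down"

def red_team(model, prices, budget=0.01, trials=200, seed=0):
    """Probe the model with small random input perturbations (within a
    `budget` fraction per price) and report the fraction of trials in
    which the prediction flips: a crude robustness score."""
    rng = random.Random(seed)
    baseline = model(prices)
    flips = 0
    for _ in range(trials):
        noisy = [p * (1 + rng.uniform(-budget, budget)) for p in prices]
        if model(noisy) != baseline:
            flips += 1
    return flips / trials

stable = [100, 101, 102, 110, 111, 112]      # clear uptrend
fragile = [100, 100, 100, 100, 100, 100.01]  # borderline case

print(red_team(forecast, stable))   # low flip rate: robust to small noise
print(red_team(forecast, fragile))  # high flip rate: vulnerable to manipulation
```

A high flip rate under tiny perturbations is exactly the kind of weakness an adversary would exploit, so flagging it before deployment is the point of the exercise.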

The consequences of ignoring the problem of adversarial attacks in algorithmic trading are potentially catastrophic, from significant financial losses to damaged reputations for firms, or even widespread economic disruption. In a world increasingly reliant on ML models, the financial sector needs to shift from being reactive to proactive to ensure the security and integrity of our financial system.


Joshua Steier is a technical analyst, and Sai Prathyush Katragadda is a data scientist, at the nonprofit, nonpartisan RAND Corporation.