How Can Platforms Deal with Toxic Content? Look to Wall Street



by James V. Marrone

May 26, 2023

On Thursday, the Supreme Court opted to keep in place a law that shields tech platforms from liability for hosting toxic content. The decision came in a ruling that the platforms cannot face certain legal accusations of aiding and abetting terrorism. The message is clear: the path to reforming tech platforms is not through the courts.

Instead, Congress can and should regulate the industry. And there's already a regulatory framework for doing so that accounts for freedom of speech concerns—namely, a risk-based framework like the one that was used for Wall Street after 2008.

A risk-based framework acknowledges the realities of social media content moderation: No system will be perfect. Platforms already moderate content, and companies such as Meta employ thousands of people and spend billions of dollars doing it. Yet toxic material still reaches users' feeds: misinformation, propaganda, conspiracy theories, hate speech, and incitement to violence, to name just a few. The problem is the sheer volume of content, far more than any group of humans can review, even with algorithmic help.

The risk of toxic content is a constant, yet we navigate risks every day, sometimes without much thought. You might risk jaywalking on a city street, but you probably won't risk walking across an interstate highway. The question is one of degree: how much risk is one willing to accept? How many cars, how heavy the traffic, how wide the road?

On social media, the question is quite similar: how much is too much? It might be relatively benign and even acceptable to allow some political propaganda to slip through. But allowing a coordinated foreign propaganda campaign to spread across multiple platforms is almost certainly too much risk—although Russian operatives have repeatedly succeeded in doing just that, in 2016, 2020, and again in 2022.

The problem tech companies face in assessing the risk of toxic content is identifying that threshold. What, in other words, presents too much risk? And what is a generally acceptable level? Roads and cars have laws and traffic patterns. Content on the internet is slipperier. Just who defines the risk threshold, and on what basis, are the questions at the heart of this vexing problem.

Fortunately, these questions aren't particularly new. The United States has been here before, just in a different sector. Leading up to the 2008 financial crisis, Wall Street foreshadowed the tech sector in several ways, most of all in its growing reliance on algorithms. Whereas social media companies use algorithms to assist humans in content moderation and to determine whether an account is a bot, financial firms used them (and still do) to calculate portfolio risk. Just like tech's algorithms today, the financial algorithms of 2008 had serious limitations. The newest financial products were so complex that their prices could not really be calculated. Even the ratings that investors used to gauge riskiness were poorly understood. As a result, risk estimates were imprecise and opaque, even to the firms themselves.

Instead of formally addressing the financial system's risks through regulation, government officials trusted markets to develop and enforce their own standards through “self-regulatory organizations” (SROs) such as stock exchanges. Government officials granted SROs regulatory authority and believed that they would properly address the risks of new financial products. Here, too, the parallels with today's social media debate are striking. Professors at Harvard and the University of Chicago have recently suggested that SROs are the best regulatory option for the tech sector. The success of that argument depends on the potential of still-nascent tech SROs such as the Digital Trust and Safety Partnership to define and monitor industry best practices for safety and content moderation. But tech SROs appear even more toothless than their financial counterparts, as they are self-organized by the firms themselves and lack any government-sanctioned regulatory authority.

History proved time and again that self-regulation was insufficient to mitigate financial risks. So why should it work for tech? In fact, history already shows that it doesn't. Twitter's decision to replatform former President Donald Trump, for example, shows that companies can easily backtrack on their own content moderation decisions with no repercussions. Gaps in content moderation allow malicious content to jump from fringe sites into the mainstream. Self-regulation isn't closing those gaps and likely won't in the future, because niche and fringe sites explicitly refuse to self-regulate. Such sites will always remain untethered by any industry standard, serving as ground zero for new conspiracy theories and misinformation.

Neither the judicial system nor the industry itself will rein in the risks of tech platforms. Instead, the regulation of Wall Street after 2008 offers a way forward. The 2010 Dodd-Frank Act, for example, mandated that financial systemic risk be monitored by a regulatory oversight body, now called the Financial Stability Oversight Council. The Dodd-Frank framework offers several benefits for tech regulation. First, the framework mandates transparency, as audits preserve companies' privacy while ensuring fair comparisons by the regulator. And second, the framework offers flexibility, as regulatory standards can evolve over time to adapt to new technologies, such as deepfakes, and to new uses of existing technology, like the use of emojis as code for drugs.

European Union regulators have already shown what a tech version of Dodd-Frank might look like, with the passage of the landmark Digital Services Act. The DSA contains several stipulations that echo Dodd-Frank. It creates an independent Board for Digital Services Coordinators, much like the FSOC; it provides for annual audits of tech platforms, which might look much like the Fed's stress tests; it manages access to platforms' data, analogous to regulators' access to bank data; and it designates “very large online platforms” that are subject to additional regulation, similar to FSOC designation of systemically important financial institutions.

Putting risk at the center of a regulatory framework forces society to accept what is feasible, rather than what is ideal. As a society, we might ask how much risk we are willing to tolerate in exchange for the huge benefits of globally connected platforms—while taking, from recent history, lessons on regulating and managing that risk.


James Marrone is an economist at the nonprofit, nonpartisan RAND Corporation.

This commentary originally appeared on Barron’s on May 19, 2023. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.