Platforms Should Use Algorithms to Help Users Help Themselves

Commentary

Jul 20, 2021

Photo by Vladimir Vladimirov/Getty Images

This commentary originally appeared on the Carnegie Endowment for International Peace website on July 20, 2021.

All social media platforms are built on the posting and sharing of user-generated content. The trouble starts when that content is false, harmful, or both. Each platform has specific rules against objectionable content, but enforcing those rules at scale is extremely difficult.

Social media users generate massive volumes of content, which then spreads at extraordinary speeds. Yet platforms generally rely on a slow process of human moderation to remove prohibited content. In most cases, moderators review content only after it has already been posted and then identified as potentially objectionable (either by other users or an algorithm). This post hoc process means that hateful, violent, or false material can spread wildly before it is flagged, reviewed, and finally removed.

What if moderation could happen before the content is even posted? There is a way: platforms could build systems that prompt users to self-moderate before they post objectionable content. Platforms are cautiously experimenting with this approach but have only done so in a few narrow contexts (like when users want to share content that platforms have already determined is false) and haven't yet applied state-of-the-art technology. Platforms should prompt users regarding a wide range of problematic content—from hate speech to harassment—that would be identified using artificial intelligence. The technology already exists; major platforms only need to tailor, scale up, refine, and employ it.…
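To make the idea concrete, here is a minimal sketch of such a pre-posting prompt. Everything in it is hypothetical: the classify_content stand-in, the TOXICITY_THRESHOLD value, and the keyword check that substitutes for a trained classifier are illustrative assumptions, not any platform's actual system.

```python
from dataclasses import dataclass

# Hypothetical threshold; a real platform would calibrate this against
# its own policies and classifier validation data.
TOXICITY_THRESHOLD = 0.8

@dataclass
class ModerationResult:
    label: str    # e.g., "hate_speech", "harassment", "ok"
    score: float  # model confidence in [0, 1]

def classify_content(text: str) -> ModerationResult:
    """Placeholder for a trained classifier (e.g., a fine-tuned
    transformer). A trivial keyword check stands in for it here."""
    flagged_terms = {"insult_example"}  # stand-in vocabulary
    if any(term in text.lower() for term in flagged_terms):
        return ModerationResult("harassment", 0.95)
    return ModerationResult("ok", 0.05)

def pre_post_check(text: str) -> str:
    """Run the classifier BEFORE publishing. If the score crosses the
    threshold, return a prompt asking the user to reconsider instead
    of posting the content immediately."""
    result = classify_content(text)
    if result.label != "ok" and result.score >= TOXICITY_THRESHOLD:
        return (f"This post may contain {result.label.replace('_', ' ')}. "
                "Do you want to edit it before posting?")
    return "POSTED"

if __name__ == "__main__":
    print(pre_post_check("hello world"))         # publishes normally
    print(pre_post_check("you insult_example"))  # triggers the nudge
```

The key design point is that the check runs in the posting flow itself, so the user, rather than a moderator acting after the fact, gets the first chance to reconsider the content.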

The remainder of this commentary is available at carnegieendowment.org.


Christopher Paul is a senior social scientist at the nonprofit, nonpartisan RAND Corporation and a professor at the Pardee RAND Graduate School. Hilary Reininger is an assistant policy analyst at RAND and a doctoral student at the Pardee RAND Graduate School.