The speed of online recruitment by violent extremist organizations has outpaced existing counter-radicalization efforts, contributing to global instability and violence. The U.S. government's counter-radicalization messaging enterprise may benefit from promising emerging technology tools, particularly bots, to rapidly detect the targets of such recruitment efforts and deliver counter-radicalization content to them.
Download eBook for Free
| Format | File Size | Notes |
|---|---|---|
| PDF file | 1.1 MB | Use Adobe Acrobat Reader version 10 or higher for the best experience. |
Purchase Print Copy
| Format | List Price | Web Price |
|---|---|---|
| Paperback, 154 pages | $28.00 | $22.40 (20% web discount) |
Research Questions
- What are the possible applications of bot technology for countering violent extremist organizations (VEOs) that conduct online recruitment?
- What are the legal and ethical implications of these applications?
- How should the U.S. government weigh the potential high rewards of implementing bot programs against the equally high risks of such an enterprise?
The speed and diffusion of online recruitment for such violent extremist organizations (VEOs) as the Islamic State of Iraq and the Levant (ISIL) have challenged existing efforts to effectively intervene and engage in counter-radicalization in the digital space. This problem contributes to global instability and violence. ISIL and other groups identify susceptible individuals through open social media (SM) dialogue and eventually seek private conversations online and offline for recruiting. This shift from open and discoverable online dialogue to private and discreet recruitment can occur quickly and offers a short window for intervention before the conversation and the targeted individuals disappear.
The counter-radicalization messaging enterprise of the U.S. government may benefit from a sophisticated capability to rapidly detect targets of VEO recruitment efforts and deliver counter-radicalization content to them. In this report, researchers examine the applicability of promising emerging technology tools, particularly automated SM accounts known as bots, to this problem. They assess the feasibility and advisability of the U.S. government employing social bot technology for counter-radicalization and related purposes; their work also has implications for countering the growing threats of state-sponsored propagandists conducting disinformation campaigns and of U.S. domestic extremists being radicalized online. The analysis draws on interviews with subject-matter experts from industry, government, and academia; reviews of the legal and ethical considerations of using bots; the literature on the development and application of bot technology; and case studies of past uses of social bots to influence individuals, gather information, and conduct messaging campaigns.
Key Findings
Bots should be tailored for use in specific environments
- The platforms, cultures, and governmental regimes in which a bot is deployed matter.
- A social bot's profile characteristics, such as apparent social influence and group identity, should be taken into account.
- The network characteristics of users that a bot is attempting to befriend or influence, such as friend counts and network density, must be factored in.
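The network characteristics mentioned above can be made concrete with a small sketch. This is purely illustrative and not from the report: the function names and the edge-list representation are invented here, and density is computed with the standard formula for an undirected network, 2|E| / (n(n − 1)).

```python
def network_density(nodes, edges):
    """Density of an undirected network: observed edges divided by
    the maximum possible edges, 2|E| / (n * (n - 1))."""
    n = len(nodes)
    if n < 2:
        return 0.0
    return 2 * len(edges) / (n * (n - 1))

def friend_counts(nodes, edges):
    """Number of connections (degree) per user."""
    counts = {node: 0 for node in nodes}
    for a, b in edges:
        counts[a] += 1
        counts[b] += 1
    return counts

# Four users, three friendships (3 of 6 possible edges).
users = ["u1", "u2", "u3", "u4"]
links = [("u1", "u2"), ("u2", "u3"), ("u1", "u3")]

print(network_density(users, links))  # -> 0.5
print(friend_counts(users, links))    # u4 is isolated, count 0
```

In practice such features would be one input among many to a tailoring decision, alongside the platform, culture, and regime factors listed above.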
Bot programs raise legal and ethical issues
- Bot programs, even if used exclusively domestically, have international consequences, potentially setting precedents that normalize other states' actions.
- The U.S. government must integrate information it collects via bots into established mechanisms for collecting information and protecting privacy.
- The U.S. government should not use a bot to conduct actions that would be legally or ethically prohibited if conducted without one, but maintaining honesty and transparency will alleviate some degree of ethical risk.
- Seeking permission from social media platforms before deploying bots might help avoid straining relations with those companies.
Assessing bot concepts of operation for risks and opportunities can help determine which bots to use
- The use of bots is a viable approach for a range of technologically feasible and effective interventions.
- There is increased risk of unexpected negative outcomes if human oversight is not involved. Decisionmakers must carefully weigh the risks and potential rewards of proposed automated bot programs.
Recommendations
- Analyze the international precedent that may be set by any proposed U.S. government bot program to avoid normalizing other states' invasive actions and behaviors that erode cybersecurity by interfering with the confidentiality, integrity, or availability of information online.
- In response to concerns about the Establishment Clause, free speech, privacy, and the Smith-Mundt Act, focus engagement on narrowly targeted audiences of concern abroad; avoid targeting users based on religious criteria; and, where appropriate, erect firewalls between certain bot programs and law enforcement, intelligence agencies, or international partners.
- With respect to SM platforms' terms of service and possible violations, seek companies' permission before deploying bots whenever necessary and practicable.
- Given the likelihood of U.S.-sponsored bot activities becoming public knowledge, make U.S. government bot operations as transparent as possible, within operational constraints.
- To ensure legal compliance, conduct specific legal review for each bot deployment operation, under the applicable titles.
- Communicate across agency lines about bot technology initiatives to develop a common conceptual framework and cross-agency operating picture.
- Conduct a full interagency legal review regarding principles that U.S. government bot programs should follow.
- Promulgate doctrine about how U.S. government actors intend to conduct operations to maximize transparency even while protecting sensitive operational details.
- Test the efficacy and advisability of bot programs gradually by collaborating with nongovernmental organizations or partner nations or by implementing an internal-facing bot program.
- Promote bot-detection technologies to make it harder for adversaries to engage in bot-enabled deception.
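To make the final recommendation concrete, here is a toy heuristic bot score in the spirit of the bot-detection technologies it refers to. Everything here is an assumption for illustration: the feature names, thresholds, and scoring scheme are invented, and real detectors typically use supervised classifiers over far richer behavioral and network features.

```python
def bot_score(account):
    """Return a score from 0.0 to 1.0; higher suggests more bot-like
    behavior. Thresholds below are illustrative, not empirical."""
    signals = 0
    total = 3
    if account["posts_per_day"] > 100:                    # inhumanly high posting rate
        signals += 1
    if account["followers"] < account["following"] / 10:  # follow-spam pattern
        signals += 1
    if account["account_age_days"] < 7:                   # very new account
        signals += 1
    return signals / total

suspect = {"posts_per_day": 250, "followers": 12,
           "following": 4000, "account_age_days": 3}
print(bot_score(suspect))  # all three signals fire -> 1.0
```

Even a crude score like this illustrates the dual-use point in the recommendation: the same signals that help platforms flag adversary bots would also flag U.S. government bots, which is one reason transparency and doctrine matter.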
Table of Contents
Chapter One
Social Chatbots: An Introduction
Chapter Two
Current Status of Bot Technology
Chapter Three
Potential Legal and Ethical Risks
Chapter Four
Concepts of Operation and Assessment
Chapter Five
Recommendations
Appendix A
Technology Review: Methods and Goals
Research conducted by
This research was sponsored by the U.S. Department of State and the Combating Terrorism Technical Support Office and conducted within the International Security and Defense Policy Center of the RAND National Security Research Division (NSRD), which operates the RAND National Defense Research Institute (NDRI).
This report is part of the RAND Corporation Research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.