Artificial Intelligence

Featured

Artificial intelligence generally refers to technology, machines, and software capable of directing themselves and learning from their actions. In the early days of computing, RAND researchers studied and sought to develop such technology and to apply it in political and military simulations.

  • Network illustrations depicting online conspiracy theories, images by miakievy and Cecilia Escudero/Getty Images

    Report

    Machine Learning Can Detect Online Conspiracy Theories

    Apr 29, 2021

    As social media platforms work to prevent malicious or harmful uses of their services, an improved model of machine-learning technology can detect and understand conspiracy theory language. Insights from this modeling effort can help counter the effects of online conspiracies.

  • A Flight Commander Course student interacts with artificial intelligence in a live simulation on Joint Base McGuire-Dix-Lakehurst, New Jersey, September 27, 2019, images by A1C Ariel Owings/U.S. Air Force and Jamesteohart/Adobe Stock; design by Carol Ponce/RAND Corporation

    Report

    Exploring the Civil-Military Divide over Artificial Intelligence

    May 11, 2022

    Artificial intelligence is anticipated to be a key capability for enabling the U.S. military to maintain its dominance. How do software engineers and other technical staff in the industry view the defense community? Are they willing to contribute to AI-related projects for military use?

Explore Artificial Intelligence

  • Periodical

    RAND Review: March-April 2019

    This issue explores resilience and adaptation strategies researchers can pursue to address the impacts of climate change; security challenges posed by artificial intelligence and the speed at which technology is transforming society; and more.

    Mar 4, 2019

  • Cyborg head using artificial intelligence to create digital interface 3D rendering, image by sdecoret/Adobe Stock

    Q&A

    The Promise and Perils of AI: Q&A with Douglas Yeung

    Douglas Yeung, a social psychologist at RAND, discusses how any technology reflects the values, norms, and biases of its creators. Bias in artificial intelligence could have unintended consequences. He also warns that cyber attackers could deliberately introduce bias into AI systems.

    Feb 27, 2019

  • Blog

    Korea, Climate, AI in the Classroom: RAND Weekly Recap

    This weekly recap focuses on four problems on the Korean Peninsula, triaging climate change, using artificial intelligence in the classroom, and more.

    Jan 25, 2019

  • Teacher using tablet computer in elementary school lesson

    Report

    Artificial Intelligence Applications to Support Teachers

    Artificial intelligence could support teachers rather than replace them. But before the promise of AI in the classroom can be realized, risks and technical challenges must be addressed.

    Jan 23, 2019

  • Artificial eye looking through greenery

    Commentary

    Does the United States Face an AI Ethics Gap?

    Instead of worrying about an artificial intelligence “ethics gap,” U.S. policymakers and the military community could embrace a leadership role in AI ethics. This may help ensure that the AI arms race doesn't become a race to the bottom.

    Jan 11, 2019

  • A robot's hand selecting a candidate photograph

    Commentary

    Intentional Bias Is Another Way Artificial Intelligence Could Hurt Us

    Conversations about unconscious bias in artificial intelligence often focus on algorithms unintentionally causing disproportionate harm to entire swaths of society. But the problem could run much deeper. Society should be on guard for the possibility that nefarious actors could deliberately introduce bias into AI systems.

    Oct 22, 2018

  • Face detection and recognition

    Commentary

    Keeping Artificial Intelligence Accountable to Humans

    Artificial intelligence (AI) systems are often only as intelligent and fair as the data used to train them. To enable AI that frees humans from bias instead of reinforcing it, experts and regulators must think more deeply not only about what AI can do, but what it should do—and then teach it how.

    Aug 20, 2018

  • Journal Article

    Intelligence in a Data-Driven Age

    The Intelligence Community is nearing critical decisions on artificial intelligence and machine learning.

    Jul 13, 2018

  • A drone flies over the ocean at dawn

    Commentary

    New Technologies Could Help Small Groups Wreak Large-Scale Havoc

    Lone wolves or small groups could use emerging technologies, such as drones or AI, for nefarious purposes. The threat is even greater when these technologies are used along with disinformation spread over social media.

    Jun 18, 2018

  • Osonde Osoba in a RAND panel discussion in Pittsburgh, Pennsylvania, February 20, 2018

    Q&A

    The Human Side of Artificial Intelligence: Q&A with Osonde Osoba

    Osonde Osoba has been exploring AI since age 15. He says it's less about the intelligence and more about being able to capture how humans think. He is developing AI to improve planning and is also studying fairness in algorithmic decisionmaking in insurance pricing and criminal justice.

    May 1, 2018

  • A robot arm moves its index finger toward a nuclear button

    Commentary

    Will Artificial Intelligence Undermine Nuclear Stability?

    In the coming years, AI-enabled progress in tracking and targeting adversaries' nuclear weapons could undermine the foundations of nuclear stability. The chance that AI will someday be able to guide strategy decisions about escalation or even launching nuclear weapons is real.

    May 1, 2018

  • Periodical

    RAND Review: May-June 2018

    This issue features research on preventing child abuse and neglect and improving outcomes for children in the U.S. child-welfare system; a look back on RAND's 70 years of innovation; and an exploration of the human side of artificial intelligence.

    Apr 30, 2018

  • "Global connections" 3D rendering

    Report

    Discontinuities and Distractions: Rethinking Security for the Year 2040

    As part of its Security 2040 initiative, RAND convened a workshop of experts to discuss trends that could shape the world through the year 2040.

    Apr 27, 2018

  • Events @ RAND Audio Podcast

    Multimedia

    Security 2040: The Promise and Perils of AI, 3D Printing, and Speed

    Emerging technologies, such as artificial intelligence and 3D printing, will pose new risks to global security. In this Events @ RAND podcast, multidisciplinary teams of experts discuss some of the most crucial trends and how to harness their potential.

    Apr 24, 2018

  • News Release

    By 2040, Artificial Intelligence Could Upend Nuclear Stability

    Artificial intelligence has the potential to upend the foundations of nuclear deterrence by the year 2040. While AI-controlled doomsday machines are considered unlikely, the hazards of artificial intelligence for nuclear security lie instead in its potential to encourage humans to take apocalyptic risks.

    Apr 24, 2018

  • Artificial intelligence playing Go

    Report

    How Might Artificial Intelligence Affect the Risk of Nuclear War?

    Experts agree that AI has significant potential to upset the foundations of nuclear security. But there are also ways that machines could help ease distrust among international powers and decrease the risk of nuclear war.

    Apr 24, 2018

  • Abstract map of earth with futuristic, technological details

    Project

    Exploring the Future of Global Security

    What should security policymakers be planning for in the next 20 years? Security 2040 aims to answer that question by exploring how new technologies, evolving trends, and big ideas are shaping the future of global security.

    Apr 23, 2018

  • AI robot pressing a nuclear launch button.

    Article

    How Artificial Intelligence Could Increase the Risk of Nuclear War

    Advances in AI have provoked a new kind of arms race among nuclear powers. This technology could challenge the basic rules of nuclear deterrence and lead to catastrophic miscalculations.

    Apr 23, 2018

  • William Welser IV, Rebecca Balebako, and Osonde Osoba in a RAND panel discussion in Pittsburgh, Pennsylvania, February 20, 2018

    Blog

    'Alexa, What Do You Know About Me, and Who Are You Telling?'

    RAND experts held a wide-ranging discussion about artificial intelligence and privacy. They raised questions about fairness and equity regarding privacy and data use, while also highlighting positive trends and developments across the evolving AI-privacy landscape.

    Mar 1, 2018

  • Composite image of a hand holding a digital device locked with a padlock symbol

    Multimedia

    The Collision of AI and Privacy

    In this Events @ RAND podcast, RAND experts discuss risks to privacy in the age of artificial intelligence.

    Feb 20, 2018
