Artificial Intelligence

Featured

Artificial intelligence generally refers to technology, machines, and software that can act autonomously and learn from their actions. In the early days of computing, RAND researchers studied and sought to develop such technology and apply it to political and military simulations.

  • Multimedia

    Challenges to U.S. National Security and Competitiveness Posed by AI

    An overview of testimony by Jason Matheny presented before the U.S. Senate Committee on Homeland Security and Governmental Affairs on March 8, 2023.

    Mar 8, 2023

  • Report

    Machine Learning in Public Policy

    Machine learning (ML) can have a significant impact on public policy by modeling complex relationships and augmenting human decisionmaking. But overconfidence in results and incorrectly interpreted algorithms can lead to peril. With interpretability, ML can achieve its promise of more-equitable policy decisions.

    Nov 15, 2022

Explore Artificial Intelligence

  • Artificial eye looking through greenery

    Commentary

    Does the United States Face an AI Ethics Gap?

    Instead of worrying about an artificial intelligence “ethics gap,” U.S. policymakers and the military community could embrace a leadership role in AI ethics. This may help ensure that the AI arms race doesn't become a race to the bottom.

    Jan 11, 2019

  • A robot's hand selecting a candidate photograph

    Commentary

    Intentional Bias Is Another Way Artificial Intelligence Could Hurt Us

    Conversations about unconscious bias in artificial intelligence often focus on algorithms unintentionally causing disproportionate harm to entire swaths of society. But the problem could run much deeper. Society should be on guard for the possibility that nefarious actors could deliberately introduce bias into AI systems.

    Oct 22, 2018

  • Face detection and recognition

    Commentary

    Keeping Artificial Intelligence Accountable to Humans

    Artificial intelligence (AI) systems are often only as intelligent and fair as the data used to train them. To enable AI that frees humans from bias instead of reinforcing it, experts and regulators must think more deeply not only about what AI can do, but what it should do—and then teach it how.

    Aug 20, 2018

  • Journal Article

    Intelligence in a Data-Driven Age

    The Intelligence Community is nearing critical decisions on artificial intelligence and machine learning.

    Jul 13, 2018

  • A drone flies over the ocean at dawn

    Commentary

    New Technologies Could Help Small Groups Wreak Large-Scale Havoc

    Lone wolves or small groups could use emerging technologies, such as drones or AI, for nefarious purposes. The threat is even greater when these technologies are used along with disinformation spread over social media.

    Jun 18, 2018

  • Osonde Osoba in a RAND panel discussion in Pittsburgh, Pennsylvania, February 20, 2018

    Q&A

    The Human Side of Artificial Intelligence: Q&A with Osonde Osoba

    Osonde Osoba has been exploring AI since age 15. He says it's less about the intelligence and more about being able to capture how humans think. He is developing AI to improve planning and is also studying fairness in algorithmic decisionmaking in insurance pricing and criminal justice.

    May 1, 2018

  • A robot arm moves its index finger toward a nuclear button

    Commentary

    Will Artificial Intelligence Undermine Nuclear Stability?

    In the coming years, AI-enabled progress in tracking and targeting adversaries' nuclear weapons could undermine the foundations of nuclear stability. The chance that AI will someday be able to guide strategy decisions about escalation or even launching nuclear weapons is real.

    May 1, 2018

  • Periodical

    RAND Review: May-June 2018

    This issue features research on preventing child abuse and neglect and improving outcomes for children in the U.S. child-welfare system; a look back on RAND's 70 years of innovation; and an exploration of the human side of artificial intelligence.

    Apr 30, 2018

  • Report

    Discontinuities and Distractions: Rethinking Security for the Year 2040

    As part of its Security 2040 initiative, RAND convened a workshop of experts to discuss trends that could shape the world through the year 2040.

    Apr 27, 2018

  • Events @ RAND Audio Podcast

    Multimedia

    Security 2040: The Promise and Perils of AI, 3D Printing, and Speed

    Emerging technologies, such as artificial intelligence and 3D printing, will pose new risks to global security. In this Events @ RAND podcast, multidisciplinary teams of experts discuss some of the most crucial trends and how to harness their potential.

    Apr 24, 2018

  • News Release

    By 2040, Artificial Intelligence Could Upend Nuclear Stability

Artificial intelligence has the potential to upend the foundations of nuclear deterrence by the year 2040. While AI-controlled doomsday machines are considered unlikely, the hazards of artificial intelligence for nuclear security lie instead in its potential to encourage humans to take apocalyptic risks.

    Apr 24, 2018

  • AI robot pressing a nuclear launch button.

    Article

    How Artificial Intelligence Could Increase the Risk of Nuclear War

    Advances in AI have provoked a new kind of arms race among nuclear powers. This technology could challenge the basic rules of nuclear deterrence and lead to catastrophic miscalculations.

    Apr 24, 2018

  • Artificial intelligence playing Go

    Report

    How Might Artificial Intelligence Affect the Risk of Nuclear War?

    Experts agree that AI has significant potential to upset the foundations of nuclear security. But there are also ways that machines could help ease distrust among international powers and decrease the risk of nuclear war.

    Apr 24, 2018

  • Abstract map of earth with futuristic, technological details

    Project

    Exploring the Future of Global Security

    What should security policymakers be planning for in the next 20 years? Security 2040 aims to answer that question by exploring how new technologies, evolving trends, and big ideas are shaping the future of global security.

    Apr 23, 2018

  • William Welser IV, Rebecca Balebako, and Osonde Osoba in a RAND panel discussion in Pittsburgh, Pennsylvania, February 20, 2018

    Blog

    'Alexa, What Do You Know About Me, and Who Are You Telling?'

    RAND experts held a wide-ranging discussion about artificial intelligence and privacy. They raised questions about fairness and equity regarding privacy and data use, while also highlighting positive trends and developments across the evolving AI-privacy landscape.

    Mar 1, 2018

  • Composite image of a hand holding a digital device locked with a padlock symbol

    Multimedia

    The Collision of AI and Privacy

    In this Events @ RAND podcast, RAND experts discuss risks to privacy in the age of artificial intelligence.

    Feb 20, 2018

  • A child poses with a Lego Boost set, a predicted top seller this Christmas, at the Hamleys toy store in London, Britain, October 12, 2017

    Commentary

    Smart Toys May Pose Risks

    Parents shouldn't avoid buying smart toys during the holidays, particularly if these devices top children's Christmas lists. But parents should definitely be wary of the security and privacy risks that smart toys can pose.

    Dec 21, 2017

  • Robots working in a factory

    Report

    The Risks of AI to Security and the Future of Work

    As artificial intelligence (AI) becomes more prevalent in the domains of security and employment, what are the policy implications? What effects might AI have on cybersecurity, criminal and civil justice, and labor market patterns?

    Dec 6, 2017

  • Robots working with cardboard boxes on a conveyor belt

    Commentary

    AI's Promise and Risks

    Artificial intelligence seems to be advancing faster than efforts to understand its potential consequences, good and bad. And discussions about AI often veer toward extremes. More balanced, rigorous analysis is needed to help shape policies that mitigate AI's risks and maximize its benefits.

    Oct 24, 2017

  • Scales of justice in front of computer monitors with code

    Commentary

    The Intersection of Algorithms and an Individual's Rights

    Data collection, and our reliance on it, have evolved rapidly. The resulting algorithms have proved invaluable for organizing, evaluating, and utilizing information. But how do individuals' rights come into play when data about their lives are compiled to create algorithms, and the resulting tools are applied to judge them?

    Sep 29, 2017
