Is AI an Existential Risk? Q&A with RAND Experts


Mar 11, 2024

A brain made of circuit boards inside glowing blue and purple geometric shapes. Photo by SpiffyJ/Getty Images

What are the potential risks associated with artificial intelligence? Might any of these be catastrophic or even existential? And as momentum builds toward boundless applications of this technology, how might humanity reduce AI risk and navigate an uncertain future?

At a recent RAND event, a panel of five experts explored these emerging questions. While the gathering highlighted RAND's diversity in academic disciplines and perspectives, the panelists were unsurprisingly unanimous that independent, high-quality research will play a pivotal role in exploring AI's short- and long-term risks—as well as the implications for public policy.

What do you view as the biggest risks posed by AI?

BENJAMIN BOUDREAUX AI could pose a significant risk to our quality of life and the institutions we need to flourish. The risk I'm concerned about isn't a sudden, immediate event. It's a set of incremental harms that worsen over time. Like climate change, AI might be a slow-moving catastrophe, one that diminishes the institutions and agency we need to live meaningful lives.

This risk doesn't require superintelligence, artificial general intelligence, or AI sentience. Rather, it's a continuation and worsening of effects that are already happening. For instance, there's significant evidence that social media, whose feeds are shaped by AI recommendation systems, has serious effects on institutions and mental well-being.

AI seems to promote mistrust that fractures shared identities and a shared sense of reality. There's already evidence that AI has undermined the credibility and legitimacy of our election system. And there's significant evidence that AI has exacerbated inequity and bias. There's evidence that AI has impacted industries like journalism, producing cascading effects across society. And as AI becomes a driver of international competition, it could become harder to respond to other catastrophes, like another pandemic or climate change. As AI gets more capable, as AI companies become more powerful, and as we become dependent on AI, I worry that all those existing risks and harms will get worse.

JONATHAN WELBURN It's important to note that we've had previous periods in history with revolutionary technological shocks: electricity, the printing press, the internet. This moment is similar to those. AI will lead to a series of technological innovations, some of which we might be able to imagine now, but many that we won't be able to imagine.

AI might exacerbate existing risks and create new ones. I think about inequity and inequality. AI bias might undermine social and economic mobility. Racial and gender biases might be baked into models. Things like deepfakes might undermine our trust in institutions.

But I'm looking at more of a total system collapse as a worst-case scenario. The world in 2023 already had high levels of inequality. And so, building from a foundation where there's already a high concentration of wealth and power: that's where the potential worst-case scenario is for me. Capital owners hold all the newest technology that's about to be created, all the wealth, and all the decisionmaking power. And this is a system that undermines many democratic norms.

But I don't see this as a permanent state, necessarily. I see it as a temporary state that could have generations of harm. We would transition through this AI shock. But it's still up to humanity to create the policy solutions that prevent the worst-case scenario.

JEFF ALSTOTT There's real diversity in the risks we're all studying at RAND. And that doesn't mean that any of these risks are invalid or necessarily more or less important than the others. So, I agree with everything that Ben and Jon have said.

One of the risks that keeps me up at night is the resurrection of smallpox. The story here is formerly exquisite technical knowledge falling into the hands of bad actors. Bioweapons happen to be one example where, historically, the barriers have been information and knowledge. You no longer need much in the way of specialized matériel or expensive equipment to achieve devastating effects, such as launching a pandemic. AI could close that knowledge gap. And bio is just one example. The same story repeats with AI and chemical weapons, nuclear weapons, and cyber weapons.

Then, eventually, there's the issue of not just bad people doing bad things but AIs themselves running off and doing bad things, plus anything else they want to do—AI run amok.

NIDHI KALRA To me, AI is gas on the fire. I'm less concerned with the risk of AI than with the fires themselves—the literal fires of climate change and potential nuclear war, and the figurative fires of rising income inequality and racial animus. Those are the realities of the world. We've been living those for generations. And so, I'm not any more awake at night because of AI than I was already because of the wildfires in Canada, for instance.

But I do have a concern: What does the world look like when we, even more than is already the case today, can't distinguish fact from fiction? What if we can't distinguish a human being from something else? I don't know what that does to the kind of humanity that we've lived with for the entire existence of our species. I worry about a future in which we don't know who we are. We can't recognize each other. That vague foreboding of a loss of humanity is what, if anything, keeps me up. Otherwise, I think we'll be just as fine as we were yesterday.

EDWARD GEIST AI threatens to be an amplifier for human stupidity. That characterization captures the types of harms that are already occurring, like what Ben was discussing, but also more speculative types of harms. So, for instance, the idea of machines that do what you ask for—rather than what you wanted or should have asked for—or machines that make the same kind of mistakes that humans make, only faster and in larger quantities.

Some of you have addressed this implicitly, but let's tackle the question head on, as briefly as you can: With the caveat that there's still much uncertainty surrounding AI, do you think it poses an existential risk?

WELBURN No. I don't think AI poses an irreversible harm to humanity. I think it can worsen our lives. I think it can have long-lasting harm. But I think it's ultimately something that we can recover from.

KALRA I second Jon: No. We are an incredibly resilient species, looking back over millions of years. I think that's not to be taken lightly.

BOUDREAUX Yes. An existential risk is an unrecoverable harm to humanity's potential. One way that could happen is if humans die out. The other way is if we no longer engage in meaningful human activity, if we no longer have embodied experience, if we're no longer connected to our fellow humans. That, I think, is the existential risk of AI.

ALSTOTT Yes.

GEIST I'm not sure, and here's why: I'm a nuclear strategist, and I've learned through my studies just how hard it can be to tell the difference.

For example, is the hydrogen bomb an existential risk? I think most laypeople would probably say, “Yes, of course.” But even using a straightforward definition of existential risk (humans go extinct), the answer isn't obvious.

The current scientific understanding is that the nuclear weapons that exist today could probably not be used in a way that would result in human extinction. That scenario would require more and bigger bombs. That said, a nuclear arsenal that could plausibly cause human extinction via fallout and other effects does appear to be something a country could build if it really wanted to.

Are there AI policies that reasonable people could consider and potentially agree on, despite thinking differently about the question of whether AI poses an existential risk?

KALRA Perhaps ensuring that we're not moving too quickly in integrating AI into our critical infrastructure systems.

BOUDREAUX Transparency or oversight, so we can actually audit tech companies on the claims they're making. They extol all these benefits of AI, so they should show us the data. Let us look behind the scenes, as much as we can, to see whether those claims are true. And this isn't just a role for researchers. For instance, the Federal Trade Commission might need greater funding to ensure that it can prohibit unfair and deceptive trade practices. I think just holding companies to the standards that they themselves have set could be a good step.

What else are you thinking about when it comes to the role that research can play as we move into this new era?

KALRA I want to see questions about AI policy problems asked in a very RAND-like way: What would we have to believe about our world and about the costs of various actions to prefer Action A over Action B? What's the quantity of evidence you need for this to be a good decision? Do we have anything resembling that level of evidence? What are the trade-offs if we're right? What are the trade-offs if we're wrong? That's the kind of framing I'd like to see in discussions of AI policy.

WELBURN The lack of diversity in the AI world is a huge concern. That leads to racial and gender biases in AI models themselves.

I think RAND can make diversity a strong part of our AI research agenda. That can come in part from bringing together a lot of different stakeholders and not just people with computer science degrees. How can we bring people like sociologists into this conversation, too?

GEIST I'd like to see RAND play the kind of vital role in these discussions about AI policy that we played in shaping policy that mitigated the threat of thermonuclear war back in the 1950s and 1960s. In fact, our predecessors invented methodologies back then that could either serve as the inspiration for or perhaps even be directly adapted to AI-related policy problems today.

Zooming out, I think humanity needs to lay out a research agenda that will get us to answer the right questions. Because until very recently, AI as an academic exercise has been pursued in a very ad hoc way. There hasn't been a systematic research agenda designed to answer some very concrete questions. It may be that there's more low-hanging fruit than is obvious if we frame the questions in very practical terms, especially now that there are so many more eyeballs on AI. The number of people trying to work on these problems has just exploded in the last few years.

ALSTOTT As Ed alludes to, it's long-standing practice here at RAND to look forward multiple decades and contemplate technologies that could exist in the future, so we can understand their potential implications and identify evidence-based actions that might need to be taken now to mitigate future threats. We have started doing this with the AI of today and tomorrow, and we need to do much more.

We also need a lot more of the science of AI threat assessment. RAND is starting to be known as the place that's doing that kind of analysis today. We just need to keep going. But we probably have at least a decade's worth of work and analysis that needs to happen. If we don't have it all sorted out ahead of time, whatever the threat is could land before we're ready, and then it's too late.

BOUDREAUX Researchers have a special responsibility to look at the harms and the risks. This isn't just looking at the technology itself but also at the context—looking into how AI is being integrated into criminal justice and education and employment systems, so we can see the interaction between AI and human well-being. More could also be done to engage affected stakeholders before AI systems are deployed in schools, health care, and so on. We also need to take a systemic approach, where we're looking at the relationship between AI and all the other societal challenges we face.

It's also worth thinking about building communities that are resilient to this broad range of crises. AI might play a role by fostering more human connection or providing a framework for deliberation on really challenging issues. But I don't think there's a technical fix alone. We need to have a much broader view of how we build resilient communities that can deal with societal challenges.


Benjamin Boudreaux is a policy researcher who studies the intersection of ethics, emerging technology, and security.

Jonathan Welburn is a senior researcher who studies emerging systemic risks, cyber deterrence, and market failures.

Jeff Alstott is a senior information scientist and directs the RAND Center for Technology and Security Policy.

Nidhi Kalra is a senior information scientist whose research examines climate change mitigation, adaptation, and decarbonization planning, as well as decisionmaking amid deep uncertainty.

Edward Geist is a policy researcher whose interests include Russia, civil defense, AI, and the potential effect of emerging technologies on nuclear strategy.

Special thanks to Anu Narayanan, associate director of the RAND National Security Research Division, who moderated the discussion, and to Gary Briggs, who organized this event. Excerpts presented here were edited for length and clarity.