Pardee Center Speakers and Events
The RAND Pardee Center welcomes speakers and hosts events related to futures methodologies, decisionmaking, and similar topics. Pardee Center representatives also present at events to share the center's work.
Additionally, the Pardee Center invites experts in understanding, managing, and shaping complex adaptive systems to participate in internal discussions known as the Policy Analysis for Complex Systems (PACS) Speaker Series.
Recent Speakers

April 13, 2023
Sarah Fletcher, Ph.D.
Climate-Informed Adaptive Water Supply Planning
Assistant Professor of Civil and Environmental Engineering, Woods Institute for the Environment, Stanford University
Sarah Fletcher, Ph.D., is an assistant professor of Civil and Environmental Engineering, Woods Institute for the Environment, Stanford University. In her talk, Fletcher shared her progress toward a computational framework for climate-informed adaptive planning, using contrasting case studies in sub-Saharan Africa and California.
Abstract: Water planners face the challenge of ensuring reliable, affordable water supplies in a changing and uncertain climate. Adaptive planning approaches, in which planners delay or change action and respond as the climate changes over time, have the potential to enable reliability without unnecessary, expensive infrastructure development.
However, adaptive planning also poses a risk: When will we have learned enough to adapt? And what changes should we make? The answers are highly dependent on the local climate. Regions facing slow, long-term change in average precipitation require different responses than those facing increased frequency of short, intense droughts. In regions where decadal oscillations dominate precipitation variability, long-term trends are more difficult to discern and plan for.

January 25, 2023
Michael Gerst, Ph.D.
Using Visualization Science to Improve Usability of Decision Support Tools
Research Faculty, Earth System Science Interdisciplinary Center, University of Maryland
Michael Gerst, Ph.D., is a member of the research faculty at the Earth System Science Interdisciplinary Center, University of Maryland. In his talk, Gerst explored five visualization projects.
Abstract: As the effects of climate change have become more apparent to the general public, scientific information and data are increasingly being sought for use in decision support tools, which range from static weather forecast maps to interactive online flood risk estimators. Designing decision support tools is challenging because it requires coupling scientific advancement with stakeholder needs, which usually means balancing competing project priorities. While much progress has been made in structuring processes to pursue both advancement and usability, one often-overlooked problem is that best practices for visualizing information and data in a scientific setting can be very different from those in a decision-making setting, especially for communicating uncertainty. As decision support tools almost always involve some element of visual reasoning, this represents a significant problem for creating usable science. This gap exists in part because practitioner and visualization-science insights are infrequently synthesized in a way that is accessible to those who are not visualization experts, and in part because individual decision support projects are not always set up to test design choices in a way that improves stakeholder usability and contributes to broader visualization science.
Gerst presented five visualization projects that contribute to reducing this gap: (i) redesign of the NOAA Temperature and Precipitation Outlooks, (ii) testing of USGCRP climate indicators, (iii) redesign of the USGS Water Watch, (iv) redesign of the NIDIS Drought Outlook, and (v) design of a crop growing season interactive tool. In addition to discussing the thinking behind visualization design choices and stakeholder usability test results, he summarized lessons learned in structuring projects as case studies with clear experimental visualization questions that also meet stakeholder needs.

December 13, 2022
Edward A. (Ted) Parson
AI’s Societal Impacts and Governance: The Neglected (and Crucial) Mid-Range
Dan and Rae Emmett Professor of Environmental Law; Faculty Director of the Emmett Institute on Climate Change and the Environment at the University of California, Los Angeles
Ted Parson is Dan and Rae Emmett Professor of Environmental Law and Faculty Director of the Emmett Institute on Climate Change and the Environment at the University of California, Los Angeles. Parson studies international environmental law and policy, the societal impacts and governance of disruptive technologies including geoengineering and artificial intelligence, and the political economy of regulation. He leads the AI Pulse program at UCLA Law and organized the 2019 Summer Institute on AI and Society. His articles have appeared in scientific and scholarly journals in a wide range of fields. His most recent books are The Science and Politics of Global Climate Change (with Andrew Dessler) (3rd ed., Cambridge, 2019) and A Subtle Balance: Evidence, Expertise, and Democracy in Public Policy and Governance, 1970–2010 (McGill-Queen's University Press, 2015). His 2003 book, Protecting the Ozone Layer: Science and Strategy (Oxford), won the Sprout Award of the International Studies Association and is widely recognized as the authoritative account of the development of international cooperation to protect the ozone layer. Parson has led and served on multiple advisory committees for the National Academy of Sciences, the U.S. Global Change Research Program, and other national and international bodies. His work was influential in establishing the World Commission on Climate Overshoot, for which he serves as a senior advisor. He holds a B.Sc. from University College, University of Toronto (1975); an M.Sc. from the University of British Columbia (1981); and a Ph.D. from Harvard University (1992).
Abstract: Most work on AI impacts and governance has clustered at near and far endpoints – harms and injustices from present applications, and existential risks of AGI. Yet a wide range of potential high-stakes advances, impacts, and governance challenges lies between, capable of transformative impact yet still under human control. Development of mid-range impacts is likely to reflect tight coupling between technical and socio-political factors. Mid-range impacts may destabilize present political equilibria, legal doctrines, or moral principles, by transforming what people are able to know about and do to each other. They may present the highest-stakes margins to influence aggregate societal impacts of AI. Although beyond the possibility of confident projections, they may be well suited to exploratory methods combining scenario analysis, gaming, and exploratory modeling, perhaps organized in a robust/adaptive decision-making framework.

December 6, 2022
Blaise Agüera y Arcas
What Large Language Models Mean
VP and Fellow, Google Research
Blaise Agüera y Arcas is a VP and Fellow at Google Research, where he leads an organization working on both basic research and new products in AI. His focus is on augmentative, privacy-first, and collectively beneficial applications, including on-device ML for Android phones, wearables, and the Internet of Things. One of the team’s technical contributions is Federated Learning, an approach to training neural networks in a distributed setting that avoids sharing user data. Blaise also founded the Artists and Machine Intelligence program, and has been an active participant in cross-disciplinary dialogues about AI and ethics, fairness and bias, policy, and risk. Until 2014 he was a Distinguished Engineer at Microsoft. Outside the tech world, Blaise has worked on computational humanities projects including the digital reconstruction of Sergei Prokudin-Gorskii’s color photography at the Library of Congress, and the use of computer vision techniques to shed new light on Gutenberg’s printing technology. Blaise has given TED talks on Seadragon and Photosynth (2007, 2012), Bing Maps (2010), and machine creativity (2016), and gave a keynote at NeurIPS on social intelligence (2019). In 2008, he was awarded MIT’s TR35 prize. In 2018 and 2019 he taught the course “Intelligent Machinery, Identity, and Ethics” at the University of Washington, placing computing and AI in a broader historical and philosophical context.
Abstract: In the 2020s, machine learning has kicked into a new gear with the advent of large, unsupervised, language-enabled sequence models. These models seem finally to be poised on the cusp of delivering on the original promise of general AI. They also bring with them a slew of open questions in fields ranging from public policy to ethics, to economics, to neuroscience, to philosophy. This wide-ranging talk will begin with a quick review of how such models work, then delve into why they work, and how this might inform our understanding of the human mind and cognition in general. We’ll finish with a few interesting implications for fairness and bias, value alignment, intellectual property, and the future of work.

November 29, 2022
Brian Christian
The Alignment Problem: How Can Artificial Intelligence Learn Human Values?
Author
Brian Christian is an American nonfiction author, poet, programmer, and researcher, best known for a bestselling series of books about the human implications of computer science, including The Most Human Human, Algorithms to Live By, and The Alignment Problem.
Abstract: This session of the Future of AI seminar series is conversational in nature, based on Brian Christian’s highly informative book, The Alignment Problem. RAND researcher Ben Boudreaux and Ann Pendleton-Jullian will lead off with a series of framing questions, and then the conversation can go anywhere.
In best-selling author Brian Christian’s riveting account, we meet the alignment problem’s “first-responders,” and learn their ambitious plan to solve it before our hands are completely off the wheel. In a masterful blend of history and on-the-ground reporting, Christian traces the explosive growth in the field of machine learning and surveys its current, sprawling frontier. Readers encounter a discipline finding its legs amid exhilarating and sometimes terrifying progress. Whether they―and we―succeed or fail in solving the alignment problem will be a defining human story.
The Alignment Problem offers an unflinching reckoning with humanity’s biases and blind spots, our own unstated assumptions and often contradictory goals. A dazzlingly interdisciplinary work, it takes a hard look not only at our technology but at our culture―and finds a story by turns harrowing and hopeful.

November 18, 2022
Daniel Hoyer
History of the Near Future: Understanding Polycrises of the Past to Help Navigate the Future
Computational historian; social scientist
Daniel Hoyer is a computational historian and social scientist. He holds a PhD in Classics from New York University, where he studied economic and social development in the high Roman Empire. Since 2014 he has been a part of Seshat: Global History Databank, a multidisciplinary project examining long-run social dynamics by combining qualitative and empirical information about the past with advanced quantitative analysis and computer modeling. His current research seeks to understand the root causes of and limiting factors to societal development and resilience. In particular, he is interested in understanding societal responses to shifting ecological, social, and economic contexts that determine well-being outcomes in the past, as well as how this understanding may shed light on critical social pressures today.
Abstract: Most approaches to the polycrisis highlight the ‘extraordinary’ and ‘unprecedented’ nature of current threats. While we face certain unique challenges, the perils of a changing and unpredictable climate, emergent diseases, the destabilizing nature of rampant inequality, and the looming threats of polarization and warfare have been part of the human experience for millennia. There have been polycrises in the past just as there are today. Learning from history offers an invaluable and underutilized path for developing strategies to navigate our current threats.
We seek to write a history of the near future combining insights about our shared past with prospection about where we might be headed, and how we can better navigate the troubles facing us.
Here, I will discuss ongoing work with the Seshat: Global History Databank tracing the dynamics of societal crises from the deep past to contemporary states. I will present results showing that long-developing structural factors – the degree of material and social inequality, the intensity of elite conflict and polarization, and the state’s capacity to facilitate collective action – largely shape how crises unfold. I will conclude by discussing work that aims to translate these insights into a modeling framework to track the level of pressure facing contemporary societies and to identify key 'leverage points' that might relieve some of this tension, helping societies become more resilient to the myriad threats the polycrisis presents.

November 17, 2022
Joanna Radin
What are the Odds? Risk, Uncertainty, and the Tactics of the Technothriller
Associate Professor of History of Science and Medicine, Yale University
Joanna Radin is an award-winning historian of biomedical futures at Yale University, where she is an Associate Professor of History of Science and Medicine. Her research and teaching focus on how people have imagined science, medicine, and technology will change their lives. Her current work draws on the career of the Harvard-trained writer, director, producer, and video-game designer Michael Crichton to trace shifting ideas about science, expertise, and truth in American culture. She is the author of Life on Ice: A New History for Cold Blood (Chicago, 2017), an account of the emergence and ethics of the low-temperature biobank, and co-editor of Cryopolitics: Frozen Life in a Melting World (MIT, 2017), a critical investigation of practices of life extension across human and non-human worlds. Her writing has appeared in leading academic journals as well as The Washington Post, The Los Angeles Review of Books, The New Inquiry, and others. She co-edits the Science as Culture series at the University of Chicago Press.
Abstract: Where do fears and fantasies about the future come from? In this talk I consider how “the scenario,” rooted in Cold War practices of nuclear defense innovated at RAND, was adopted as a literary strategy for exploiting uncertainty by one of the 20th century’s most prolific interpreters of cultures of expertise. Michael Crichton (1942–2008), who first made a name for himself with the 1969 technothriller The Andromeda Strain, had already been publishing crime novels under the pen name John Lange. The first of these, Odds On (1966), makes clear its debts to the practice of scenario planning. Ironically, scenario planning, as developed in approaches to “alternative future worlds” by Herman Kahn and the game theoretics of Thomas Schelling, was itself adapted from Hollywood scriptwriting conventions. Crichton’s embrace of the scenario and of game theory in his earliest work, as well as an unusual fluency with computing, helped create a wildly successful formula through which he would influence how emerging science and technology were imagined for more than 50 years, from biotech to climate change. Examining how RAND-associated techniques for dealing with risk and uncertainty have traveled via mass culture, I argue, can help us respond to contemporary debates about trust in experts and in liberal institutions more generally.

October 11, 2022
Benjamin Bratton
Antikythera: Computation and Planetarity
Director, Antikythera program, Berggruen Institute; Professor of Philosophy of Technology and Speculative Design, University of California, San Diego
Benjamin Bratton is Director of the Antikythera program at the Berggruen Institute. He is Professor of Philosophy of Technology and Speculative Design at the University of California, San Diego. His research spans philosophy of technology, social and political theory, computational media and infrastructure, and speculative design. He is the author of several books, including The Stack: On Software and Sovereignty (MIT Press, 2016), The Revenge of the Real: Politics for a Post-Pandemic World (Verso, 2021), The Terraforming (Strelka Press, 2019), and Dispute Plan to Prevent Future Luxury Constitution (e-flux/Sternberg Press, 2015). He is a Visiting Professor at the European Graduate School, New York University-Shanghai, and SCI_Arc (the Southern California Institute of Architecture), and previously directed several think-tanks at the Strelka Institute. His current book project develops a new philosophy of the “artificial” in relation to climate change, planetary science, synthetic intelligence, and the prospects of viable planetary futures.
Abstract: Philosophy—and more generally the project of developing viable concepts about how the world works and thus how thinking about the world works—has always developed in conjunction with what technology reveals and does. At this moment, technology, and particularly planetary-scale computation, has outpaced our theory. The response, too often, is to force supposedly comfortable and settled ideas about ethics, scale, polity, and meaning onto a situation that not only calls for a different framework but is already generating that different framework. That is, instead of simply applying philosophy to the topic of computation, Antikythera will start from the other direction and produce theoretical and practical thought—the speculative—from the encounter with computation. This historical moment seems interminable but may be fleeting. It is defined by a paradoxical challenge. How can the ongoing emergence of planetary intelligence comprehend its own evolution, and the astronomical preciousness of sapience, and simultaneously recognize itself in the reflection of the violence from which it emerged and against which it struggles to survive?
Taking this seriously demands a different sort of speculative and practical philosophy and a corresponding sort of computation.

October 4, 2022
Stuart Russell
Human-Compatible AI
Professor, Computer Science, University of California at Berkeley; Smith-Zadeh Chair in Engineering; Director, Center for Human-Compatible AI; Director, Kavli Center for Ethics, Science, and the Public
Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI and the Kavli Center for Ethics, Science, and the Public. He is a recipient of the IJCAI Computers and Thought Award and Research Excellence Award and held the Chaire Blaise Pascal in Paris. In 2021 he received the OBE from Her Majesty Queen Elizabeth and gave the Reith Lectures. He is an Honorary Fellow of Wadham College, Oxford, an Andrew Carnegie Fellow, and a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in AI, used in 1500 universities in 135 countries. His research covers a wide range of topics in artificial intelligence, with a current emphasis on the long-term future of artificial intelligence and its relation to humanity. He has developed a new global seismic monitoring system for the nuclear-test-ban treaty and is currently working to ban lethal autonomous weapons.
Abstract: As AI advances in capabilities and moves into the real world, its potential to benefit humanity seems limitless. Yet we see serious problems including racial and gender bias, manipulation by social media, and an arms race in lethal autonomous weapons. Looking further ahead, Alan Turing predicted the eventual loss of human control over machines that exceed human capabilities. I will argue that Turing was right to express concern but wrong to think that doom is inevitable. Instead, we need to develop a new kind of AI that is provably beneficial to humans. This in turn brings into focus a number of issues that our society needs to resolve.
Recorded Events
2018

California Adaptation Forum
Sacramento, CA
August 27, 2018
On August 27, 2018, Robert Lempert, Michelle Miro, Miriam Marlier, Tom LaTourrette, and Neil Berg (UCLA – formerly with RAND) participated in the California Adaptation Forum held in Sacramento, California. The biennial California Adaptation Forum gathers the adaptation community to foster knowledge exchange, innovation, and mutual support to create resilient communities throughout the state. The forum offers a series of engaging plenaries, sessions, workshops, and tours that support the transition from adaptation awareness and planning to action.
Robert Lempert, Director of the RAND Frederick S. Pardee Center for Longer Range Global Policy and the Future Human Condition, helped administer the workshop, “Sea-Level Rise Adaptation: Understanding the Science, Regulatory Frameworks and Resources,” which provided an overview of the latest sea-level rise science, guidance, tools, and resources. The first part of the workshop focused on the recently updated 2018 State of California Sea-Level Rise Guidance from the Ocean Protection Council, and included a presentation on the adaptation pathways concept and a discussion with state coastal agencies on implementation of the Guidance. The second part of the workshop focused on tools and resources. It included: a presentation and demonstration of a new coastal plan alignment tool, which assists with aligning local coastal programs, general plans, local hazard mitigation plans, and climate adaptation plans; a demonstration of the Adaptation Clearinghouse website and its sea-level rise resources; and status updates on four sea-level rise online mapping tools: NOAA Sea Level Rise Viewer, COSMOS/Our Coast Our Future, the Cal-Adapt Sea Level Rise tool, and the ART Bay Area Flood Explorer.
Tom LaTourrette, Senior Physical Scientist, presented a demonstration of RAND’s new California Emergency Response Infrastructure Climate Vulnerability Tool (CERI-Climate). CERI-Climate is an interactive tool that combines a database of California critical emergency response infrastructure with projected flood and wildfire hazard footprints to examine the exposure and associated impacts to infrastructure statewide from these hazards. The database contains over 600 assets, such as emergency services and health care facilities. Outputs include maps and tables describing facility exposures, flood and fire risks, property damage estimates from flooding, and estimates of operational disruption. The tool allows users to examine a range of conditions spanning different emissions scenarios, climate models, hazard severity, and other factors in 20-year time intervals through the year 2100. The tool also provides the ability to examine results for particular facility types, specific counties, and for facilities located in disadvantaged communities.
Dialogue on Deep Decarbonization in the Face of Risk and Uncertainty
Hosted by RAND
San Francisco, CA
April 5, 2018
The Decarbonization Dialogues, an initiative co-led by the RAND Corporation, with the generous underwriting of the Metanoia Fund, brings together leaders—from philanthropy, business, research, advocacy, government, law and international organizations—to critically evaluate the world’s potential for urgently decarbonizing human activity, which is necessary to mitigate climate change.
Building on a series of smaller conversations held over the past year, RAND invited a select group of individuals representing a range of organizations and expertise to an off-the-record meeting to explore how to move forward the agenda for deep decarbonization—particularly to ensure that adequate technological means are available to advance decarbonization apace while also meeting other global social, political and economic goals—and bearing in mind the deep uncertainty inherent in all of the pathways available to global decarbonization.
The meeting offered the participants the opportunity to collectively assess the current state of innovation, investment and risk management in the pursuit of decarbonization; major challenges to and opportunities for urgent decarbonization; feasible pathways for urgent decarbonization in the context of prevailing and possible social, political, and economic constraints; communication challenges in raising the call for urgent decarbonization; fostering collective recognition of indicators for risk-aware societal investment in decarbonization efforts; and “robustness” and “risk governance” as useful frameworks for thinking about leadership in advancing decarbonization.
DMDU Society 2018 Annual Meeting
Southern California
November 13-15, 2018
The DMDU Society's 2018 meeting was held in Southern California, hosted jointly by the RAND Corporation and the local government of Culver City, California. The 2018 theme focused on DMDU in urban planning and technology sectors, with a particular emphasis on Latin American and Pacific Rim communities.
For more information, visit www.deepuncertainty.org
How "Serious" Games Inform Decisionmaking
Rob Lempert, Director, Frederick S. Pardee Center for Longer Range Global Policy and the Future Human Condition
Santa Monica, CA
February 7, 2018
Robert Lempert, director of RAND's Pardee Center, hosted a RAND Policy Circle Conversation, "How 'Serious' Games Inform Decisionmaking," on February 7 in Santa Monica.
More than 30 friends and sponsors of RAND participated in an interactive game entitled Decisions for the Decade. This game supports learning and dialogue about the challenges of decisionmaking under deep uncertainty. The gameplay helps people recognize that the future can prove deeply uncertain, and that managing risks may therefore require being prepared for surprises.