Understanding Scientists' Computational Modeling Decisions About Climate Risk Management Strategies Using Values-Informed Mental Models

Published in: Global Environmental Change Volume 42, January 2017, Pages 107-116. doi: 10.1016/j.gloenvcha.2016.12.007

Posted on RAND.org on March 07, 2018

by Lauren A. Mayer, Kathleen Loa, Bryan Cwik, Nancy Tuana, Klaus Keller, Chad Gonnerman, Andrew M. Parker, Robert J. Lempert


This article was published outside of RAND. The full text can be accessed via Global Environmental Change, Volume 42, at the DOI above.

When developing computational models to analyze the tradeoffs between climate risk management strategies (i.e., mitigation, adaptation, or geoengineering), scientists make explicit and implicit decisions that are influenced by their beliefs, values, and preferences. Model descriptions typically include only the explicit decisions and are silent on the value judgments that may explain them. Eliciting scientists' mental models, a systematic approach to determining how they think about climate risk management, can yield a clearer understanding of their modeling decisions. To identify and represent the influence of values, beliefs, and preferences on these decisions, we used an augmented mental models research approach, namely values-informed mental models (ViMM). We conducted and qualitatively analyzed interviews with eleven climate risk management scientists. Our results suggest that these scientists share a similar decision framework for thinking about modeling climate risk management tradeoffs, comprising eight specific decisions ranging from defining the model objectives to evaluating the model's results. The influence of values on these decisions varied across scientists and across the specific decisions. For instance, scientists invoked ethical values (e.g., concerns about human welfare) when defining objectives, but epistemic values (e.g., concerns about model consistency) were more influential when evaluating model results. ViMM can (i) enable insights that can inform the design of new computational models and (ii) make value judgments explicit and more inclusive of relevant values. This transparency can help model users better discern the relevance of model results to their own decision framing and concerns.

This report is part of the RAND Corporation External publication series. Many RAND studies are published in peer-reviewed scholarly journals, as chapters in commercial books, or as documents published by other organizations.

Our mission to help improve policy and decisionmaking through research and analysis is enabled through our core values of quality and objectivity and our unwavering commitment to the highest level of integrity and ethical behavior. To help ensure our research and analysis are rigorous, objective, and nonpartisan, we subject our research publications to a robust and exacting quality-assurance process; avoid both the appearance and reality of financial and other conflicts of interest through staff training, project screening, and a policy of mandatory disclosure; and pursue transparency in our research engagements through our commitment to the open publication of our research findings and recommendations, disclosure of the source of funding of published research, and policies to ensure intellectual independence. For more information, visit www.rand.org/about/principles.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.