An overview of testimony by Jason Matheny presented to the U.S. Senate Committee on Armed Services, Subcommittee on Cybersecurity, on April 19, 2023.
Jason Matheny, President and CEO, RAND Corporation
Integrating AI into our national security plans poses special challenges. What keeps me up at night is AI being applied to the development of new cyberweapons and bioweapons for which we don't have reliable defenses. And I worry that right now the most likely scenario is one in which those models were either stolen from the United States or were built with U.S. technology, U.S. chips, and U.S. chipmaking equipment.
I think the strongest argument for a pause is that our own labs need to get their cybersecurity together to reduce the likelihood that the models they're building will be stolen by our adversaries. Given the massive private-sector investment in AI right now, I think it makes sense for the federal government to concentrate on the places where it has a unique role: where there's a market failure or an authority that only the government can exercise.
One of those, and I think among the most important, is thinking about the talent needed to support AI development in the United States. If we want to win a competition against a country that is four and a half times our size, is producing more Ph.D.s than we are, and is producing twice as many master's students in STEM fields, we have to attract the world's talent to join our team. A second key area is cybersecurity requirements for the leading AI labs, so that they're less likely to have their models stolen. A third is export controls on chips and chipmaking equipment, so that our competitors don't have access to leading-edge compute. A fourth is federal research focused on the places where the commercial sector is going to underinvest, but also on how we break other countries' models.
I think these models right now are very brittle; we need to be thinking about ways that we can slow down progress elsewhere through things like adversarial attacks, data poisoning, and model inversion. Let's use the tricks that we're seeing used against us and make sure that we understand the state of the art.