Americans Need to Know How the Government Will Use AI

Commentary

Mar 22, 2024

Passengers take biometric facial recognition photos at Houston International Airport, February 12, 2018

Photo by Donna Burton/U.S. Customs and Border Protection

This commentary originally appeared in the San Francisco Chronicle on March 22, 2024.

Americans have been concerned about tech-enabled government surveillance for as long as they have known about it. Now, in the age of artificial intelligence, and with the Department of Homeland Security's announcement this week that it is embracing the technology, that concern isn't going away anytime soon.

But federal agencies could mitigate some of that fear. How? By engaging the public.

Since at least the 1928 Supreme Court decision allowing law enforcement to use wiretapping, government use of technology has provoked public debate. Two years ago, public outcry forced the IRS to shelve newly announced plans to use facial recognition to identify taxpayers. More recently, the Department of Homeland Security's CBP One app, which uses facial recognition to identify asylum applicants, was found, like many such systems, to be less accurate at recognizing asylum seekers with darker skin. This, too, has understandably led to public frustration.

Homeland Security has a huge mission set, including protecting borders, election infrastructure, and cyberspace. But unlike many other federal agencies, it also has many public-facing missions; think of Transportation Security Administration agents at airport checkpoints. This gives the department a unique opportunity to work with the public to ensure that tech is used responsibly.

The department understands this, which is why it asked us, researchers who study how technology intersects with public life, to survey Americans for insights on using technology in ways the public would be more likely to support. When we surveyed a representative sample of 2,800 adults in 2021, the biggest takeaway was that Americans cared less about which technology was being used than about how it was being used.

For instance, we asked people whether they would support the government using facial recognition for such purposes as investigating crimes, tracking immigrants, or identifying people in public places like stadiums or polling stations. Respondents supported some uses, such as identifying victims and potential suspects of a crime, far more than others. People were much more suspicious of the most sweeping uses of facial recognition, such as surveilling protests or monitoring polling stations. And this pattern held across different AI technologies.

Another important factor was the safeguards surrounding a given technology's use. In our survey, these safeguards included providing alternatives to engaging with the technology, administering regular audits to ensure that the technology was accurate and did not have a disparate impact across demographic groups, and providing notification and transparency about how the technology is used. Rather than a one-size-fits-all approach, we found that Americans want safeguards sensitive to the context in which the technology is applied, such as whether it will be used along the open border or in a dense urban area.

To its credit, the department has implemented some safeguards along these lines, but they are not always uniformly administered. For example, although facial recognition technology is optional for travelers going through airport security, some individuals, including a U.S. senator, report not being made aware that it is not a requirement. Such inconsistency breeds confusion and, likely, mistrust.

Nevertheless, there is an opportunity for constructive engagement. Many respondents to our survey said that they were either neutral or ambivalent about government use of technology, meaning that they hadn't yet decided whether the benefits of a given technology outweighed the risks. Far from holding fully formed, polarized views on the subject, many Americans are open to being persuaded one way or another.

This might allow government agencies to work with this large group of “swing” Americans to build more trust in how the government uses new tech on all of us. And, counterintuitively, the government's reputation for moving slowly and deliberately is, in this case, perhaps an asset.

Slowness is a trait often ascribed to the government. For instance, to field our survey, we had to undergo a 15-month approval process. And that slowness had consequences: By the time we got approval, large language models had burst onto the scene, but because they weren't factored into our survey, we couldn't ask people about them.

But new technologies should be deployed carefully, with a clear understanding of their benefits and risks, especially from the perspective of the communities most deeply affected. Viewed this way, a deliberately paced process can be a feature, not a bug; slowness can be an asset, not a hindrance.

If agencies like the Department of Homeland Security take the time to understand what makes the public more comfortable with how technology is used, the public might gain confidence. Even better: agencies using technology to surveil Americans could pull back the curtain and explain how and why they do it, with the same care and deliberation they bring to deployment. As our research showed, people might not be very interested in how the tech works, but they want to know how it will be used, on them and on society.


Douglas Yeung is a senior behavioral scientist at RAND and a member of the Pardee RAND Graduate School faculty. Benjamin Boudreaux is a policy researcher at RAND.