
Research Brief
Transportation Security Administration (TSA) face recognition technology implemented at the Los Angeles International Airport. Photo by Transportation Security Administration


Key Findings

  • Many survey respondents indicated that they had no strong opinions about government use of face recognition technology (FRT).
  • Respondents agreed that the government's use of FRT has both benefits and risks, but they were more likely to acknowledge risks than benefits.
  • Respondents rated such factors as security, accuracy, and privacy as more important than speed or convenience.
  • Respondents believed that the government should be required to meet certain conditions before using FRT.
  • Public support depends more on the application than on the technology.
  • Fewer than 25 percent of respondents reported trusting the federal government's use of FRT.

How Is Artificial Intelligence Used for Homeland Security?

The U.S. Department of Homeland Security (DHS) uses several artificial intelligence (AI) technologies, such as face recognition and risk-assessment technologies (algorithms that predict the likelihood that an event will occur) in border and airport security, criminal investigations, immigration enforcement, and other applications. DHS actively seeks to broaden the use of these and other AI technologies, such as license plate readers and mobile phone location tracking, across its homeland security missions.

However, DHS faces constraints in using AI because of how key stakeholders, including Congress, technology companies, and the broader public, perceive these uses of AI. These stakeholders have raised concerns about how DHS use of these technologies affects privacy, civil liberties, and equity. Stakeholder perceptions of the government's use of advanced technologies, such as AI, are vital for several reasons: agencies such as DHS must establish and maintain trust in the legitimacy and fairness of their efforts, secure funding and legislative support from Congress, and foster collaboration with technology developers and other operational partners.

Having already experienced widely publicized challenges to its efforts to implement automated body scanning technology at airports (along with other setbacks due to public outcry), DHS is deeply concerned with gauging and understanding public perceptions of its uses of technologies. In that spirit, DHS enlisted the Homeland Security Operational Analysis Center to assess the public's current perceptions of potential uses of AI and to provide recommendations for addressing any public concerns. This study was the first step in examining public perceptions of a variety of technologies.

The researchers surveyed a nationally representative and diverse sample of U.S. adults to assess their perceptions of four types of technologies that rely on AI:

  1. face recognition technology (FRT)
  2. license plate–reader technology
  3. risk-assessment technology
  4. mobile phone location data.

Survey participants (2,841 in total) were asked to share their opinions about a variety of potential government uses of each of the four technologies. The survey was fielded using the RAND American Life Panel (ALP), a nationally representative panel of the American public. The full results of the survey were published in a report; much of the analysis, and the findings recounted here, focuses on FRT because it was of greatest interest to DHS, possibly because of the controversy and publicity surrounding its use.

The Survey

For each of the technologies, survey participants were asked about their perceptions of its benefits and risks and of its use by the federal government, including DHS. They were also presented with a scenario involving a use of the technology and asked to describe their comfort level with that application.

What the Survey Found About Perceptions of Face Recognition Technology

Many respondents said that they had no strong opinions about government use of FRT. The survey found that a large proportion of respondents might not have formed opinions or were neutral about government use of FRT. Some 40 percent of answers to the question about whether FRT benefits outweighed the risks were neutral or ambiguous (such as "Neither Agree nor Disagree"). The likelihood of providing a neutral response was not strongly associated with any personal characteristic, such as age, sex, race or ethnicity, or socioeconomic status.

Respondents agreed that the government's use of FRT has both benefits and risks, but they were more likely to acknowledge risks than benefits. More respondents agreed that government use of this technology carries risks than agreed that it offers benefits, and only half asserted that the benefits outweigh the risks (see Figure 1).

Figure 1. The Public Perception of Benefits and Risks of Government Use of Face Recognition Technology

The public agrees that the government's use of FRT has both benefits and risks but is more likely to acknowledge risks

Statements: "There are risks of the U.S. government's use of face recognition technology" and "There are benefits of the U.S. government's use of face recognition technology"

Response                        Risks (%)   Benefits (%)
Agree or strongly agree         75          66
Neither agree nor disagree      16          15
Disagree or strongly disagree   5           10
Don't know or it depends        4           10

SOURCE: Features ALP data.

NOTE: Missing responses are excluded. n = 2,841 respondents.

When asked what factors the government needs to consider in weighing whether to use FRT, respondents indicated that a broad variety of factors are important. They rated security, accuracy, and privacy as more important than speed or convenience (see Figure 2).

Figure 2. What Factors Are Important for Government Uses of Face Recognition Technology?

Which of the following are important for the U.S. government’s use of facial recognition technology?

Factor               Important (%)
Security             93.40
Accuracy             92.87
Privacy              88.83
Transparency         86.93
Fairness             86.22
Oversight            82.79
Ability to consent   78.49
Speed                75.21
Convenience          65.60

SOURCE: Features ALP data.

Respondents believed that the government should be required to meet certain conditions before using FRT. When asked what requirements the government should have to fulfill before using FRT, the vast majority rated several as important, for example, special training, transparency about the reasons for use, and secure storage of the images (see Figure 3). Fewer than 10 percent rated any of the possible requirements as unimportant.

Figure 3. Important Requirements for Government Uses of Face Recognition Technology

Which of the following are important for the U.S. government's use of facial recognition technology?

Requirement                                                              Very or somewhat important (%)   Not important (%)
Require special training for using facial recognition technology        88.86                            1.51
Provide information about how facial recognition technology will be used 88.06                           2.15
Securely store images                                                    86.39                            2.88
Obtain a court order for certain uses                                    82.11                            3.83
Destroy images when no longer needed                                     79.29                            3.35
Obtain consent from people if images of their faces will be shared       77.62                            5.42
Regular audits                                                           71.69                            7.28
Obtain consent from people before images of their faces are collected    65.92                            8.76
Allow people to opt out                                                  59.60                            12.28

SOURCE: Features ALP data.

Public support depends more on the application than on the technology. The survey asked respondents about their support for government use of each of the four technology types for a variety of applications (see Figure 4). The responses were similar for the different technology types, suggesting that the public's support depends more on the proposed use than on the type of technology. Support for identifying crime victims or suspects greatly outweighed support for some other uses, such as identifying people in public places, predicting whether someone was likely to commit a crime, or trying to assess whether a person was telling the truth.

Figure 4. Supported Government Uses of Face Recognition Technology

The U.S. government might use facial recognition technology in many ways to safeguard the American people. How much do you support the following uses?

Use case   Supported or strongly supported (%)   Opposed or strongly opposed (%)
Identify an adult victim of a crime 81.72 4.43
Identify a child victim of a crime 81.62 4.74
Identify a suspect of a crime 78.89 5.88
Identify visitors to government buildings such as courthouses 64.64 13.98
Screen large public events to identify suspected terrorists 61.28 14.87
Identify students, professors, or visitors at colleges 56.13 22.57
Identify students, teachers, or visitors at schools 55.66 22.80
Identify airport travelers 53.28 16.94
Identify someone suspected of violating immigration laws such as an expired visa 41.66 30.17
Identify people at voting locations 36.62 42.40
Identify people in protests and demonstrations 32.68 39.03
Identify people in public spaces such as parks or stadiums 32.67 37.65
Determine whether someone is telling the truth 28.74 36.69
Determine whether someone appears likely to commit a crime 23.10 47.01

SOURCE: Features ALP data.

Finally, the researchers wanted to gauge whether public support for DHS use of FRT differed from support for use of FRT by the broader federal government. Although only 29 percent of respondents expressed trust in DHS use of FRT, even fewer (fewer than 25 percent) reported trusting the broader federal government's use of FRT. Trust appeared lower among men, younger adults, and users of niche social media platforms, but these differences were not strong.

What Steps Can the Department of Homeland Security Take?

This nationally representative survey, along with prior RAND research on public perceptions and misperceptions about technologies, suggests steps that DHS can take to address the public's generally poor or neutral perceptions of its use of FRT, including routinely assessing those perceptions:

  1. Proactively engage communities that are uncertain of or neutral on government use of AI. The large proportion of survey participants indicating ambiguous or neutral reactions about particular technologies or their applications suggests an important role for DHS in proactively shaping public perceptions. And because these technologies are developing rapidly, DHS should consider proactively engaging with these communities — before contemplating deployment.
  2. Focus on applications and safeguards rather than on the type of AI technology. The results of the survey clearly showed that respondents' reported concerns centered on specific applications and the safeguards that can be applied, not on the technologies themselves. Respondents were likelier to indicate opposition than support for some possible uses, such as determining whether an individual appears likely to commit a crime or is telling the truth or identifying people in protests, demonstrations, or simply in public spaces, suggesting that DHS should take greater care when considering such uses.
  3. Consider which data sources underpin AI. Respondents indicated greater support for government-owned data sources, such as criminal records, than other sources of data, especially social media and commercially available data. This finding suggests that DHS should take the opportunity to exploit in-house data or data from other government sources.
  4. Integrate public perception into technology development and acquisition life cycles. DHS should build societal considerations, including privacy concerns about how widely a technology is deployed, into all stages of technology development and deployment rather than waiting to conduct impact assessments after the fact.

Developing approaches to routinely gauge public perceptions of AI applications, such as FRT for criminal investigations or airport security, could benefit DHS's ability to implement those applications while maintaining the public's support and trust.

This report is part of the RAND research brief series. RAND research briefs present policy-oriented summaries of individual published, peer-reviewed documents or of a body of published work.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.