Expert Discusses Latest Biometric Testing

On May 29, 2001, Dr. James L. Wayman of San Jose State University visited RAND's Washington, D.C. office to lead a seminar on the testing of biometric technologies. He focused special attention on biometric product testing sponsored by the United Kingdom's Communications Electronics Security Group (CESG), the results of which were published in March 2001 (Biometric Product Testing Final Report, National Physical Laboratory, 19 Mar. 2001). CESG's testing covered six different technologies: facial recognition, fingerprint, hand geometry, iris scanning, hand vein, and voice recognition.

Dr. Wayman began his seminar by explaining the problems typically encountered in testing biometric products: standard protocols are difficult to implement consistently across biometric technologies and devices, which makes results hard to compare. He then described a "best practices" method that measures a number of definable variables (a brief computational sketch follows the list), including:

  • "failure to enroll"
  • "failure to acquire"
  • "false match"
  • "false non-match"
  • the trade-off between matching errors (false accept/reject rates and the Detection Error Trade-Off (DET) curve)
  • throughput rates of users (transaction time with local processing and local matching)
  • the sensitivity of the systems' performance to environmental conditions, and
  • the differences in performance across different classes of users

Each of these variables can be measured and compared regardless of the device or type of technology used.
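To make these definitions concrete, the following is a minimal sketch, in Python, of how such error rates are commonly computed from enrollment counts and match scores. All numbers below are hypothetical, invented for illustration; they are not drawn from the CESG report.

    # Illustrative sketch only: hypothetical counts and scores, not CESG data.

    # Enrollment and acquisition outcomes (hypothetical counts).
    attempted_enrollments = 200
    successful_enrollments = 194
    attempted_acquisitions = 1000
    successful_acquisitions = 985

    failure_to_enroll = 1 - successful_enrollments / attempted_enrollments
    failure_to_acquire = 1 - successful_acquisitions / attempted_acquisitions

    # Hypothetical similarity scores: higher means a more confident match.
    genuine_scores = [0.91, 0.88, 0.75, 0.95, 0.62, 0.89, 0.70, 0.84]   # same person
    impostor_scores = [0.30, 0.45, 0.15, 0.52, 0.38, 0.27, 0.61, 0.20]  # different people

    def match_error_rates(threshold):
        """Return (false match rate, false non-match rate) at a given threshold."""
        false_matches = sum(s >= threshold for s in impostor_scores)
        false_non_matches = sum(s < threshold for s in genuine_scores)
        fmr = false_matches / len(impostor_scores)
        fnmr = false_non_matches / len(genuine_scores)
        return fmr, fnmr

    # Sweeping the decision threshold traces out the trade-off between the
    # two matching errors: raising it lowers the false match rate but
    # raises the false non-match rate.
    for threshold in (0.4, 0.5, 0.6, 0.7):
        fmr, fnmr = match_error_rates(threshold)
        print(f"threshold={threshold:.1f}  FMR={fmr:.2f}  FNMR={fnmr:.2f}")

    print(f"failure to enroll:  {failure_to_enroll:.1%}")
    print(f"failure to acquire: {failure_to_acquire:.1%}")

Plotting the false non-match rate against the false match rate across all thresholds yields the Detection Error Trade-Off (DET) curve referred to above.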

CESG's testing controlled for other factors, such as the testing environment, by providing the best possible testing conditions.

Dr. Wayman stressed that even with the best practices method, comparisons among biometric products remain difficult, especially outside the lab. Within fingerprint technology, for example, three products were tested, and performance differed significantly depending on which algorithm was used. Likewise, the speaker recognition testing used a single handset and a dedicated line with no background noise, making comparisons with more realistic studies, in which different users use different handsets or wireless technology, very difficult. Additionally, the CESG testing did not evaluate other important factors such as security, vulnerability, or user acceptance.

Seminar participants included representatives from RAND, MITRE, the Defense Logistics Agency, and the Naval Research Laboratory (NRL).