Facial recognition algorithms from commercial vendors struggle badly with diversity. Their error rates are 10 to 100 times higher when identifying Asian and Black faces than when identifying white faces. These glaring gaps show why we need strong facial recognition rules in today’s tech-driven world. By 2020, eighteen U.S. cities had banned police from using this technology. The European Union’s draft AI Act also proposes major limits on facial recognition systems in public spaces.
Privacy rules for facial recognition look different depending on where you live. San Francisco led the way in 2019 as the first U.S. city to stop police from using facial recognition. Maine now has what many call the strongest rules statewide. These restrictions tackle vital ethical issues like identity theft risks. They also address built-in biases that unfairly target women and people with darker skin tones. This piece dives into how facial recognition rules keep evolving and what it means when our laws can’t keep up with advancing technology.
The Rapid Evolution of Facial Recognition Technology
The story of facial recognition started over 50 years ago. Woodrow W. Bledsoe experimented with “programming computers” to recognize human faces between 1964 and 1966. His early work revealed several challenges: head rotation, lighting, facial expressions, and aging made the technology rudimentary for decades.
A major breakthrough came in 1988, when Sirovich and Kirby created the Eigenface system, which could code a normalized facial image with fewer than one hundred values. Turk and Pentland built on this foundation in 1991, developing systems that could detect faces within images, a step that led to the first automatic facial recognition systems.
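The eigenface approach is essentially principal component analysis applied to pixel data. Here is a minimal sketch of the idea, using random arrays as stand-ins for real face images (the image size and component count are illustrative, chosen to echo the "fewer than one hundred values" result):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for normalized grayscale face images (64x64 = 4096 pixels).
n_faces, n_pixels = 200, 64 * 64
faces = rng.normal(size=(n_faces, n_pixels))

# Center the data; the principal components of the centered set are the "eigenfaces".
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)

# Keep fewer than one hundred components, as in Sirovich and Kirby's result.
k = 100
eigenfaces = vt[:k]

# Any face is now coded as k coefficients instead of 4096 pixel values...
code = (faces[0] - mean_face) @ eigenfaces.T

# ...and an approximation of the face can be rebuilt from that compact code.
reconstruction = mean_face + code @ eigenfaces
```

Compressing a face to a short vector is what made automatic comparison between faces computationally feasible at the time.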
The technology gained substantial government backing during the 1990s. DARPA and NIST launched the Face Recognition Technology (FERET) program to boost commercial development. The Viola-Jones object detection framework made real-time face detection in video footage possible by the 2000s.
The pace of advancement picked up through the 2000s and 2010s. The Face Recognition Grand Challenge of 2006 showed that newer algorithms were 10 times more accurate than those from 2002, and 100 times more accurate than their 1995 counterparts. NIST’s 2018 test revealed remarkable progress: only 0.2% of searches failed to match the correct image, a huge improvement from the 4% failure rate in 2014.
Those accuracy gains have not been shared equally, though. A 2018 MIT study exposed troubling disparities, with error rates of 34.7% for darker-skinned women compared to just 0.8% for lighter-skinned men. A 2019 NIST study confirmed that U.S.-developed algorithms misidentified Black, Asian, and Native American individuals more often than white people.
The technology has grown beyond basic 2D imaging. Modern systems now include 3D facial recognition that maps facial contours, thermal imaging for low-light conditions, and deep convolutional neural networks that enhance matching capabilities. Apple’s Face ID, for example, projects a grid of infrared dots to build a 3D depth map of the face.
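At a high level, a modern deep-network pipeline reduces each face to a fixed-length embedding vector and matches faces by vector similarity. A minimal sketch of that final matching step follows; the toy embeddings and the threshold value are invented for illustration, not taken from any vendor’s system:

```python
import numpy as np

def cosine_match(probe, gallery, threshold=0.6):
    """Return indices of gallery embeddings whose cosine similarity
    to the probe embedding meets the (illustrative) threshold."""
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe  # cosine similarity of each gallery face
    return np.flatnonzero(scores >= threshold)

# Toy 2-D "embeddings"; real systems use hundreds of dimensions.
gallery = np.array([[1.0, 0.0], [0.0, 1.0], [0.8, 0.6]])
probe = np.array([1.0, 0.0])
matches = cosine_match(probe, gallery)
```

The choice of threshold is exactly where accuracy trade-offs live: lowering it catches more true matches but also produces more false ones, which is why error rates matter so much in deployment.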
Law enforcement, border control, and commercial sectors have widely adopted this technology. The gap between technological progress and proper facial recognition regulations continues to grow.
Current Landscape of Facial Recognition Laws
The United States has a fragmented regulatory landscape for facial recognition technology. Only three states—Illinois, Texas, and Washington—have passed comprehensive laws that regulate how private entities collect and use biometric information. Illinois’ Biometric Information Privacy Act (BIPA), enacted in 2008, stands out as the strongest law because it offers broad coverage and allows private citizens to take legal action.
BIPA requires businesses to obtain written consent before collecting biometric data. The law also requires them to create retention schedules and protect data with a “reasonable standard of care”. Violators must pay $1,000 per negligent violation, which rises to $5,000 for intentional violations. Until 2024, these damages were calculated per scan, leading to huge penalties for employers. The new amendments now limit damages to one recovery per person for each type of violation.
San Francisco became the first major American city to ban government use of facial recognition in 2019, when its Board of Supervisors voted 8-to-1 for the measure. At least 13 other cities, including Boston and Portland, have since banned their governments from using the technology.
Different states have chosen various regulatory paths instead of outright bans. Massachusetts requires a court order for scans but not probable cause; Maine requires probable cause but no court verification. Both Colorado and Washington require the government to disclose its use of facial recognition to defendants before trial.
The federal government lacks meaningful regulation: no law specifically controls how federal agencies use facial recognition technology. Congress has considered the Facial Recognition and Biometric Technology Moratorium Act, but it has not passed.
This scattered regulatory approach creates major challenges for citizens and businesses alike. Most states’ laws treat biometric data as “sensitive information” that needs special protection. They require notification of collection, purpose limitations, and consent. Colorado takes extra steps by requiring system testing for accuracy and demographic bias because of reliability concerns.
Critical Gaps in Facial Recognition Ethical Issues
Facial recognition technology has advanced rapidly, but some troubling ethical issues remain unaddressed by current regulations. The biggest problem is algorithmic bias. These systems misclassify darker-skinned women 34.7% of the time, compared to just 0.8% for lighter-skinned men, and NIST testing showed error rates can run 10 to 100 times higher for certain demographic groups.
The root of these biases is unbalanced training data that mirrors existing social inequalities. Research shows that even datasets meant to be diverse still contain more than 80% lighter-skinned faces. Researchers call this effect “power shadows”: society’s biases showing up again in the data.
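Disparities like these only surface when accuracy is audited per demographic group rather than in aggregate. A minimal sketch of such a disaggregated audit follows; the labels, predictions, and group names are invented for illustration:

```python
import numpy as np

def per_group_error_rates(y_true, y_pred, groups):
    """Compute the error rate separately for each demographic group,
    the kind of disaggregated measurement that exposed the gaps above."""
    return {
        g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
        for g in np.unique(groups)
    }

# Invented example: group "a" suffers 1 error in 4, group "b" none.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0])
groups = np.array(["a"] * 4 + ["b"] * 4)
rates = per_group_error_rates(y_true, y_pred, groups)
```

A single aggregate accuracy number for this data would hide the fact that all of the errors fall on one group, which is precisely how overall benchmark scores masked demographic disparities for years.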
Privacy has become another major concern. A 2015 Pew Research study revealed that 63% of people value their right to move around without being identified. Even so, systems like Clearview AI have built massive databases by scraping billions of photos without asking for permission.
These issues go beyond theory and into real life. Some Black men have been wrongfully arrested based only on incorrect facial recognition matches. The technology often collects data without people knowing or agreeing to it. This creates an unfair power balance between people and the organizations using these systems.
Protecting data across countries brings its own challenges. Different legal systems handle data differently as it moves between nations. The current global system doesn’t work well and shows “deep fissures in trust”. As a result, politics increasingly gets in the way of sharing data internationally.
The “black box” nature of these algorithms makes things worse: when mistakes happen, it is hard to determine who is responsible. Companies won’t share their data “for copyright and trade secret protection purposes”.
These ethical gaps explain why many places are now strictly regulating or completely banning certain uses of facial recognition.
Conclusion
Facial recognition technology faces a turning point as its rapid advances leave legal protections behind. Our research reveals some telling patterns. The technology has come a long way from Bledsoe’s initial work to today’s advanced 3D systems. However, accuracy still varies greatly among demographic groups. Only three U.S. states have comprehensive biometric privacy laws, leaving current regulations scattered and incomplete.
These gaps demand quick action. The systems’ bias hits minorities hardest, with darker-skinned women facing error rates as high as 34.7%. Companies keep rolling out facial recognition systems without proper consent or transparency, while privacy protection falls short.
States like Illinois show how strong privacy laws can protect people’s rights and still allow tech advances. The lack of federal oversight creates confusion for businesses of all sizes, and local rules vary too much from place to place.
Creating comprehensive, uniform rules will shape facial recognition’s future. These rules must protect privacy, tackle systemic bias, and ensure transparent deployment. Until they arrive, the technology’s benefits will remain overshadowed by its threats to privacy and basic rights.