In December 2025, a new reality took effect at U.S. borders: every non-citizen entering or leaving the country may have their face photographed and processed through biometric systems. No age exemptions. No opt-outs for frequent travellers. The Department of Homeland Security frames this as routine security, a way to “biometrically confirm departure.” But once captured, these facial scans enter databases that can be queried, cross-referenced, and potentially shared with law enforcement agencies indefinitely.
This is not an isolated policy shift. Across the world, facial recognition technology has evolved from a sci-fi concept into the infrastructure of daily life. The global facial recognition market is projected to reach $24.28 billion by 2032, driven by government contracts, commercial applications, and consumer demand for frictionless security. In the UK, police forces now deploy mobile facial recognition vans that scan crowds in real time, matching faces against watch lists in seconds. In China, the technology is already woven into social credit systems and mass surveillance networks.
The question is no longer whether facial recognition will become ubiquitous; it already is. The question is whether we will retain any meaningful control over our most intimate identifier: our face.
How facial recognition works
Facial recognition systems analyse geometric patterns in a person’s face: the distance between eyes, the shape of cheekbones, the contour of the jaw. These measurements are converted into a mathematical template, often called a faceprint, which can be stored and compared against millions of other templates in milliseconds.
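In practice, a faceprint is just a fixed-length numeric vector, and comparison reduces to simple linear algebra. The Python sketch below shows only the comparison step; the neural encoder that produces the templates is omitted, and the function names, the 128-dimension figure, and the cosine-similarity metric are illustrative assumptions rather than any vendor’s actual pipeline.

```python
import numpy as np

def normalise(template: np.ndarray) -> np.ndarray:
    """Scale a faceprint to unit length so dot products equal cosine similarity."""
    return template / np.linalg.norm(template)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two faceprints: values near 1.0 suggest the same face."""
    return float(np.dot(normalise(a), normalise(b)))

def batch_similarity(probe: np.ndarray, gallery: np.ndarray) -> np.ndarray:
    """Compare one probe against an (N, 128) matrix of stored templates in one step."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return g @ normalise(probe)  # one score per stored template
```

Because the batch comparison is a single matrix-vector product, matching one probe against millions of stored templates is fast and cheap, which is exactly what makes large-scale search practical.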
The technology distinguishes between two core functions. One-to-one verification confirms a claimed identity, as when you unlock your phone with Face ID. One-to-many identification searches a database to determine who you are, as when law enforcement scans a crowd for wanted individuals. The latter carries far greater privacy risks because it operates without the subject’s knowledge or consent.
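The structural difference between the two modes is easy to show in code. This is a minimal sketch assuming the unit-normalised templates from above; `THRESHOLD`, `verify`, and `identify` are hypothetical names, and real deployments tune their thresholds carefully.

```python
import numpy as np

THRESHOLD = 0.6  # illustrative cut-off, not a real system's operating point

def verify(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """One-to-one: does this face match the single claimed identity?"""
    return float(np.dot(probe, enrolled)) >= THRESHOLD  # templates unit-normalised

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray]) -> str | None:
    """One-to-many: search every enrolled template for the best match."""
    best_name, best_score = None, THRESHOLD
    for name, template in gallery.items():
        score = float(np.dot(probe, template))
        if score >= best_score:
            best_name, best_score = name, score
    return best_name  # None when nobody in the gallery clears the threshold
```

Note that `identify` compares the probe against every enrolled template, so the odds of a false match grow with the size of the gallery, one reason crowd scanning produces the wrongful identifications discussed below.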
But technical capability is only part of the problem. The real danger lies in what happens after identification. Once your face is matched to a database entry, it becomes a key that unlocks vast networks of personal information: your name, address, employment history, social media activity, travel patterns, even your associates and family members. A face ceases to be just a physical feature. It becomes a persistent, searchable ID that follows you everywhere.
The scope of surveillance
Facial recognition is now embedded in systems most people encounter regularly, often without realising it.
Government surveillance has expanded rapidly. Beyond the DHS mandate at U.S. borders, police departments across the United States and Europe use facial recognition to identify suspects, monitor protests, and track individuals in public spaces. South Wales Police conducted trials that scanned over 500,000 faces at public events. In many cases, the legal framework governing these programmes is minimal or non-existent. There is often no requirement for warrants, no transparency about how long data is retained, and limited oversight on how the technology is used.
Commercial exploitation has turned faces into commodities. TikTok has experimented with AI avatars generated from real actors who sold the rights to their likeness for as little as $500. These digital doubles can be deployed in ads, speak new languages, and endorse products, all without ongoing consent from the original person. Meanwhile, retailers use facial recognition to track shoppers’ movements through stores, analysing dwell time and emotional responses to optimise product placement and pricing. Airlines and hotels deploy the technology for “seamless” check-ins, building profiles of customer behaviour over time.
Consumer-facing tools have democratised surveillance in troubling ways. Services like PimEyes and FaceCheck ID allow anyone to upload a photo and discover where that face appears online. While marketed as tools for journalists or personal safety, these platforms are easily weaponised for stalking, harassment, and doxxing. Researchers have demonstrated how off-the-shelf smart glasses can be modified with facial recognition software to instantly identify strangers on the street, pulling up their names, addresses, and social profiles in real time.
The convenience of these systems masks a troubling reality: every face captured, every database expanded, builds a surveillance infrastructure that fundamentally alters the balance of power between individuals and institutions.
The risks are real and underestimated
Facial recognition is not neutral technology. It carries embedded biases and systemic vulnerabilities that amplify existing inequalities.
Accuracy problems disproportionately harm marginalised communities. Multiple studies have shown that facial recognition systems misidentify women and people of colour at significantly higher rates than white men. The consequences are not abstract: individuals have been wrongfully arrested because algorithms matched their faces to grainy surveillance footage. In a system where false positives can result in detention, interrogation, or worse, algorithmic bias is not a technical glitch; it is a civil rights crisis.
Data breaches expose millions. Facial recognition databases are high-value targets for hackers. In 2019, a biometric database used by banks and police in multiple countries was found exposed online, leaking fingerprints and facial recognition data for over a million people. Unlike a password, you cannot change your face. Once compromised, biometric data remains a permanent vulnerability.
Mission creep is inevitable. Technologies deployed for one purpose rarely stay confined to their original use case. Systems installed to catch terrorists are used to monitor protesters. Tools designed to find missing children are repurposed to track immigrants. The infrastructure built today will define the surveillance capabilities available to future governments, including those with authoritarian ambitions.
Regulation is fragmented and insufficient
Some jurisdictions have begun to respond, but the patchwork of regulations is nowhere near adequate.
The European Union’s AI Act classifies certain uses of facial recognition as high-risk, imposing transparency and accountability requirements. Several U.S. cities, including San Francisco and Boston, have banned government use of facial recognition technology. Denmark has proposed legislation granting individuals ownership over their likeness in AI-generated content and deepfakes, though the bill has yet to pass.
But these are exceptions. In the United States, federal regulation remains virtually non-existent. Most laws focus narrowly on deepfakes in elections or explicit sexual content, leaving everyday surveillance entirely unaddressed. Some proposals in Congress would pre-empt state-level AI regulations for years, effectively freezing progress at a time when rapid action is needed. Meanwhile, private companies face almost no restrictions on collecting, analysing, or selling facial data.
The result is a regulatory vacuum at precisely the moment when guardrails matter most.
What needs to happen
Protecting privacy in the age of facial recognition requires both individual action and systemic reform.
For individuals, the options are limited but meaningful. Opt out of facial recognition search engines like PimEyes and FaceCheck ID by requesting image removal. Scrub personal information from data broker sites such as WhitePages, Spokeo, and Intelius. Lock down social media privacy settings to restrict public access to photos. Use tools like Signal for encrypted communication and consider covering or disabling cameras on devices when not in use. These steps reduce exposure but cannot eliminate it entirely. The burden of protection cannot rest solely on individuals.
For policymakers, the path forward is clear. Biometric data must be classified as highly sensitive, requiring explicit, informed consent before collection. Facial recognition use by law enforcement should require warrants and be subject to independent oversight. Retention periods for biometric data must be strictly limited, with mandatory deletion after defined timeframes. Individuals should have the right to know when their face has been scanned, by whom, and for what purpose, with meaningful legal recourse when rights are violated. Algorithmic audits should be mandatory, with public reporting on accuracy rates across demographic groups.
For technology companies, transparency and accountability must become standard. Businesses deploying facial recognition should be required to disclose what data they collect, how long they retain it, and whether they share it with third parties. Independent audits should verify that systems meet minimum accuracy thresholds and do not encode discriminatory biases.
The choice ahead
Facial recognition represents an inflection point in the relationship between individuals and power. Once normalised, it becomes nearly impossible to reverse. The infrastructure being built today, from databases and algorithms to surveillance networks, will shape the freedoms available to future generations.
The technology itself is not inherently evil. Used with strict safeguards, transparency, and consent, facial recognition could serve legitimate purposes: reuniting missing persons with families, securing borders without invasive searches, preventing fraud. But those safeguards do not currently exist. Instead, we are sleepwalking into a world where every face is a trackable data point, where anonymity in public space becomes a relic of the past, and where the power to surveil is concentrated in institutions with little accountability.
Your face is not just another password. It is your identity. And once you lose control over it, getting it back may be impossible.