U.S. Customs and Border Protection (CBP) has quietly signed a significant new contract with Clearview AI, the controversial facial recognition company whose massive database of scraped internet photos has long drawn the ire of privacy advocates, civil liberties organizations, and even some lawmakers on Capitol Hill. The deal, which authorizes the use of Clearview’s technology for what the agency describes as “tactical targeting,” represents a dramatic expansion of the federal government’s embrace of AI-powered biometric surveillance — and raises urgent questions about the boundaries of law enforcement technology in an era of rapidly advancing artificial intelligence.
The contract, first reported by WIRED, was identified through federal procurement records and marks one of the most substantial agreements between a U.S. government agency and Clearview AI to date. While CBP has previously used facial recognition tools in various capacities — including at airports and border crossings — the new arrangement specifically invokes “tactical targeting,” a term that suggests the technology could be deployed in real-time operational scenarios rather than confined to retrospective investigative work.
A Database Built on Billions of Scraped Photos — Now in the Hands of Border Agents
Clearview AI’s core product is built on a database of more than 50 billion images scraped from publicly available sources across the internet, including social media platforms like Facebook, Instagram, LinkedIn, and countless other websites. The company’s algorithm allows users to upload a photograph of an individual and receive potential matches drawn from this vast repository, along with links to the original sources where those images appeared online. It is, in effect, a reverse search engine for human faces — one that privacy experts have described as fundamentally incompatible with democratic norms.
The company has faced legal challenges and regulatory scrutiny around the world. In 2022, the United Kingdom’s Information Commissioner’s Office fined Clearview AI more than £7.5 million for unlawfully collecting the facial images of UK residents. Australia, France, Italy, and Greece have all taken similar enforcement actions. In the United States, Clearview settled a landmark lawsuit in Illinois under the state’s Biometric Information Privacy Act, agreeing to restrictions on how it sells access to its database to private companies — though notably, government agencies were largely exempted from those restrictions.
“Tactical Targeting”: What the Contract Language Actually Means
The phrase “tactical targeting” in the CBP contract has drawn particular attention from surveillance researchers and civil liberties advocates. In military and law enforcement parlance, tactical targeting typically refers to the identification and tracking of specific individuals in operational settings — a far cry from the more passive use of facial recognition at immigration checkpoints or airport kiosks. According to WIRED’s reporting, the contract’s language suggests that CBP agents could use Clearview’s technology in the field to identify individuals encountered during enforcement operations, potentially including immigration raids, border interdiction missions, and counternarcotics activities.
This operational posture represents a significant escalation. Previous CBP use of facial recognition has been largely centered on Traveler Verification Service systems at ports of entry, which compare travelers’ faces against passport and visa photos held in government databases. Clearview AI’s system, by contrast, draws on a far broader and more invasive pool of data — images that individuals never consented to have collected for law enforcement purposes. The distinction is not merely technical; it is constitutional, touching on Fourth Amendment protections against unreasonable searches and the broader right to anonymity in public spaces.
The Expanding Federal Appetite for Facial Recognition
CBP is far from the only federal agency that has turned to Clearview AI. Records obtained through Freedom of Information Act requests and investigative reporting over the past several years have revealed that the Department of Homeland Security, Immigration and Customs Enforcement (ICE), the FBI, and numerous other agencies have tested or deployed the technology. A 2021 Government Accountability Office report found that 20 federal agencies reported using facial recognition technology, with 10 of them using Clearview AI specifically. Despite this widespread adoption, Congress has yet to pass comprehensive legislation governing the federal use of facial recognition.
The new CBP contract arrives at a moment when the Trump administration has been aggressively expanding immigration enforcement operations, deploying additional agents to the southern border and conducting high-profile raids in American cities. Civil liberties organizations, including the American Civil Liberties Union and the Electronic Frontier Foundation, have warned that facial recognition technology in the hands of immigration enforcement agencies poses acute risks to immigrant communities, communities of color, and anyone who might be misidentified by an algorithm with documented racial and gender bias.
Accuracy Concerns and the Human Cost of Algorithmic Error
Independent testing by the National Institute of Standards and Technology (NIST) has consistently found that many facial recognition algorithms exhibit higher error rates when attempting to identify individuals with darker skin tones, women, and elderly people. While Clearview AI has claimed that its technology performs well across demographic groups, the company has not submitted its algorithm to NIST’s most rigorous testing protocols in a fully transparent manner. The consequences of misidentification in a tactical law enforcement context could be severe — wrongful detention, deportation proceedings initiated against the wrong individual, or worse.
There have already been documented cases of facial recognition leading to wrongful arrests in the United States. Robert Williams, a Black man in Detroit, was arrested in 2020 after a facial recognition system incorrectly matched his driver’s license photo to surveillance footage of a shoplifting suspect. His case and several others like it have become rallying points for advocates pushing for moratoriums or outright bans on government use of the technology. The deployment of such tools in high-stakes border enforcement scenarios — where individuals may have limited access to legal counsel and where due process protections are already attenuated — amplifies these concerns considerably.
Clearview AI’s Path from Pariah to Preferred Vendor
Clearview AI’s trajectory from a secretive startup that sparked outrage when its existence was first revealed by The New York Times in January 2020 to a preferred vendor for one of the largest federal law enforcement agencies is itself a remarkable story of institutional normalization. CEO Hoan Ton-That has spent years cultivating relationships with law enforcement agencies, offering free trials of the technology and positioning the company as an indispensable tool in the fight against human trafficking, child exploitation, and terrorism. The company has claimed credit for assisting in thousands of investigations, including the identification of suspects involved in the January 6, 2021, attack on the U.S. Capitol.
These claims have been difficult to independently verify, and critics argue that they serve as a convenient justification for a surveillance infrastructure that, once built, is virtually impossible to constrain. “The problem with these tools is not just how they’re used today — it’s the architecture of surveillance they create for tomorrow,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, in previous public comments about Clearview AI’s government contracts. Once an agency has access to a database of billions of faces and the algorithmic capability to search it in seconds, the temptation to expand its use beyond the original stated purpose becomes nearly irresistible.
Congressional Inaction and the Regulatory Vacuum
Despite years of hearings, proposed bills, and bipartisan expressions of concern, Congress has failed to enact any meaningful federal regulation of facial recognition technology. The Facial Recognition and Biometric Technology Moratorium Act, introduced multiple times by Democratic lawmakers, has never advanced out of committee. Republican lawmakers have generally been more supportive of law enforcement’s use of the technology, particularly in the context of border security and immigration enforcement. The result is a regulatory vacuum in which agencies like CBP are free to sign contracts with companies like Clearview AI with minimal oversight or public accountability.
This absence of legislative guardrails means that the primary checks on the technology’s use come from internal agency policies — which can be changed at any time — and from the courts, which have been slow to address the constitutional implications of mass facial recognition surveillance. A handful of state and local jurisdictions, including San Francisco, Boston, and the state of Vermont, have enacted their own restrictions or bans on government use of facial recognition, but these measures do not apply to federal agencies operating within their borders.
What Comes Next: The Stakes for Privacy and Civil Liberties
The CBP-Clearview AI contract is likely to face legal challenges and intensified scrutiny from oversight bodies, but in the near term, it signals a clear trajectory for federal law enforcement. As AI capabilities continue to advance and as the political environment favors aggressive enforcement at the border and beyond, the integration of powerful biometric surveillance tools into everyday policing operations appears set to accelerate. For privacy advocates, the question is no longer whether the government will use facial recognition at scale — it is whether any meaningful limits will be placed on that use before the infrastructure becomes too deeply embedded to dismantle.
The stakes extend well beyond immigration policy. The normalization of mass facial recognition by federal agencies sets precedents that will shape the relationship between the state and the individual for decades to come. Whether that relationship will be governed by robust legal protections and democratic accountability, or by the unchecked discretion of agencies armed with ever-more-powerful algorithms, remains an open and urgent question — one that the CBP’s new contract with Clearview AI has made impossible to ignore.


WebProNews is an iEntry Publication