U.S. Customs and Border Protection has quietly signed a new contract with Clearview AI, the controversial facial recognition company, to deploy its technology for what the agency describes as “tactical targeting” — a development that raises fresh questions about the expanding use of biometric surveillance tools across federal law enforcement and immigration enforcement operations.
The deal, first reported by WIRED, marks a significant deepening of the relationship between one of the nation’s largest law enforcement agencies and a company whose very business model — scraping billions of photos from social media platforms and the open web to build a massive facial recognition database — has drawn lawsuits, regulatory scrutiny, and international bans. The contract positions Clearview AI’s tools squarely within the operational toolkit of agents working at and beyond the nation’s borders.
A Contract Built on Controversy: What the CBP-Clearview Deal Entails
According to the WIRED report, the contract specifically references the use of Clearview AI’s facial recognition capabilities for “tactical targeting,” a term that encompasses identifying individuals of interest in real-time or near-real-time operational settings. While the precise dollar value and full scope of the agreement have not been fully disclosed, procurement records indicate that CBP has moved beyond pilot programs and exploratory licensing agreements into a more formalized, operational deployment of the technology.
This is not CBP’s first foray into Clearview AI’s services. The agency, along with Immigration and Customs Enforcement (ICE) and other Department of Homeland Security components, has previously tested or licensed Clearview AI’s product. But the new contract language — particularly the invocation of “tactical targeting” — suggests a more aggressive and field-ready application than earlier arrangements, which were often framed as investigative or analytical tools used after the fact.
Clearview AI’s Vast Database: The Engine Behind the Technology
Clearview AI’s technology is powered by a database of more than 50 billion images scraped from publicly accessible websites, social media platforms, and other online sources. The company’s algorithm allows users to upload a photograph of an individual and receive potential matches drawn from this enormous repository, along with links to where those images were originally found online. The capability is extraordinarily powerful — and, critics argue, extraordinarily invasive.
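In general terms, systems of this kind work by converting each photo into a numeric “embedding” vector and then searching stored vectors for the closest matches. The sketch below illustrates that generic matching step only; it is not Clearview AI’s actual code or API, and the function names, vectors, URLs, and similarity threshold are all hypothetical.

```python
# Hypothetical sketch of embedding-based face matching: a query photo's
# embedding is compared against stored embeddings by cosine similarity,
# and sufficiently similar entries are returned with their source URLs.
# Illustrative only -- not Clearview AI's implementation.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def find_matches(query_embedding, database, threshold=0.9):
    """Return (source_url, score) pairs whose stored embedding is
    similar enough to the query, sorted best match first."""
    scored = [
        (url, cosine_similarity(query_embedding, emb))
        for url, emb in database.items()
    ]
    hits = [(url, s) for url, s in scored if s >= threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

# Toy "database" mapping a source URL to a tiny, made-up embedding.
db = {
    "https://example.com/photo1": [0.9, 0.1, 0.4],
    "https://example.com/photo2": [-0.2, 0.8, 0.5],
}
matches = find_matches([0.88, 0.12, 0.42], db)
print(matches)
```

Real systems use embeddings with hundreds of dimensions produced by a trained neural network, and approximate nearest-neighbor indexes rather than a linear scan, but the core idea — similarity scoring against a pre-built gallery — is the same.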
The company has faced legal challenges across multiple jurisdictions. In 2022, the UK’s Information Commissioner’s Office fined Clearview AI more than £7.5 million for collecting facial images of UK residents without consent. Australia’s privacy commissioner reached a similar finding. In the United States, Clearview AI settled a landmark lawsuit in Illinois under the state’s Biometric Information Privacy Act, agreeing to restrict sales of its database to private companies and individuals, though notably not to government agencies. That carve-out has allowed Clearview to continue pursuing — and winning — federal contracts.
The Federal Push for Biometric Tools Under the Current Administration
The new CBP contract arrives at a moment when the federal government, under the Trump administration, has dramatically escalated immigration enforcement operations. CBP and ICE have been granted expanded authorities and resources to identify, detain, and deport individuals, and advanced surveillance technologies have become central to that mission. Facial recognition, in particular, has been positioned as a force multiplier — a way to rapidly identify individuals with outstanding warrants, prior deportation orders, or suspected ties to criminal organizations.
Administration officials have publicly championed the use of cutting-edge technology in border security. In recent months, DHS has touted investments in AI-powered tools for everything from screening travelers at ports of entry to monitoring social media activity of visa applicants. The Clearview AI contract fits neatly within this broader strategic push, even as civil liberties organizations sound increasingly urgent alarms.
Privacy Advocates and Legal Scholars Push Back
Organizations including the American Civil Liberties Union, the Electronic Frontier Foundation, and the Brennan Center for Justice have long warned that the federal government’s use of Clearview AI poses fundamental threats to privacy, free expression, and due process. The concern is not merely theoretical. Facial recognition technology has been shown in multiple peer-reviewed studies to exhibit higher error rates when identifying people of color, women, and older adults — raising the specter of misidentification leading to wrongful detention or deportation.
“The use of Clearview AI by federal agencies is essentially mass surveillance laundered through a private company,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, in previous public statements about government use of the tool. Critics note that Clearview AI’s database was built without the consent of the billions of individuals whose images it contains, and that deploying it for “tactical targeting” in an immigration enforcement context could chill free speech and association, particularly in immigrant communities already under heightened scrutiny.
How “Tactical Targeting” Differs from Traditional Investigative Use
The phrase “tactical targeting” is doing significant work in the contract’s framing, and it deserves close parsing. Traditional investigative use of facial recognition typically involves running a suspect’s image against a database after a crime has been committed or a specific lead has been developed. Tactical targeting, by contrast, implies a more proactive posture — identifying individuals in the field, potentially in real-time, as part of ongoing enforcement operations.
This distinction matters enormously from both an operational and a constitutional perspective. If CBP agents are using Clearview AI to identify individuals encountered during sweeps, checkpoints, or surveillance operations, the technology is functioning less like a detective’s tool and more like a dragnet. Legal experts have noted that such use could implicate Fourth Amendment protections against unreasonable searches, particularly if individuals are stopped or detained based primarily on a facial recognition match that may or may not be accurate.
Clearview AI’s Evolving Government Strategy
For Clearview AI, the CBP contract represents validation of a business strategy that has been relentlessly focused on government clients since the company’s commercial ambitions were curtailed by legal settlements and public backlash. CEO Hoan Ton-That has repeatedly argued that the technology saves lives and helps solve crimes, pointing to its use in identifying victims of child sexual exploitation and suspects in violent crimes. The company has also marketed its technology to the Department of Defense and has explored applications in overseas conflict zones.
Clearview AI’s pivot toward government work has been financially significant. The company has secured contracts with numerous federal, state, and local agencies, and its revenue from government clients has grown substantially. The CBP deal, while just one contract among many, carries symbolic weight: it demonstrates that even amid fierce public debate, the federal government is willing to deepen its reliance on a company that many privacy advocates consider a cautionary tale about unchecked surveillance capitalism.
Congressional Oversight Remains Fragmented and Uncertain
Despite years of debate, Congress has not passed comprehensive federal legislation governing the use of facial recognition technology by law enforcement. Several bills have been introduced in prior sessions — including proposals for outright moratoriums on federal use of the technology — but none have advanced to a vote. The current political environment, with its emphasis on border security and law enforcement empowerment, makes passage of restrictive legislation even less likely in the near term.
Some members of Congress have continued to press for transparency. Senators Edward Markey and Jeff Merkley have previously called on DHS to disclose the full extent of its facial recognition programs and to conduct civil rights impact assessments before deploying new tools. But oversight hearings have been sporadic, and the executive branch has largely been left to set its own policies regarding the acquisition and use of biometric surveillance technologies.
What Comes Next: The Broader Implications for Civil Liberties and Federal Power
The CBP-Clearview AI contract is not an isolated event. It is a data point in a rapidly accelerating trend toward the integration of artificial intelligence and biometric tools into the daily operations of federal law enforcement. As these technologies become more capable and more deeply embedded in agency workflows, the window for meaningful democratic deliberation about their appropriate use narrows.
For industry observers, the deal underscores a critical reality: the market for government facial recognition contracts is growing, regulatory constraints remain minimal at the federal level, and companies like Clearview AI have successfully navigated legal and reputational challenges to secure their position as key vendors. For civil liberties advocates, the contract is a warning that the infrastructure of mass biometric surveillance is being assembled incrementally, one procurement at a time, with insufficient public scrutiny or legal safeguards.
The question now is whether the courts, Congress, or the public will intervene before the technology becomes so deeply entrenched that rolling it back becomes practically impossible. If history is any guide, the answer is far from certain — and the stakes, for millions of people who may never know their face was scanned, could not be higher.


WebProNews is an iEntry Publication