In the shadowy underbelly of social media advertising, where billions of dollars flow through algorithms that often prioritize profits over protection, a new initiative is emerging from unlikely quarters: former executives of Meta Platforms Inc. Rob Leathern and Rob Goldman, both veterans of the company’s ad integrity efforts, are launching a nonprofit aimed at piercing the veil of opacity that allows scam ads to proliferate unchecked. Their venture, Collective Metrics, seeks to standardize transparency in digital advertising, potentially reshaping how platforms report and combat fraudulent content.
Drawing from their insider experience—Leathern as Meta’s former vice president of product management for business integrity, and Goldman as the ex-vice president of product management for ads—the duo is positioning their organization as a neutral arbiter in an industry plagued by mistrust. According to a report in Wired, the nonprofit will focus on creating benchmarks for ad transparency, including metrics on scam prevalence and platform enforcement efficacy. This comes amid growing scrutiny of Meta’s ad ecosystem, where internal documents reveal staggering volumes of fraudulent promotions.
The Alarming Scale of Social Media Scams
A recent investigation by Reuters exposes the depth of the problem: Meta projected that 10% of its 2024 revenue—approximately $16 billion—stemmed from ads promoting scams or banned goods. The company estimates its platforms deliver 15 billion scam ads daily to users, a figure that underscores the systemic failures in automated moderation systems. These ads range from cryptocurrency Ponzi schemes to counterfeit luxury items and even sextortion ploys, exploiting vulnerabilities in user targeting algorithms.
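The two Reuters figures are internally consistent, as a quick back-of-the-envelope check shows (the ~$164.5B full-year figure below is Meta's publicly reported 2024 revenue, added here for context):

```python
# Sanity-checking the Reuters figures: if roughly $16B is 10% of 2024
# revenue, the implied total is about $160B, in line with Meta's
# reported full-year 2024 revenue of roughly $164.5B.
scam_revenue = 16e9   # scam/banned-goods ad revenue per Reuters
share = 0.10          # reported share of total 2024 revenue
implied_total = scam_revenue / share
print(f"${implied_total / 1e9:.0f}B")  # $160B
```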
Insiders familiar with Meta’s operations, as detailed in the Reuters documents, note that the company bans advertisers only when its automated systems predict with at least 95% certainty that an account is committing fraud, a threshold that allows many bad actors to slip through. This lax enforcement has drawn bipartisan criticism, with figures like New York Attorney General Letitia James calling on Meta to bolster protections against investment scams that have siphoned hundreds of millions of dollars from users, according to posts on X (formerly Twitter).
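The enforcement rule Reuters describes reduces to a simple threshold decision. A minimal sketch, with illustrative names and scores that are not Meta's actual implementation:

```python
# Hypothetical sketch of the threshold rule described in the Reuters
# documents: an advertiser is banned outright only when an automated
# model's fraud-certainty score reaches 95%.

BAN_THRESHOLD = 0.95  # certainty required before an outright ban

def enforcement_action(fraud_score: float) -> str:
    """Map a model's fraud-certainty score to an enforcement action."""
    if fraud_score >= BAN_THRESHOLD:
        return "ban"
    # Anything below the threshold continues to run.
    return "allow"

# An advertiser the model rates 90% likely to be fraudulent still runs ads.
print(enforcement_action(0.90))  # allow
print(enforcement_action(0.97))  # ban
```

The gap between "suspected" and "95% certain" is exactly where, per the reporting, large volumes of scam ads continue to circulate.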
From Meta’s Halls to Nonprofit Advocacy
Leathern and Goldman’s departure from Meta wasn’t abrupt; both left in 2023 after years of grappling with the company’s ad challenges. Leathern, who joined Meta via the acquisition of his startup in 2010, oversaw efforts to curb misinformation and scam ads during pivotal events like the 2020 U.S. elections. Goldman, meanwhile, managed ad product strategies and faced public backlash over political ad policies. Their new nonprofit, Collective Metrics, aims to collaborate with platforms, advertisers, and regulators to establish verifiable standards for ad safety.
As reported in Wired, the organization plans to publish annual reports on platform performance, using data volunteered by companies or scraped from public sources. “We want to create a common language for transparency,” Leathern told Wired, emphasizing the need for metrics that go beyond self-reported figures. This approach echoes calls from consumer advocates and could pressure giants like Meta, Google, and TikTok to disclose more about their ad vetting processes.
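What a "common language for transparency" might look like in practice: two of the metrics named above, scam prevalence and enforcement efficacy, computed from self-reported counts. The field names and numbers below are hypothetical illustrations, not Collective Metrics' actual schema:

```python
from dataclasses import dataclass

@dataclass
class PlatformReport:
    """Hypothetical self-reported figures; all field names are illustrative."""
    platform: str
    ads_served: int          # total ad impressions in the reporting period
    scam_ads_detected: int   # scam impressions the platform identified
    scam_ads_removed: int    # of those, how many were actually taken down

def scam_prevalence(r: PlatformReport) -> float:
    """Share of all served impressions identified as scams."""
    return r.scam_ads_detected / r.ads_served

def enforcement_efficacy(r: PlatformReport) -> float:
    """Share of identified scam impressions actually removed."""
    return r.scam_ads_removed / r.scam_ads_detected

report = PlatformReport("ExamplePlatform", 1_000_000_000, 5_000_000, 4_000_000)
print(f"prevalence: {scam_prevalence(report):.2%}")  # 0.50%
print(f"efficacy:   {enforcement_efficacy(report):.2%}")  # 80.00%
```

Standardized definitions like these are what would let a third party compare Meta, Google, and TikTok on equal terms rather than on each company's self-chosen figures.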
Industry-Wide Ramifications and Regulatory Pressure
The timing of Collective Metrics’ launch coincides with heightened regulatory scrutiny. In the U.K., the Advertising Standards Authority (ASA) has ramped up campaigns against misleading ads, with figures like Martin Lewis of MoneySavingExpert.com publicly decrying the “frustration” of unchecked scams on platforms, as seen in his X posts. Across the Atlantic, U.S. lawmakers are eyeing stricter rules, inspired by revelations that Meta’s competitors, such as Google, reportedly fare better in fraud detection, per a report in Economic Times Telecom.
Internal Meta debates, as uncovered by Reuters, highlight a tension between revenue growth and ethical advertising. Documents show executives acknowledging that scam ads inflate engagement metrics, indirectly boosting platform value. Yet, this short-term gain erodes user trust—a point echoed in X discussions where users like tech commentator Aravind criticize Meta for allowing spam links to persist “just to inflate engagement rates.”
Technological Hurdles in Scam Detection
At the heart of the scam epidemic lie the limitations of AI-driven moderation. Meta’s systems, while sophisticated, struggle with evolving tactics like AI-generated deepfakes in ads, which mimic legitimate endorsements. A briefing in The Information notes that Meta’s internal projections for 2024 included ads for fraudulent schemes that evade detection through subtle manipulations. Former staffers like Leathern argue that transparency metrics could incentivize better investments in detection AI.
Comparisons with peers reveal disparities: Google’s ad review processes reportedly catch more fraud upfront, leading to lower scam volumes, according to industry analyses cited in Economic Times Telecom. Collective Metrics plans to benchmark these differences, potentially exposing laggards and fostering competition in ad integrity. “It’s about making the invisible visible,” Goldman explained in the Wired interview, highlighting how opaque reporting hinders progress.
Voices from the Ground: User and Expert Sentiments
Sentiment on X reflects widespread frustration, with posts from figures like Carol M. Swain celebrating legal setbacks for Meta, such as a court ruling allowing lawsuits over fraudulent ads to proceed. Advocacy groups and attorneys general are amplifying these concerns, urging platforms to prioritize user safety over ad dollars. In one X thread, a user recounted how internal teams at tech firms allegedly coached scammers to comply with policies while ignoring underlying fraud—a claim that aligns with whistleblower accounts in Reuters’ reporting.
Experts warn that without intervention, the scam deluge could undermine the digital economy. A WebProNews analysis delves into the implications, noting that Meta’s $16 billion in scam-derived revenue represents a moral and financial quagmire. Collective Metrics’ data-driven approach could provide the empirical backbone for policy changes, bridging the gap between tech insiders and regulators.
Potential Pathways for Reform
Looking ahead, Collective Metrics envisions partnerships with organizations like the Better Business Bureau and international watchdogs to validate its metrics. By aggregating anonymized data from multiple platforms, the nonprofit could create industry-wide dashboards, revealing trends in scam tactics and enforcement efficacy. This model draws inspiration from financial transparency standards, adapting them to the ad tech space.
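The aggregation step described above can be sketched simply: each platform volunteers a figure, and only pooled statistics are published, so no single contributor's raw data is exposed. Everything below is an illustrative assumption, not the nonprofit's actual methodology:

```python
from statistics import median

def industry_dashboard(prevalence_by_platform: dict[str, float]) -> dict[str, float]:
    """Publish only pooled statistics from anonymized per-platform submissions."""
    values = sorted(prevalence_by_platform.values())
    return {
        "contributors": len(values),
        "median_prevalence": median(values),
        "best_prevalence": values[0],    # lowest scam share among contributors
        "worst_prevalence": values[-1],  # highest scam share among contributors
    }

# Hypothetical scam-prevalence submissions (share of impressions that are scams).
submissions = {"PlatformA": 0.004, "PlatformB": 0.012, "PlatformC": 0.007}
print(industry_dashboard(submissions))
```

A pooled view like this mirrors how financial transparency standards publish industry benchmarks without disclosing any one firm's books.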
Challenges remain: Platforms may resist sharing sensitive data, fearing competitive disadvantages. Yet, as public pressure mounts—evidenced by X campaigns from influencers like PRGuy calling for crackdowns on misleading content—the incentive for cooperation grows. Leathern and Goldman’s initiative, born from their Meta tenure, represents a pivotal step toward accountability in an industry long criticized for its black-box operations.
Evolving Strategies Against Digital Deception
Innovative solutions are already in play elsewhere. For instance, some platforms experiment with user-reported scam flagging integrated with machine learning, but scalability issues persist. Collective Metrics aims to standardize these efforts, providing templates for transparent reporting that could be mandated by future regulations. As noted in an Engadget article, Meta’s ongoing profits from scam ads underscore the urgency of such reforms.
Ultimately, the success of this nonprofit could hinge on adoption by major players. With backing from former insiders who know the system’s flaws intimately, it offers a rare glimpse of hope in combating the scam flood that threatens social media’s foundational trust.


WebProNews is an iEntry Publication