Google has slipped a significant change into its Gmail service, allowing the tech giant to scan users’ emails and attachments for artificial intelligence training by default. The update, which applies to non-EU users, has sparked privacy alarms across the tech industry, with cybersecurity firms and watchdogs racing to inform users how to reclaim control over their data.
First reported by Malwarebytes, the policy shift means Google Workspace users—numbering in the hundreds of millions—must actively opt out to prevent their private correspondence from feeding models like Gemini. This comes amid intensifying scrutiny of Big Tech’s data hunger, as regulators in Europe tighten the reins while U.S. consumers grapple with opaque terms of service.
Policy Shift Under the Radar
The change stems from an update to Google’s ‘Gemini Apps Activity’ setting, which is now enabled by default for personal Google accounts outside the European Economic Area. According to ZDNet, this permits Google to review email content and attachments to improve AI functionalities, including natural language processing and content generation tools.
Google’s official stance, buried in privacy policy footnotes, clarifies that such data usage enhances services like smart replies and spam detection. Yet critics argue the shift from opt-in to opt-out prioritizes AI advancement over user consent, echoing past controversies like the 2023 Bard training data flap. WinBuzzer notes that non-EU users face a stark choice: disable the feature and potentially lose AI-powered Gmail perks, or surrender data for innovation.
Unpacking the Technical Mechanics
At its core, the system leverages Gmail’s existing scanning infrastructure, which once processed emails for targeted ads (Google ended ad-personalization scanning of consumer Gmail in 2017). Now, machine learning pipelines ingest anonymized snippets—subject lines, body text, and attachment metadata—to fine-tune large language models. WinBuzzer reports that while Google claims data is de-identified, experts warn of re-identification risks in aggregated datasets.
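To make the re-identification concern concrete, the minimal Python sketch below shows the kind of naive snippet redaction a de-identification pass might perform; the patterns, sample text, and comments are illustrative assumptions, not a description of Google’s actual pipeline. Even with obvious identifiers scrubbed, contextual details such as an appointment type and a clinic location survive, and across enough messages those details can single out an individual.

```python
import re

# Hypothetical redaction rules of the kind a naive de-identification pass might use.
# These patterns and the sample snippet are illustrative assumptions, not Google's pipeline.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "NAME_HINT": re.compile(r"\b(Dr|Mr|Ms|Mrs)\.\s+[A-Z][a-z]+\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

snippet = (
    "Hi Dr. Alvarez, my number is 415-555-0173 and my email is j.doe@example.com. "
    "See you at the Tuesday oncology appointment at the Mission Street clinic."
)

print(redact(snippet))
# The output keeps 'oncology appointment' and 'Mission Street clinic' intact;
# aggregated with other messages, such context can still point back to one person.
```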
For enterprise users on Google Workspace, administrators must navigate the Google Admin console to toggle ‘Gemini Apps Activity’ off domain-wide. Individual users access this via myaccount.google.com, under Data & Privacy > Gemini Apps Activity. Malwarebytes provides step-by-step visuals: pause the activity, review stored data, and delete past interactions—actions that take under five minutes but require proactive intent.
Industry Reactions and User Backlash
Cybersecurity outlets have amplified warnings, with Newsweek urging Gmail’s 1.8 billion users to scrutinize settings amid fears of sensitive data exposure. Posts on X from accounts like @Malwarebytes highlight the ‘nightmare of opting out,’ garnering thousands of views and shares and reflecting widespread user consternation.
Privacy advocates, including the Electronic Frontier Foundation (though not directly quoted here), have long criticized opt-out defaults as ‘privacy theater.’ Google’s move aligns with industry trends—Microsoft’s Copilot similarly trains on Outlook data—but stands out for its stealth rollout, lacking email notifications or homepage banners.
Broader Implications for AI Data Ecosystems
This isn’t isolated; Google’s October 2025 AI updates expanded Gemini’s scope across products, per the company’s blog. WindowsReport details how email training bolsters features like AI summaries and drafting, but at what cost? Legal experts predict class-action suits, especially if breaches expose trained models’ provenance.
EU users remain shielded by GDPR’s explicit consent mandates, creating a two-tier privacy regime that disadvantages Americans and others. Medianama reports Google’s updated AI Training Policy explicitly lists emails as training fodder, signaling a strategic pivot to proprietary data amid open-source alternatives like Llama.
Opt-Out Mechanics and Limitations
To disable: Log into your Google Account, navigate to Data & Privacy, select Gemini Apps Activity, and hit ‘Pause.’ This halts future use but doesn’t retroactively purge data—users must manually delete via activity controls. ZDNet warns that Workspace administrators control domain-level overrides, which can lock entire teams into the setting.
Limitations abound: Opting out may degrade AI features, like advanced search or auto-categorization. Moreover, attachments—photos, PDFs, docs—pose acute risks, as OCR and content analysis could extract personal identifiers. Malwarebytes stresses verifying two-factor authentication after making settings changes to thwart opportunistic phishing.
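On the attachment point, the short sketch below shows how off-the-shelf OCR plus simple pattern matching can lift identifiers out of a scanned document. It assumes the open-source pytesseract and Pillow libraries, a locally installed Tesseract binary, and a hypothetical file name; it is a generic illustration of content analysis, not Google’s implementation.

```python
import re

import pytesseract           # OCR wrapper; requires the Tesseract binary to be installed
from PIL import Image

# Generic illustration of why attachments are sensitive: once an image or scanned
# page is run through OCR, plain-text pattern matching can pick out identifiers.
# The file name and patterns below are assumptions for the example.
text = pytesseract.image_to_string(Image.open("scanned_attachment.png"))

identifiers = {
    "emails": re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text),
    "phones": re.findall(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", text),
    "account_numbers": re.findall(r"\b\d{8,16}\b", text),
}

for kind, values in identifiers.items():
    print(kind, values)
```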
Competitive Landscape and Alternatives
Rivals like ProtonMail tout end-to-end encryption with zero-knowledge AI, appealing to privacy hawks. Fastmail and Tutanota similarly shun scanning, though they lack Google’s ecosystem heft. As AI commoditizes email, users weigh convenience against control, a tension Google’s policy exacerbates.
X sentiment, per recent posts, skews negative: users decry ‘snooping’ and demand transparency. Google’s silence on rollout timelines fuels speculation of A/B testing, common in Silicon Valley but irksome to data-sovereignty advocates.
Regulatory Horizons and Future Risks
U.S. lawmakers, eyeing bipartisan privacy bills, may probe this as evidence of insufficient safeguards. California’s CCPA offers opt-out analogs, but enforcement lags. Globally, Brazil’s LGPD and India’s DPDP mirror GDPR, pressuring multinationals.
For insiders, the real play is Google’s moat-building: Exclusive data trains superior models, widening the AI chasm. Yet, backlash risks user exodus, per churn models from SimilarWeb data. As 2025 unfolds, expect policy tweaks—perhaps mandatory disclosures—under mounting pressure.

