TikTok Lays Off 150 in Berlin, Replaces Safety Team with AI

TikTok is replacing its Berlin trust and safety team with AI systems and outsourced labor, laying off about 150 employees as part of ByteDance's global push to automate moderation and cut costs. Similar moves have played out in Malaysia and the U.S. amid regulatory pressure. Critics warn that AI struggles with nuanced moderation decisions, and the cuts have sparked strikes and ethical concerns. The episode may push platforms toward hybrid models that pair automation with human oversight.
Written by John Smart

TikTok’s decision to overhaul its content moderation operations in Germany marks a pivotal shift in how social media giants are leveraging artificial intelligence to manage vast digital ecosystems. According to a recent report in The Guardian, the company is set to replace its Berlin-based trust and safety team with AI systems and outsourced labor, leading to the dismissal of around 150 employees. This move is part of a broader global strategy by TikTok’s parent company, ByteDance, to automate moderation processes amid mounting pressures to cut costs and enhance efficiency.

Workers in Berlin have responded with strikes, highlighting the human cost of this transition. The ver.di trade union, representing the affected staff, argues that the layoffs not only jeopardize jobs but also raise questions about the reliability of AI in handling nuanced content decisions, such as identifying hate speech or misinformation. TikTok, however, maintains that the changes will improve moderation speed and consistency, drawing on lessons from similar restructurings elsewhere.

The Global Wave of AI-Driven Layoffs in Moderation

Recent developments echo moves in other regions. Last year, TikTok laid off nearly 500 moderators in Malaysia and replaced them with AI tools, as detailed in a report from the Institute of Strategic and International Studies. The pattern underscores ByteDance’s aggressive push toward automation, with insiders noting that AI can process content at scales impossible for human teams alone. Yet critics point to potential pitfalls, including biases embedded in algorithms trained on imperfect data sets.

In the U.S., TikTok has merged its core product and trust and safety teams, appointing Adam Presser as general manager of its U.S. Data Security division, according to TechCrunch. This reorganization aims to bolster national security safeguards while integrating AI more deeply into operations. Industry experts suggest these moves are driven by regulatory scrutiny, particularly in Europe under the Digital Services Act, which demands robust content oversight.

Challenges and Ethical Dilemmas of AI Moderation

The reliance on AI isn’t without controversy. Posts on X (formerly Twitter) from tech analysts and labor advocates reflect growing unease, with some highlighting how AI might struggle with cultural nuances in content from diverse regions such as Germany. One thread emphasized the risk of “adaptive threat response” systems overlooking subtle harms, potentially exacerbating problems like misinformation during elections.

Moreover, TikTok’s recent safety updates, including enhanced family controls and well-being missions, as reported by Rappler, show an attempt to balance automation with user protection. In Kenya, for instance, the platform removed over 450,000 videos and banned 43,000 accounts in early 2025 for violations, per The Kenyan Wall Street, relying on a hybrid of AI and human review.
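The specifics of that hybrid pipeline aren’t public, but the basic pattern is well understood: an automated classifier handles the clear-cut cases and escalates ambiguous ones to human reviewers. The sketch below illustrates that routing logic with hypothetical thresholds; it is a conceptual example, not a description of TikTok’s actual system.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- real platforms tune these per policy area and region.
AUTO_REMOVE_THRESHOLD = 0.95   # model is near-certain the content violates policy
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain cases go to a human moderator

@dataclass
class ModerationDecision:
    action: str    # "remove", "human_review", or "allow"
    score: float   # classifier's violation probability

def route_content(violation_score: float) -> ModerationDecision:
    """Route content based on a classifier's violation probability.

    Illustrative sketch of a hybrid AI/human pipeline: high-confidence
    violations are removed automatically, while ambiguous cases are
    escalated to human reviewers.
    """
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return ModerationDecision("remove", violation_score)
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return ModerationDecision("human_review", violation_score)
    return ModerationDecision("allow", violation_score)

# Example: a borderline post is escalated rather than auto-removed.
print(route_content(0.72))  # ModerationDecision(action='human_review', score=0.72)
```

The labor question at the heart of the Berlin dispute is, in effect, how wide that middle band of human-reviewed cases should be and who staffs it.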

Industry Implications and Future Outlook

For industry insiders, TikTok’s strategy signals a broader trend among platforms like Meta and YouTube, where AI investments are surging to handle exponential content growth. However, the Berlin strikes, covered extensively in Euronews, could inspire similar labor actions globally, forcing companies to address worker retraining and ethical AI deployment.

ByteDance’s approach may yield cost savings—estimated at millions annually—but at the risk of eroding trust if AI falters. As one X post from a content moderation expert noted, the “automation revolution” promises efficiency but demands rigorous oversight to avoid fairness issues. Looking ahead, regulators and unions may push for hybrid models that retain human judgment, ensuring platforms like TikTok navigate the AI era without sacrificing safety or equity.

Balancing Innovation with Human Elements

Ultimately, TikTok’s pivot reflects the tech sector’s race to integrate AI amid economic headwinds. Reports from PCMag indicate hundreds of global layoffs tied to this focus, yet the company has unveiled creator tools to foster positive content, as per SecurityOnline. For insiders, the key question is whether these changes will enhance platform integrity or expose vulnerabilities in an increasingly automated world. As tensions rise, TikTok’s handling of this transition could set precedents for the entire social media industry.
