Wednesday, February 11, 2026

Hundreds of TikTok Moderators Cut in Shift Toward AI Oversight

TikTok has begun laying off hundreds of human moderators in the United Kingdom and Asia as part of a broader shift toward artificial intelligence-driven content moderation. The company confirmed the decision this week, framing it as part of an ongoing reorganization of its global Trust and Safety operations. The move has sparked immediate backlash from unions and online safety advocates, who argue that human oversight remains critical to catching harmful content.

Mass Layoffs and Union Response

According to reports from the Wall Street Journal and the BBC, TikTok is cutting an undisclosed number of jobs from its roughly 2,500-strong UK workforce. Workers who lose their positions will receive priority consideration for rehiring, though the company has not specified what criteria will apply. The decision follows months of restructuring as TikTok consolidates moderation functions into fewer geographic hubs.

The Communications Workers Union (CWU), which represents some affected workers, criticized the cuts. John Chadfield, the CWU’s national tech officer, told the BBC that TikTok was “putting corporate greed over the safety of workers and the public.” In a separate comment to the Wall Street Journal, Chadfield argued that moderators had long warned about the risks of relying on what he called “hastily developed, immature AI alternatives.”

Critics worry that reducing human involvement could leave vulnerable users at risk if AI systems fail to catch nuanced or borderline harmful content. Moderators play a key role in identifying posts that algorithms misclassify or overlook, and unions argue that human judgment is especially important in areas such as child safety, harassment, and self-harm content.

TikTok Defends Its AI Approach

TikTok has defended its decision, stating that artificial intelligence has already become central to its moderation efforts. In a public statement, the company described its AI as “comprehensive” and emphasized that the technology was designed to protect both users and the employees tasked with reviewing sensitive material. The company added that the restructuring builds on changes introduced last year to strengthen its Trust and Safety model worldwide.

“TikTok is continuing a reorganization that we started last year to strengthen our global operating model for Trust and Safety, which includes concentrating our operations in fewer locations globally,” the statement said. The company has invested heavily in AI tools over several years and maintains that automation can help maximize “effectiveness and speed” in removing unsafe material from the platform.

TikTok also says its AI systems already identify and remove a large share of policy-violating content: by the company’s account, approximately 85 percent of rule-breaking posts are taken down automatically before they reach users. That figure has not been independently verified, and safety advocates argue that such statistics should be subject to greater transparency and third-party oversight.

Regulatory Pressure in the United Kingdom

The layoffs and shift toward AI moderation come as TikTok faces increased regulatory scrutiny in the United Kingdom and beyond. Earlier this year, the UK’s Information Commissioner’s Office launched an investigation into the company’s handling of data belonging to users aged 13 to 17. Regulators are examining how TikTok collects, stores, and uses this information, particularly in relation to child safety standards.

TikTok also cited new obligations under the UK’s Online Safety Act, whose duties came into force in July, as part of its rationale for expanding AI-driven moderation. The legislation requires social media platforms to comply with stricter rules for protecting users, with non-compliance punishable by fines of up to £18 million or 10 percent of global annual revenue, whichever is greater. By relying more heavily on AI, the company says it can better meet these regulatory demands at scale.

Despite these assurances, critics maintain that automation alone cannot replace the judgment and contextual awareness of human moderators. They warn that TikTok’s decision could increase risks for users while leaving dismissed workers without adequate protections. The coming months will likely determine whether regulators, unions, and the public accept TikTok’s reliance on AI as a sustainable approach to content moderation.