In a cramped London flat, a content creator with a smartphone and a knack for algorithmic manipulation has been quietly assembling one of the most prolific anti-immigrant disinformation operations on TikTok — reaching millions of viewers with fabricated stories designed to stoke fear, resentment, and hatred toward migrants in the United Kingdom. The operation, recently exposed by investigative journalists, reveals the alarming ease with which a single individual can weaponize social media platforms to shape public sentiment on immigration, one of the most politically charged issues in British life.
The creator, whose identity has been partially shielded but whose digital footprint has been meticulously traced, has been producing dozens of videos per week — many of them featuring entirely invented stories about immigrants committing crimes, receiving lavish government benefits, or displacing British citizens from housing and jobs. The content is crafted with a veneer of authenticity: urgent voiceovers, screenshots of fake news articles, and references to real locations that lend the fabrications an air of credibility. According to an investigation by London Centric, the creator has amassed hundreds of thousands of followers and generated tens of millions of views, making them one of the most influential purveyors of anti-immigrant content on the platform in the UK.
The Anatomy of a Disinformation Factory
What distinguishes this operation from the garden-variety xenophobic rant is its industrial scale and calculated sophistication. As London Centric detailed, the creator does not simply share opinions about immigration policy — they manufacture fake news stories wholesale. Videos frequently claim that migrants have been given priority access to council housing, that asylum seekers are receiving thousands of pounds in cash handouts, or that immigrant crime waves are sweeping through specific British towns. In nearly every case, the stories are either entirely fabricated or represent grotesque distortions of real events.
The production method is disturbingly efficient. The creator appears to use a template-based approach: identify a topic that is already generating anxiety among the British public, fabricate a specific and emotionally charged anecdote, and then package it in TikTok’s short-form video format with all the hallmarks of a legitimate news report. The algorithm does the rest. TikTok’s recommendation engine, which prioritizes engagement above virtually all other metrics, reliably amplifies content that provokes strong emotional reactions — and few topics provoke stronger reactions than immigration. The result is a feedback loop in which the most inflammatory and dishonest content is systematically rewarded with the widest distribution.
TikTok’s Algorithm: The Silent Accomplice
The role of TikTok’s algorithmic architecture in enabling this kind of operation cannot be overstated. Unlike legacy social media platforms where reach is largely determined by follower count, TikTok’s For You Page can catapult content from unknown creators to millions of viewers overnight. This design feature, which has been celebrated for democratizing content creation, has a dark corollary: it also democratizes disinformation. A single bad actor with no institutional backing, no media credentials, and no accountability can reach an audience that rivals that of established news organizations.
TikTok has faced mounting scrutiny from regulators and researchers over its handling of misinformation and harmful content. The platform has repeatedly stated that it removes content that violates its community guidelines, including hate speech and dangerous misinformation. However, critics argue that enforcement is inconsistent and reactive rather than proactive. In the case documented by London Centric, many of the creator’s videos remained live for extended periods, accumulating millions of views before any action was taken. Even when individual videos were removed, the creator’s account continued to operate, and new content was uploaded at a pace that outstripped moderation efforts.
The Real-World Consequences of Digital Lies
The stakes of this kind of disinformation extend far beyond the digital realm. In recent years, the United Kingdom has experienced a sharp rise in anti-immigrant sentiment and, in some cases, outright violence. The summer of 2024 saw a wave of riots across English towns, fueled in significant part by false information circulating on social media about the identity and background of the suspect in a stabbing attack in Southport. While the causes of those disturbances were complex and multifaceted, researchers and law enforcement officials have pointed to the role of online disinformation in accelerating and intensifying the violence.
The TikTok operation exposed by London Centric fits squarely within this pattern. By flooding the platform with fabricated stories about immigrant crime and welfare abuse, the creator is not merely reflecting existing prejudices — they are actively constructing a false narrative framework that makes real-world hostility toward immigrants seem rational and justified. Each fake story functions as a data point in an imaginary crisis, and when viewers encounter dozens of such stories in their feeds, the cumulative effect is a distorted perception of reality that can translate into support for extreme policy measures or, in the worst cases, direct action against migrant communities.
The Economics of Outrage
There is also a financial dimension to this operation that deserves scrutiny. TikTok’s Creator Fund and various monetization features mean that viral content can generate real income for its producers. While the precise earnings of the creator in question have not been publicly disclosed, the sheer volume of views their content has attracted suggests that anti-immigrant disinformation is not just an ideological project — it is a business model. The economics are straightforward: outrage drives engagement, engagement drives views, and views drive revenue. In this calculus, the truth is not merely irrelevant; it is an obstacle to profitability.
This monetization dynamic creates perverse incentives that extend beyond any single creator. When platforms financially reward content that generates strong emotional reactions without adequately penalizing dishonesty, they create a marketplace in which disinformation entrepreneurs can thrive. The London-based TikTok creator is not an anomaly but a rational actor within a system that has been designed — whether intentionally or not — to reward exactly this kind of behavior. Until the economic incentives are restructured, there is every reason to expect that similar operations will continue to proliferate.
Regulatory Gaps and the Limits of Platform Self-Governance
The UK government has been attempting to address the challenge of online harms through the Online Safety Act, which received Royal Assent in October 2023 and is being implemented in phases by Ofcom, the communications regulator. The legislation imposes new duties on platforms to protect users from illegal content and to shield children from certain categories of content that is legal but harmful. However, the practical effectiveness of the regime remains to be seen: enforcement is complex, the definitional boundaries of harmful content are contested, and platforms have significant resources with which to resist or delay compliance.
Moreover, the speed at which disinformation can be produced and distributed on platforms like TikTok poses a fundamental challenge to any regulatory framework that relies on after-the-fact enforcement. By the time a piece of fake content is identified, reviewed, and removed, it may have already been viewed millions of times, shared across multiple platforms, and absorbed into the belief systems of its audience. The damage, in other words, is done before the remedy can be applied. This temporal mismatch between the speed of disinformation and the pace of regulation is one of the most vexing problems facing policymakers worldwide.
A Test Case for Democratic Resilience
The case of the London TikTok creator is, in many respects, a microcosm of a much larger challenge confronting democratic societies. The combination of powerful algorithmic amplification, low barriers to content production, financial incentives for outrage, and inadequate regulatory oversight has created conditions in which disinformation can flourish with minimal friction. Immigration, as a topic that touches on deep questions of identity, belonging, and resource allocation, is particularly vulnerable to this kind of exploitation.
What makes this case especially instructive is its ordinariness. This is not a state-sponsored influence operation or a coordinated campaign by a political organization. It is, by all appearances, the work of a single individual operating from London, exploiting freely available tools and platform features to manufacture and distribute fake news at scale. If one person can do this, the implications for the integrity of public discourse are profound. The question is not whether such operations can be stopped entirely — they almost certainly cannot — but whether societies and their institutions can develop the resilience, the media literacy, and the regulatory frameworks necessary to limit their impact.
For now, the videos continue to circulate, the views continue to accumulate, and the false stories continue to seep into the public consciousness. The creator, as documented by London Centric, remains active. And the platform that hosts and amplifies their content continues to profit from the engagement it generates. Until that equation changes, the disinformation machine will keep running.
Inside the Machine: How a London-Based TikTok Creator Built a Disinformation Empire Targeting Immigrants first appeared on Web and IT News.

