Privacy is heading into a brutal year. According to a detailed forecast from Computer Weekly, 2026 will bring an unprecedented convergence of political, technological, and corporate pressures that threaten to erode personal data protections on a global scale. For industry professionals who’ve spent years building compliance frameworks and privacy-first architectures, the ground is shifting fast.
The warning comes largely from Simon McDougall, a former deputy commissioner at the UK’s Information Commissioner’s Office (ICO), now a partner at the consultancy Bho Advisory. McDougall doesn’t mince words. He argues that privacy will face attacks from multiple directions simultaneously — government surveillance expansion, AI-driven data harvesting, and a political climate increasingly hostile to regulatory enforcement. Not one threat. All of them at once.
Start with the political dimension. The Trump administration in the US has already signaled a deregulatory posture toward tech companies, and McDougall sees this filtering into global norms. When the world’s largest economy relaxes its stance on data protection, it creates permission structures for other governments to do the same. The UK, already post-Brexit and eager for trade deals, could soften its own rules to maintain alignment with American commercial interests. And the EU, long considered the gold standard for privacy through GDPR, is facing its own internal pressures from member states pushing for AI competitiveness over individual rights.
That tension between innovation and protection isn’t new. But it’s intensifying.
McDougall points to AI as the primary accelerant. Large language models and generative AI systems are voracious consumers of personal data, and the companies building them have strong financial incentives to resist constraints on data collection. The argument from Silicon Valley has been consistent: restricting training data means restricting progress. That framing has gained traction in policy circles, particularly as the US and China race to dominate AI development. Privacy, in this context, gets recast as an obstacle rather than a right.
So what does this mean practically? For one, expect enforcement to weaken. McDougall notes that data protection authorities around the world are already underfunded and understaffed. The UK's ICO faces mounting budget pressure even as the complexity of its mandate grows. The US still has no comprehensive federal privacy law, relying instead on a patchwork of state statutes such as California's CCPA. And even where strong laws exist on paper, the political will to enforce them is eroding. Fines that once made headlines are becoming cost-of-business calculations for major tech firms.
There’s also the surveillance angle. Governments on both sides of the Atlantic are expanding their access to personal data under the banner of national security and public safety. The UK’s Online Safety Act, while framed as a child protection measure, grants authorities broader powers to compel platforms to monitor and share user data. Similar legislative moves are underway across Europe and in Australia. The pattern is consistent: safety justifications open doors that are very difficult to close again.
McDougall isn’t alone in sounding the alarm. Privacy International and the Electronic Frontier Foundation have both flagged 2025-2026 as a critical period for digital rights, citing the intersection of AI deployment, weakened regulatory bodies, and authoritarian-leaning governance trends. Wired has covered the growing tension between AI companies and European regulators extensively, documenting how firms like Meta and OpenAI have pushed back against data processing restrictions.
Here’s what makes this moment different from previous privacy scares. It’s not a single technology or a single bad actor. It’s structural. The economic incentives, the political incentives, and the technological capabilities are all aligned against privacy protections at the same time. That kind of convergence doesn’t happen often, and when it does, the effects tend to be durable.
For professionals in security, compliance, and data governance, the practical implications are significant. Organizations that have built their strategies around regulatory frameworks may find those frameworks weakened or unenforced. Privacy-by-design principles, once seen as forward-thinking, may become the primary line of defense when external enforcement falters. Companies will need to decide whether privacy is a compliance checkbox or an actual value — because the checkbox alone won’t hold.
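To make privacy-by-design concrete, here is a minimal, hypothetical sketch of the idea in Python. Everything in it, including the field names, the allowlist, and the key handling, is illustrative rather than any specific framework's API. The point is that data minimization and pseudonymization live in the ingestion code path itself, so downstream systems never hold raw identifiers no matter how enforcement trends play out.

```python
import hashlib
import hmac
import os

# Illustrative only: collect just the fields a feature actually needs,
# and pseudonymize identifiers before they reach downstream storage.
ALLOWED_FIELDS = {"country", "plan_tier", "signup_month"}  # explicit allowlist


def pseudonymize(user_id: str, secret_key: bytes) -> str:
    """Replace a raw identifier with a keyed hash (HMAC-SHA256).

    Without the key, the original ID cannot be recovered or
    correlated across datasets.
    """
    return hmac.new(secret_key, user_id.encode(), hashlib.sha256).hexdigest()


def minimize(record: dict, secret_key: bytes) -> dict:
    """Keep only allowlisted fields; drop everything else by default."""
    slim = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    slim["user_ref"] = pseudonymize(record["user_id"], secret_key)
    return slim


if __name__ == "__main__":
    key = os.urandom(32)  # in practice, a managed secret, rotated on a schedule
    raw = {
        "user_id": "u-1029",
        "email": "jane@example.com",   # never stored downstream
        "ip_address": "203.0.113.7",   # never stored downstream
        "country": "DE",
        "plan_tier": "pro",
        "signup_month": "2025-11",
    }
    print(minimize(raw, key))
```

The design choice worth noting is the allowlist: new fields stay out until someone makes the case for collecting them, which inverts the collect-everything default that weak external enforcement tends to invite.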
And for the rest of us? The ones who grew up watching technology transform from a hobbyist’s playground into the infrastructure of daily life? We should pay attention. I’ve been tinkering with tech since I was a kid in the Midwest, and one thing I’ve learned is that the systems we build reflect the values we prioritize. If privacy drops off the priority list in 2026, the architecture we build during that period will carry those compromises forward for years.
McDougall’s forecast isn’t speculative doom. It’s a pattern-matched prediction based on observable trends in policy, funding, and corporate behavior. The data supports it. The question isn’t really whether privacy will be under pressure in 2026. It’s whether anyone with power will push back.