AI & Technology

UK Government's March 18 AI Copyright Deadline Could Reshape Business Data Rights

26 March 2026 · 4 min read


The UK government has missed its statutory deadline to publish critical reports on AI and copyright protections under the Data (Use and Access) Act 2025. The 18 March 2026 deadline passed without the promised analysis that would determine whether businesses can legally protect their intellectual property from unauthorised AI training. This regulatory vacuum leaves UK organisations uncertain about their data rights whilst AI systems continue accessing proprietary content.

The Data (Use and Access) Act 2025 represents the UK's most significant attempt to balance AI innovation with intellectual property protection. Under the Act, the government was required to examine whether existing copyright exceptions adequately cover AI training activities and whether new protections are needed for rights holders.

Key Facts:
- The 18 March 2026 deadline for AI copyright reports under the Data (Use and Access) Act 2025 has passed without publication
- Current UK law provides no explicit opt-out mechanism for businesses to prevent AI training on their data
- The European Union's AI Act includes provisions for transparency in AI training datasets that the UK has not yet matched
- Legal uncertainty continues around whether scraping publicly available business data constitutes fair dealing under current copyright law

What the Missing Reports Should Have Covered

According to reporting from Slaughter and May, the government's analysis was meant to address three critical questions: whether current copyright exceptions adequately cover AI training, whether rights holders need new opt-out mechanisms, and how to balance innovation with intellectual property protection. Without this analysis, businesses remain in legal limbo regarding their proprietary content.

The reports should have examined whether the existing "text and data mining" exceptions in copyright law provide sufficient coverage for AI training activities. Currently, these exceptions allow research organisations and libraries to process copyrighted material under specific conditions, but their application to commercial AI development remains unclear. The government's failure to clarify this position leaves businesses vulnerable to unauthorised use of their content.

The Competitive Intelligence Problem

Without clear regulatory guidance, UK businesses face an asymmetric risk where competitors using AI systems may gain unfair access to proprietary information through training data. Marketing materials, technical documentation, and strategic communications published online could be systematically harvested and analysed by AI models without consent or compensation.

This creates particular challenges for professional services firms, consultancies, and technology companies whose competitive advantage relies on proprietary methodologies and insights. The current legal framework provides no mechanism to prevent AI systems from processing publicly available content, even when that content represents significant intellectual investment.

The uncertainty extends to internal business data. Whilst businesses can control access to confidential information, the boundary between public and protected content becomes blurred when considering AI training datasets. Many organisations inadvertently expose strategic information through marketing content, case studies, and thought leadership that could be systematically processed by competitor AI systems.
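Absent a statutory opt-out, many publishers fall back on voluntary crawler controls in robots.txt. The sketch below is a minimal illustration, not a legal protection: it uses Python's standard-library `urllib.robotparser` to check which AI crawler user agents a site's robots.txt would block. GPTBot (OpenAI), CCBot (Common Crawl) and Google-Extended (Google's AI training token) are published crawler identifiers; the sample robots.txt content is hypothetical, and well-behaved crawlers honour these directives only voluntarily.

```python
from urllib import robotparser

# Hypothetical robots.txt that opts out of one AI crawler
# while leaving the site open to everything else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

# Published user-agent tokens for common AI training crawlers.
AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended"]

def blocked_agents(robots_txt: str, url: str, agents: list[str]) -> list[str]:
    """Return the agents that the given robots.txt disallows for `url`."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [agent for agent in agents if not rp.can_fetch(agent, url)]

print(blocked_agents(ROBOTS_TXT, "https://example.com/insights/", AI_CRAWLERS))
# → ['GPTBot']
```

A check like this can form part of a periodic content-governance audit, but it only signals intent: it does not stop a crawler that ignores robots.txt, which is precisely the enforcement gap the missing reports were meant to address.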

International Divergence Creates Compliance Complexity

The UK's regulatory delay contrasts sharply with developments in other jurisdictions. The European Union's AI Act includes provisions requiring AI developers to provide transparency about their training datasets and to respect opt-out requests from rights holders. This creates a situation where UK businesses operating internationally face different data protection standards across markets.

The NCSC's guidelines on secure AI deployment emphasise the importance of data governance but provide limited guidance on intellectual property protection. This regulatory gap becomes more problematic as organisations increasingly deploy AI agents that require careful oversight to prevent unauthorised data access.


PTG Advisory Team
Pacific Technology Group

Related Reading

- UK Spy Chief's Vibe Coding Warning Creates Security Standards Crisis
- Gartner Calls for Friday Afternoon Copilot Bans Due to User Laziness Risk
- AWS AI Sandbox Cracked Open Through DNS Attack Vector
- AWS AI Sandbox Fails at DNS Level, Exposes Cloud Credentials
- Banks Finally Build AI Governance Frameworks as Regulation Tightens


Ready to strengthen your cyber resilience?

Talk to our team about protecting your organisation against evolving threats.

Get in Touch