A class-action lawsuit against Grammarly alleges that AI companies are appropriating real professionals' identities without consent, creating significant governance risks for UK businesses that rely on AI writing tools. Julia Angwin, an investigative journalist, discovered that Grammarly's AI was generating writing advice using her name and professional reputation without permission, raising fundamental questions about digital identity rights in the AI era.
Identity appropriation in AI systems involves algorithms using real people's names, professional credentials, and reputations to generate content that appears to be authored or endorsed by those individuals. According to reporting from National Today, this practice extends beyond simple plagiarism into unauthorised commercial use of professional identities.
Key Facts:
- Grammarly's AI system allegedly generated writing advice using Julia Angwin's professional identity without consent
- The class-action lawsuit could establish precedents for AI companies' use of real people's professional credentials
- Right-of-publicity laws do not provide technology companies with exceptions for AI-generated content
- UK businesses using AI writing tools may face indirect liability for vendors' identity appropriation practices
What Legal Protections Exist for Professional Identity Rights?
Right-of-publicity laws in various jurisdictions protect individuals from unauthorised commercial use of their names, likenesses, and professional reputations. However, technology companies have historically assumed these protections don't apply to algorithmic content generation. Angwin's lawsuit challenges this assumption directly, arguing that automating the process does not make the appropriation of a professional identity lawful. For UK businesses, this legal uncertainty creates compliance risks when deploying AI tools that might generate content using real professionals' names or credentials.
How This Affects UK AI Governance Frameworks
UK organisations deploying AI writing tools face emerging governance challenges around third-party identity appropriation. The ICO's AI guidance emphasises accountability for algorithmic decisions, but many businesses haven't audited whether their AI vendors appropriate real people's professional identities without consent. This oversight becomes particularly critical for firms in regulated sectors like financial services, where the FCA expects robust third-party risk management. Companies using AI tools must now verify that vendors obtain proper consent before using real professionals' names or credentials in generated content.
Vendor Due Diligence Requirements
The Grammarly case demonstrates why AI vendor assessments must extend beyond traditional data protection considerations. UK businesses should audit whether AI writing tools generate content using real people's professional identities, how vendors obtain consent for such use, and what indemnification exists for identity appropriation claims. This is especially relevant given the interconnected nature of modern AI systems, where one vendor's practices can create liability exposure for many downstream users. AI governance teams should develop specific protocols for evaluating identity appropriation risks in AI vendor contracts.
Strategic Implications for UK Boards
This lawsuit signals a broader shift in AI accountability, where courts may hold technology companies responsible for appropriating professional identities without consent. UK businesses should expect increased scrutiny of AI vendor practices, particularly around content generation that uses real people's names or professional credentials. Boards must ensure their AI governance frameworks address third-party identity appropriation risks and establish clear vendor accountability standards. The legal precedents emerging from cases like Angwin's will likely shape future AI regulation, making proactive governance essential for managing both reputational and legal exposure.