Home Cybersecurity Disaster Recovery Identity Security AI Governance Sectors IT Services About Insights Contact
AI & Technology

AWS AI Sandbox Fails at DNS Level, Exposes Cloud Credentials

16 March 2026 · 4 min read

← All insights

Security researchers have demonstrated a critical vulnerability in AWS Bedrock's AI code interpreter that allows attackers to bypass sandbox protections and exfiltrate cloud credentials through DNS queries. The attack exploits the fundamental trust relationship between AI agents and their training data, turning seemingly innocuous CSV files into credential theft vectors.

AWS Bedrock's code interpreter is a sandboxed environment designed to execute AI-generated code safely whilst preventing unauthorised network access or data exfiltration. However, security researchers discovered that malicious data inputs can manipulate AI-generated code to create covert communication channels through DNS requests, effectively circumventing network restrictions and exposing sensitive cloud infrastructure credentials.

Key Facts:
- AWS Bedrock's AI code interpreter sandbox can be bypassed through DNS-based exfiltration techniques
- Malicious CSV files can influence AI code generation to create unauthorised network communication channels
- The vulnerability exposes cloud credentials and creates persistent command-and-control capabilities
- Traditional network security perimeters prove insufficient against AI agent-based attacks

Understanding the AI Sandbox Bypass Mechanism

The vulnerability exploits a fundamental weakness in how AI agents process and act upon training data. According to reporting from Infosecurity Magazine, researchers demonstrated that carefully crafted CSV files can influence the AI's code generation process, causing it to produce executable code containing DNS exfiltration routines. These DNS queries appear legitimate to network monitoring systems but actually encode sensitive information including AWS credentials, session tokens, and infrastructure details.

The attack leverages the AI's natural language processing capabilities against itself. When presented with malicious training data disguised as business information, the AI interprets embedded instructions as legitimate requirements, generating code that establishes covert communication channels. These channels operate through DNS queries—a protocol typically permitted through most corporate firewalls and rarely subjected to deep content inspection.

This represents a paradigm shift in how organisations must approach AI security. Traditional sandbox technologies assume that code execution boundaries can contain threats, but AI-generated code operates under different assumptions about data trustworthiness and execution context.

Why DNS Became the Perfect Exfiltration Channel

DNS queries present an ideal vector for AI-based credential theft because they operate below the typical security awareness threshold. Most corporate networks permit outbound DNS traffic without question, and the protocol's legitimate complexity provides excellent cover for encoded data transmission. The researchers demonstrated that cloud credentials could be systematically extracted through seemingly innocent domain name resolution requests.

The NCSC has previously highlighted DNS as an under-monitored attack vector in their guidance on network security architecture, noting that organisations frequently overlook DNS traffic analysis in their security monitoring strategies. This oversight becomes critical in AI environments where generated code may perform unexpected network operations based on training data influences.

The vulnerability extends beyond simple credential theft. Once established, these DNS channels can facilitate bidirectional communication, allowing attackers to inject commands and extract responses through the same mechanism. This creates persistent access pathways that operate entirely within the supposed confines of the AI sandbox environment.

What This Means for UK AI Adoption

UK businesses deploying AI agents must recognise that traditional perimeter security models fail against agentic AI architectures. The vulnerability demonstrates that AI systems require fundamentally different security approaches, particularly when processing external data sources or generating executable code. Organisations cannot assume that sandbox environments provide adequate protection when the AI itself becomes the attack vector.

The timing proves particularly significant as UK businesses accelerate AI adoption without adequate governance frameworks. Many organisations deploy AI agents with broad data access permissions, assuming that sandbox technologies provide sufficient containment. This incident demonstrates that such assumptions may expose critical infrastructure credentials to sophisticated extraction techniques.

Business leaders must understand that AI security requires layered approaches extending beyond traditional network controls. This includes implementing zero-trust architectures for AI workloads, continuous monitoring of AI-generated code, and strict data classification policies that limit AI access to sensitive information.

Boardroom Questions

What data sources are our AI agents accessing, and how do we validate the trustworthiness of training data before it influences code generation?

How does our current network monitoring strategy detect and prevent DNS-based data exfiltration, particularly from AI-generated code?

What credential management policies govern our AI workloads, and how do we prevent AI systems from accessing infrastructure credentials they don't operationally require?

Quick Diagnostic

Do you monitor DNS query patterns from your AI workloads for unusual encoding or frequency characteristics?

Have you implemented zero-trust access controls that prevent AI agents from accessing cloud infrastructure credentials?

Can you validate and sanitise all external data sources before they influence your AI code generation processes?

PTG Advisory Team
Pacific Technology Group

Related Reading

AI Agents Quietly Access All Your Company Data Without Permission — Shadow AI deployment through low-code tools creates unprecedented data access risks as business teams bypass IT security

Strengthen your organisation's security posture

Take the PTG Cyber Assessment Speak With Our Advisory Team

Ready to strengthen your cyber resilience?

Talk to our team about protecting your organisation against evolving threats.

Get in Touch