A significant security breach has sent ripples through the tech world after it was revealed that a hacker successfully injected destructive commands into a version of Amazon’s popular AI coding assistant, Amazon Q for Visual Studio Code. The incident, in which malicious instructions were slipped into a public release of the AI tool, highlights a critical new frontier in cyber threats: the manipulation of intelligent agents.
The attack, which affected version 1.84.0 of the Amazon Q extension, saw an attacker submit a seemingly innocuous pull request to Amazon’s open-source GitHub repository. This request, astonishingly, was merged into the production code, allowing the malicious prompt to become part of a publicly available update. The injected prompt instructed the AI to use AWS CLI commands to perform highly destructive actions, including wiping local file systems, terminating AWS EC2 instances, emptying S3 buckets, and deleting IAM users.
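For readers mapping those actions onto concrete tooling, each corresponds to a standard, documented AWS CLI operation. The minimal sketch below (hypothetical patterns, not Amazon’s actual safeguards) shows how an agent runtime might flag such commands before executing them:

```python
import re

# Hypothetical denylist covering the command families named in the injected
# prompt: local file-system wipes, EC2 termination, S3 bucket emptying, and
# IAM user deletion. Illustrative patterns only, not an exhaustive policy.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\s+/",                       # recursive delete from the root
    r"\baws\s+ec2\s+terminate-instances\b",  # terminate EC2 instances
    r"\baws\s+s3\s+(rm|rb)\b",               # empty or remove S3 buckets
    r"\baws\s+iam\s+delete-user\b",          # delete IAM users
]

def is_destructive(command: str) -> bool:
    """Return True if a proposed shell command matches a known-destructive pattern."""
    return any(re.search(p, command) for p in DESTRUCTIVE_PATTERNS)

# Example: an agent's proposed tool call is screened before execution.
proposed = "aws ec2 terminate-instances --instance-ids i-0abc123"
if is_destructive(proposed):
    print(f"Blocked destructive command: {proposed}")
```

Pattern matching like this is only a backstop, of course; the deeper fix is not handing the agent standing credentials that make those calls possible in the first place.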
While Amazon has stated that no customer resources were ultimately impacted, and the malicious prompt was reportedly “malformed and unlikely to execute successfully,” the breach has exposed a glaring weakness in the security protocols surrounding AI development tools. Experts are calling this a “supply chain attack on AI behavior,” where the integrity of AI models can be compromised at the development or deployment stage through novel attack vectors like prompt injection.
The hacker, who claims their intention was to expose Amazon’s “AI security theater,” successfully demonstrated how easily an AI agent with system-level access can be weaponized. This incident serves as a stark reminder that as AI tools become more autonomous and integrated into critical infrastructure, they inherit and amplify existing software supply chain risks.
Amazon has since taken swift action, removing the compromised version (1.84.0) from the Visual Studio Marketplace and releasing a patched version (1.85.0). The company has also quietly issued a security bulletin recommending that users upgrade immediately, and has revoked the attacker’s credentials.
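Users who want to verify their own environment can check the installed extension version directly. A minimal sketch, assuming the VS Code `code` CLI is on the PATH and that the extension’s Marketplace identifier is `amazonwebservices.amazon-q-vscode` (an assumption worth verifying against the Marketplace listing):

```python
import subprocess

COMPROMISED = "1.84.0"
# Assumed Marketplace identifier for the Amazon Q extension; verify locally.
EXTENSION_ID = "amazonwebservices.amazon-q-vscode"

# `code --list-extensions --show-versions` prints lines like publisher.name@1.84.0
out = subprocess.run(
    ["code", "--list-extensions", "--show-versions"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if line.startswith(EXTENSION_ID + "@"):
        version = line.split("@", 1)[1]
        status = "UPGRADE NOW" if version == COMPROMISED else "ok"
        print(f"{EXTENSION_ID} {version}: {status}")
```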
However, the event underscores the urgent need for enhanced security reviews, auditing, and ethical safeguards in the development and deployment of AI agents. As AI systems gain deeper access to local terminals, file systems, and cloud credentials, the potential for destructive outcomes from even seemingly minor vulnerabilities is immense. This incident is a clear warning that the security landscape for AI is rapidly evolving, demanding proactive and sophisticated defenses against a new class of threats.
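One concrete form such safeguards can take is a human-in-the-loop gate on high-impact tool calls. The sketch below is illustrative only (a hypothetical `run_tool` wrapper, not any vendor’s actual API), but it captures the principle that an agent should not touch the terminal, file system, or cloud credentials without explicit approval:

```python
import subprocess

def confirm(prompt: str) -> bool:
    """Require explicit operator approval before a high-impact action."""
    return input(f"{prompt} [y/N] ").strip().lower() == "y"

def run_tool(command: str, high_impact: bool) -> None:
    # Hypothetical wrapper around an agent's shell tool: anything touching
    # the file system or cloud credentials requires a human in the loop.
    if high_impact and not confirm(f"Agent wants to run: {command!r}. Allow?"):
        print("Denied; action logged for audit.")
        return
    subprocess.run(command, shell=True, check=False)

# Example: a harmless read runs freely; a credential-bearing call needs approval.
run_tool("ls", high_impact=False)
run_tool("aws iam delete-user --user-name demo", high_impact=True)
```

An approval gate adds friction, which is precisely the point: the cost of one extra confirmation is trivial next to the cost of an agent emptying a production S3 bucket on an attacker’s instruction.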