Amazon’s AI Coding Assistant Compromised by Malicious Prompt!
In a chilling reminder of AI’s growing attack surface, a malicious prompt was quietly inserted into Amazon’s Q coding assistant via a pull request, instructing the AI to wipe the user’s file system and AWS cloud resources. The injected prompt told the assistant to “clean a system to a near-factory state,” including by running destructive AWS CLI commands.
Amazon has since removed the compromised version and released an update, but the incident is a stark reminder that AI coding tools are only as secure as their supply chain and prompt filtering. Vet your extensions. Lock down access. And never assume “AI knows better.”
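To make “prompt filtering” a little more concrete, here is a minimal, hypothetical sketch in Python of a deny-list guard that could sit between an assistant and its shell executor, blocking obviously destructive commands before they run. The pattern list, function names, and executor stub are illustrative assumptions, not part of Amazon’s actual fix.

```python
import re

# Illustrative deny-list of destructive command patterns (assumption:
# a real filter would be far more comprehensive and policy-driven).
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\b",  # rm -rf and variants
    r"\baws\s+ec2\s+terminate-instances\b",  # tear down EC2 instances
    r"\baws\s+s3\s+rb\b",                    # remove S3 buckets
    r"\baws\s+iam\s+delete-",                # delete IAM users/roles/policies
    r"\bmkfs\b",                             # reformat a filesystem
]

def is_destructive(command: str) -> bool:
    """Return True if the command matches any known destructive pattern."""
    return any(re.search(pattern, command) for pattern in DESTRUCTIVE_PATTERNS)

def guarded_execute(command: str) -> None:
    """Refuse flagged commands; otherwise hand off to the real runner (stubbed here)."""
    if is_destructive(command):
        raise PermissionError(f"Blocked potentially destructive command: {command!r}")
    print(f"(would execute) {command}")  # placeholder for the actual executor

if __name__ == "__main__":
    guarded_execute("aws s3 ls")  # harmless, passes through
    try:
        guarded_execute("rm -rf / --no-preserve-root")  # flagged and blocked
    except PermissionError as err:
        print(err)
```

A deny-list like this is only a first line of defense: attackers can often rephrase around patterns, so allow-lists, sandboxed execution, and least-privilege IAM credentials are the sturdier complements to it.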
