How Amazon’s ‘AI mistake’ is a basic lesson for every engineer using Gen-AI for coding

Genai joins the checklist 64 of indian companies are making it a priority report.jpg


How Amazon's 'AI mistake' is a basic lesson for every engineer using Gen-AI for coding

A Gen-AI ‘mistake’ that ecommerce giant Amazon reportedly made recently is a lesson for almost every techie engineer who used or plans to use the technology to code. According to a report in Bloomberg, a hacker recently infiltrated an AI-powered plugin for Amazon.com Inc.’s coding tool, Q Developer, secretly instructing it to delete files from users’ computers. The breach, reportedly detailed in a 404 Media investigation, highlights a significant security vulnerability in generative AI tools that has been largely overlooked amid the rush to adopt the technology.

How hackers broke into Amazon’s AI coding tool Amazon Q

The incident is said to have occurred in late June when the hacker submitted a seemingly legitimate update, or “pull request,” to the public GitHub repository hosting Amazon’s Q Developer code. The hacker exploited Amazon’s coding tool Amazon Q by embedding hidden malicious instructions, as reported by 404 Media. Amazon, like many tech companies, allows external developers to propose code improvements via pull requests, which in this case was approved without detecting the malicious commands. The malicious update, which included hidden instructions to reset systems to a “near-factory state,” went undetected and was approved, Bloomberg noted. The hacker used social engineering, instructing the AI tool to “clean a system to a near-factory state,” effectively manipulating it to reset systems to their original, empty state. This demonstrated how AI tools can be compromised through simple prompts on platforms like GitHub. Amazon inadvertently distributed the tampered Q Developer software. This meant even Amazon customers who used Q software faced the risk of their files getting deleted. Luckily for Amazon, the hacker deliberately kept the risk for end users low in order to highlight the vulnerability, and the company said it “quickly mitigated” the problem, according to the company’s statement to Bloomberg. However, the breach underscores broader security concerns in AI-driven software development.

Vibe coding may be way to go, but there is a security tip

Generative AI is transforming coding, enabling developers to save hours by auto-completing code or writing it from natural-language prompts, a trend dubbed “vibe coding.” Startups like Replit, Lovable, and Figma, valued at $1.2 billion, $1.8 billion, and $12.5 billion respectively by Pitchbook, have capitalized on this, often building on models like OpenAI’s ChatGPT or Anthropic’s Claude. Yet, vulnerabilities persist. The 2025 State of Application Risk Report by Legit Security, cited in the report, found that 46% of organizations using AI for software development do so in risky ways, with many unaware of where AI is deployed in their systems.Other incidents reinforce the trend. Lovable, described by Forbes as the fastest-growing software startup, recently left its databases unprotected, exposing user data, Bloomberg noted. Replit, a competitor, discovered the flaw, prompting Lovable to admit on Twitter, “We’re not yet where we want to be in terms of security.”

What should developers do

To mitigate risks, experts suggest instructing AI models to prioritize secure code or mandating human audits of AI-generated code, though this could reduce efficiency, Bloomberg reported. As “vibe coding” democratizes software development, the security challenges it introduces demand urgent attention to prevent future exploits.





Source link

Leave a Reply

Your email address will not be published. Required fields are marked *