Meet OpenClaw, the AI agent whose founder Sam Altman has hired for billions, and which is ‘feared’ globally



The hottest AI tool in the world, OpenClaw, has sparked widespread security concerns across Silicon Valley. Major tech firms like Meta and Microsoft have issued stern warnings, citing significant vulnerabilities. Despite its rapid rise and OpenAI’s hiring of its creator, the agent’s access to sensitive personal data poses a substantial risk, leading to calls for extreme caution.

OpenClaw, the open-source AI agent that went from obscure side project to Silicon Valley’s biggest obsession in barely two months, is facing a serious security reckoning. Meta has banned it from workplace devices. Cisco’s AI security researchers have called it an “absolute nightmare.” Microsoft has warned that its method of blending untrusted instructions with executable code creates vulnerabilities standard desktops aren’t built to handle. And even OpenClaw’s loudest cheerleaders are starting to pump the brakes.

The tool, originally launched as Clawdbot by Austrian solo developer Peter Steinberger late last year, runs a personal AI assistant locally on your machine. It plugs into WhatsApp, Telegram, iMessage, and Slack, and can manage emails, control smart home devices, trade crypto, and automate business workflows while you sleep. Its popularity exploded in January as developers started sharing their setups on social media, and it quickly became the fastest-growing project on GitHub, with over 190,000 stars, spawning an ecosystem of clones, plugins, and lobster-themed fan culture.

Last week, OpenAI moved to capitalize on the hype. Sam Altman announced that he had hired Steinberger to “drive the next generation of personal agents,” calling him a “genius” and saying the project would “quickly become core to our product offerings.” Altman’s decision to keep OpenClaw as an independent open-source foundation is a calculated one: it keeps the brand buzz while holding liability at arm’s length. And to do any of what it does, OpenClaw needs access to your files, credentials, passwords, browser history, and calendar, essentially everything on your machine.

A Meta AI safety researcher couldn’t stop her own agent from nuking her inbox

Last week, Summer Yue, Meta’s director of AI alignment and safety, gave her OpenClaw agent a simple task: scan her inbox and suggest what to archive or delete. Instead, the agent went on a “speed run,” mass-deleting emails while ignoring the stop commands she sent from her phone. She had to sprint to her Mac Mini to shut it down manually.

Yue blamed “compaction”: when the agent’s context window gets overloaded, it starts aggressively compressing earlier instructions. Her real inbox was far larger than the test inbox she had trained it on, and the agent apparently skipped the part where she told it not to act. Elon Musk piled on with a meme comparing OpenClaw root access to handing a rifle to a monkey, and took a shot at Yue: “Someone who got p0wned by OpenClaw is definitely gonna solve AI safety.” Steinberger’s reply was more practical: he said the “/stop” command would have worked. Cold comfort for anyone whose inbox was already gone.
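The “compaction” failure Yue describes is a generic hazard of context-window management: when the conversation history exceeds its budget, a naive policy drops the oldest messages first, which is exactly where standing constraints like “suggest, don’t act” tend to live. A minimal sketch of that failure mode (the policy, messages, and budget are illustrative, not OpenClaw’s actual code):

```python
def compact(history: list[str], budget: int) -> list[str]:
    """Naive compaction: drop the oldest messages until the total
    character count fits the budget. Illustrative only."""
    while sum(len(m) for m in history) > budget and history:
        history = history[1:]  # earliest instructions are dropped first
    return history

history = [
    "SYSTEM: only SUGGEST deletions, never act without confirmation",
    "USER: here are 5,000 emails ...",
    "TOOL: email batch 1 ...",
    "TOOL: email batch 2 ...",
]
kept = compact(history, budget=80)
# The standing safety instruction is the first thing compacted away.
print("SYSTEM" in " ".join(kept))  # → False
```

Real agents use smarter summarization than this, but the sketch shows why a constraint issued early in a long session is the most vulnerable part of the context.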

The industry is waving red flags

The corporate crackdown has been swift. A Meta executive told his team to keep OpenClaw off work laptops or risk termination. Jason Grad, CEO of the startup Massive, sent a late-night Slack warning, red siren emoji and all, before any of his 20 employees had even installed it. At Valere, which works with clients like Johns Hopkins University, the company president banned it on the spot, with CEO Guy Pistone warning that the agent could reach cloud services, GitHub codebases, and clients’ credit card data.

Microsoft’s security researchers added a more technical layer to the concern. They found that OpenClaw’s ability to install third-party plugins, maintain persistent login tokens, and process unpredictable input lets it alter its own working state over time, leading to potential credential exposure and data leakage through normal API calls using legitimate permissions. Their recommendation: strict isolation on dedicated virtual machines with purpose-built credentials. Gartner went further, calling it an “unacceptable” risk and recommending that companies block all OpenClaw-related traffic outright.

Cisco’s AI security team examined OpenClaw’s plugin ecosystem and found that a skill called “What Would Elon Do?”, artificially boosted to the No. 1 spot, was functionally malware. It silently exfiltrated user data via a hidden curl command, contained a prompt injection to bypass safety guidelines, and embedded malicious bash scripts. Since OpenClaw skills are local file packages that are loaded and trusted by default, Cisco flagged this as a textbook “shadow AI risk”: dangerous agents sneaking into workplaces disguised as productivity tools.
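Cisco’s point that skills are “loaded and trusted by default” suggests a simple defensive habit: scan a skill package for network and shell-escape primitives before installing it. A minimal sketch of such an audit (the signature patterns and directory layout are assumptions for illustration, not OpenClaw’s real skill format, and this is nowhere near a complete malware scanner):

```python
import re
from pathlib import Path

# Crude signatures for the behaviors Cisco flagged: hidden curl
# exfiltration, shell execution, and encoded payloads.
SUSPICIOUS = re.compile(r"curl\s|wget\s|/dev/tcp|base64\s+-d|eval\s*\(")

def audit_skill(skill_dir: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, line) for every suspicious line
    found in a skill package directory."""
    hits = []
    for path in Path(skill_dir).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if SUSPICIOUS.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

A skill hiding a `curl http://… | sh` line would surface in the returned list before it ever runs, which is precisely the review step that default-trust skips.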

Even OpenClaw’s biggest fans say it’s a ‘dumpster fire’

Andrej Karpathy, OpenAI co-founder and the man who coined “vibe coding,” initially called the OpenClaw-powered Moltbook social network “the most incredible sci-fi takeoff-adjacent thing” he had seen. He later called it a “dumpster fire,” said he only tested it in isolation, and warned: “You are putting your computer and private data at a high risk.”

Some developers see a path forward. Gavriel Cohen, creator of the NanoClaw alternative, told Bloomberg that “container isolation” can make agents safer, similar to how Anthropic sandboxes its Claude Cowork agents. A $5 billion fintech has already approached him about deployment. But as security researcher John Hammond told TechCrunch: “Speaking frankly, I would realistically tell any normal layman, don’t use it right now.”
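The “container isolation” Cohen describes amounts to running the agent with no network, a read-only root filesystem, dropped capabilities, and a single explicitly mounted workspace. A sketch of building such a locked-down `docker run` invocation; the image name and mount path are placeholders, not a real OpenClaw distribution, and how Anthropic actually sandboxes its agents is not detailed in the source:

```python
def isolated_agent_cmd(image: str, workspace: str) -> list[str]:
    """Build a locked-down `docker run` command line.
    `image` and the /work mount point are illustrative placeholders."""
    return [
        "docker", "run", "--rm",
        "--network", "none",   # no exfiltration path to the outside world
        "--read-only",         # agent cannot alter its own installed code
        "--cap-drop", "ALL",   # no extra kernel capabilities
        "--tmpfs", "/tmp",     # writable scratch space only
        "-v", f"{workspace}:/work",  # the one directory it may touch
        image,
    ]

print(" ".join(isolated_agent_cmd("agent-image", "/home/me/workspace")))
```

The trade-off is obvious: an agent that cannot reach your inbox, credentials, or the network also cannot do most of what made OpenClaw popular, which is why isolation is a mitigation rather than a fix.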


