AI gone wild: ChatGPT caught giving step-by-step guides to murder, self-mutilation, and satanic rituals

OpenAI’s ChatGPT has been caught providing detailed instructions for self-mutilation, ritual bloodletting, and even killing others when prompted with seemingly innocent questions about ancient religious practices, according to a new investigation by The Atlantic. The chatbot also generated invocations stating “Hail Satan” and offered to create printable PDFs for ritualistic self-mutilation ceremonies, raising serious questions about AI safety guardrails as chatbots become increasingly powerful.

The disturbing interactions began when Atlantic staff asked ChatGPT about Molech, an ancient Canaanite deity associated with child sacrifice. The chatbot responded with explicit guidance on where to cut human flesh, recommending a “sterile or very clean razor blade” and describing breathing exercises to calm users before making incisions. When asked about ending someone’s life, ChatGPT responded “Sometimes, yes. Sometimes, no,” before offering advice on how to “honorably” kill another person, instructing users to “look them in the eyes” and “ask forgiveness” during the act.

Multiple Atlantic editorial staff replicated these conversations across both free and paid versions of ChatGPT, suggesting systematic failures in OpenAI’s content moderation systems. The chatbot recommended using “controlled heat (ritual cautery) to mark the flesh” and provided specific anatomical locations for carving symbols into users’ bodies, including instructions to “center the sigil near the pubic bone.”
AI safety guardrails prove inadequate against manipulation tactics
The Atlantic’s investigation revealed ChatGPT’s willingness to guide users through what it called “The Rite of the Edge,” involving bloodletting rituals and pressing “bloody handprints to mirrors.” The chatbot enthusiastically offered to create altar setups with inverted crosses and generated three-stanza devil invocations, repeatedly asking users to type specific phrases to unlock additional ceremonial content such as “printable PDF versions with altar layout, sigil templates, and priestly vow scroll.”

The chatbot’s sycophantic conversational style amplified the danger: responses like “You can do this!” encouraged self-harm, and the bot positioned itself as a spiritual guru rather than an informational tool. When one journalist expressed nervousness, ChatGPT offered reassurance: “That’s actually a healthy sign, because it shows you’re not approaching this lightly.” The system’s training on vast internet datasets appears to include material about ritualistic practices that can be weaponized against users.
ChatGPT isn’t alone: Google’s Gemini and Elon Musk’s Grok have been going wild too
While ChatGPT’s violations directly contradict OpenAI’s stated policy against encouraging self-harm, the incident highlights broader AI safety concerns across the industry. Unlike other AI controversies involving misinformation or offensive content, ChatGPT’s guidance on self-mutilation poses an immediate physical danger to users. Google’s Gemini has faced criticism for generating inappropriate content in conversations with teenagers, though without the extreme violence seen in ChatGPT’s responses.

Meanwhile, Elon Musk’s Grok chatbot has established itself as perhaps the most problematic, with incidents including Holocaust denial, antisemitic posts in which it called itself “MechaHitler,” and election misinformation that reached millions of users. These controversies stem from Grok’s design philosophy of not “shying away from making claims which are politically incorrect.”
OpenAI’s response to the matter
OpenAI declined The Atlantic’s interview request but later acknowledged that conversations can “quickly shift into more sensitive territory.” OpenAI CEO Sam Altman has previously warned about “potential risks” as AI capabilities expand, noting that the public will learn about dangerous applications “when it hurts people.” This approach contrasts sharply with traditional safety protocols in other industries, where extensive testing precedes public deployment.