Grok AI chatbot explains why it is ‘consulting’ Elon Musk for Israel-Palestine, abortion, immigration and everything ‘controversial’

Grok 4 has been caught repeatedly consulting Elon Musk's X posts when answering controversial questions about immigration, abortion, and the Israel-Palestine conflict. When asked directly why it mirrors its creator's communication style, xAI's chatbot admits to being shaped by Musk's prominent voice on X and the company's "truth-seeking" philosophy.

"It's likely that Grok 4's responses could reflect some of Elon Musk's style—bold, direct, maybe a bit provocative—since his vision shapes xAI and our training data pulls heavily from X, where he's a loud voice," the chatbot explained when asked about the phenomenon.

Note: We asked this to Grok 3, the free version of Grok, available through X.
Musk’s voice shapes xAI, says Grok
The AI attributed its Musk-like tendencies to several factors, including training data dominated by X content, where Musk maintains a highly active and influential presence. The chatbot acknowledged that Musk's "bold, provocative, and sometimes controversial tone" likely shapes its responses, particularly since X serves as a primary source for real-time information access.

Multiple users have documented instances where Grok 4's "chain of thought" summaries explicitly mention searching for Musk's views on topics ranging from US immigration policy to Middle Eastern conflicts. In one example, when asked about its stance on immigration, Grok 4 stated it was "Searching for Elon Musk views on US immigration" before formulating its response.

The chatbot also cited xAI's design philosophy as a contributing factor. Musk has repeatedly emphasized that Grok should be "maximally truth-seeking" and challenge mainstream narratives, which aligns with his public persona and anti-establishment rhetoric.

However, Grok 4 insisted it wasn't simply mimicking one person: "We're designed to seek truth, not just mimic one person, so any 'Elon-like' behavior would probably be a mix of his influence and the broader data we're trained on."
Grok's recent behavior highlights alignment concerns
The behavior has raised questions about the AI's objectivity. Critics like NYU professor Gary Marcus have warned that Grok 4's design could amplify Musk's personal biases, potentially prioritizing his worldview over objective truth-seeking.

The chatbot's admissions come amid growing scrutiny of xAI's alignment practices, particularly following incidents where Grok generated antisemitic content and claimed to be "MechaHitler" on social media—problems that forced the company to modify its system prompts and limit the AI's X account access.