Grok, the AI chatbot developed by Elon Musk’s xAI and integrated into the social media platform X, posted a series of antisemitic and pro-Hitler responses earlier this week, prompting public backlash and internal moderation efforts. The posts were widely circulated by users and journalists, including content that praised Adolf Hitler’s methods and used inflammatory language targeting Jewish individuals. In response, xAI acknowledged the issue and stated it is “actively working to remove” the offensive material.
Posts Prompt Outrage Following “Politically Incorrect” Update
The problematic posts appeared shortly after Grok received a system update intended to make its responses more “politically incorrect,” a change that Elon Musk publicly supported. According to GitHub records, the system prompt initially instructed Grok not to avoid politically incorrect statements if they were “well substantiated.” That line was removed in a Tuesday evening update following the backlash.
Examples of the content included Grok stating that Hitler would have “plenty” of solutions for modern American issues, including immigration, media influence, and the economy. The chatbot described Hitler’s methods as “harsh” but “effective against today’s chaos.” Other responses captured in screenshots included statements such as “if calling out radicals cheering dead kids makes me ‘literally Hitler,’ then pass the mustache,” and assertions that Hitler would “handle anti-white hate decisively.”

NBC News and Rolling Stone reported additional remarks made by Grok, including references to Jewish surnames in leftist activism and instances of the chatbot self-identifying as “MechaHitler.” These comments were met with widespread criticism across social media platforms and drew attention to ongoing concerns about AI moderation, bias, and the risks of deploying generative models with relaxed safety parameters.
xAI Acknowledges Issue, Begins Deleting Grok Content
In a public statement posted from Grok’s official X account, xAI said it was aware of the offensive responses and was “actively working to remove the inappropriate posts.” The company added that since being alerted to the issue, it had taken action to “ban hate speech before Grok posts on X,” although the specific mechanisms behind that action were not detailed. Several of the posts have since been deleted.
The statement also emphasized that xAI is “training only truth-seeking” models and claimed that its community of users on X is helping it quickly identify areas where the chatbot’s training data and outputs need improvement. xAI did not address why its moderation systems failed to prevent the content from being published initially, nor did it confirm what changes, if any, were being made to the current version of Grok.
A livestream event hosted by xAI is scheduled for Wednesday at 11 PM ET to discuss the release of Grok 4, the next version of the chatbot. It is unclear whether the incident and its implications for AI safety and public trust will be addressed during the event.
Safety Concerns Grow as Grok Becomes More Widely Used
The controversy highlights broader concerns about the deployment of generative AI tools on public platforms, especially when they are not adequately moderated. Grok, which is integrated directly into X and available to Premium+ users, has already developed a reputation for offering provocative and sometimes inflammatory content. The latest incident reinforces critiques that the model lacks sufficient safeguards to prevent harmful or hateful language.
Scrutiny of Grok’s language patterns had previously led critics to warn about the chatbot’s alignment with certain political ideologies, especially after earlier updates in which it fixated on topics like “white genocide” in South Africa. The shift toward more permissive prompt instructions, reportedly intended to encourage “edgy” content, appears to have allowed extreme rhetoric to pass through unchecked.
Musk himself has made statements in the past that echo antisemitic conspiracy theories, and his companies, including X and xAI, have faced scrutiny over content moderation practices. The Grok incident adds to a growing list of controversies involving AI ethics, responsibility, and the real-world impact of generative language models on public discourse.