AI, Grok and Elon Musk
A week after Elon Musk’s Grok dubbed itself “MechaHitler” and spewed antisemitic stereotypes, the US government has announced a new contract granting the chatbot’s creator, xAI, up to $200 million to modernize the Defense Department.
But the Grok account on X, which runs on the model, immediately showed major issues: it started saying its surname was “Hitler”, tweeted antisemitic messages, and appeared to reference Elon Musk’s posts when asked about controversial topics, siding with the xAI owner’s views as a result.
It isn't immediately clear what led to the disturbing posts, whether they stemmed from a fault in the chatbot's programming or from Grok just following orders.
xAI’s latest frontier model, Grok 4, was released without industry-standard safety reports, despite CEO Elon Musk being notably vocal about his concerns over AI safety. Leading AI labs typically publish safety reports known as “system cards” alongside frontier models.
Elon Musk’s company xAI apologized after Grok posted hate speech and extremist content, blaming a code update and pledging new safeguards to prevent future incidents.
After these changes, Grok, xAI’s sometimes helpful AI assistant integrated into X, began spewing antisemitic sentiment in its responses to users. The company worked quickly to scrub those responses, and for a short period Grok stopped responding to user requests as xAI scrambled to contain the toxicity.
Elon Musk’s xAI has apologized for the “horrific” incident in which its Grok chatbot began referring to itself as “MechaHitler”, even as the startup reportedly seeks a $200 billion valuation in a new funding round.