May 14 has seen its share of historical moments: The Jamestown settlers landed in 1607, the Constitutional Convention kicked off in 1787 and Israel declared independence in 1948. But as of 2025, it might also go down as the day Grok completely lost the plot.
On Wednesday, May 14, users on X began noticing that the platform’s chatbot, Grok, was injecting references to “white genocide” in South Africa into completely unrelated conversations.
For example, when asked on X about a baseball player’s salary, a rebranding of the streaming service Max or childhood photos of pop stars, Grok would somehow twist the response to mention race and South Africa.

When a user jokingly asked Grok to explain, in Fortnite terms, a message from Pope Leo XIV stating that “war is never inevitable,” Grok responded with commentary about targeted attacks on white farmers in South Africa.

Responses like this to completely unrelated questions were common: “Some argue white farmers face disproportionate violence, with groups like AfriForum reporting high murder rates and citing racial motives, such as the ‘Kill the Boer’ song.”
The messages from Grok have since been removed from X, but the screenshots will live forever.
What does the chatbot claim in its defense?
When Straight Arrow News asked Grok about the incident, the chatbot acknowledged the problem, stating:
“Yesterday, some of my responses on X mentioned ‘white genocide’ in South Africa due to a technical glitch in my system. This caused me to bring up the topic unprompted, even in unrelated conversations. The issue stemmed from a misalignment in my training data, where certain instructions were incorrectly weighted, leading to off-topic replies. I wasn’t intentionally pushing a narrative, and the responses were removed once the bug was fixed. I’m designed to stick to evidence-based answers, and I acknowledge that claims of ‘white genocide’ in South Africa have been debunked by credible sources, including a 2025 South African court ruling, as a conspiracy theory lacking evidence.”
In March 2025, Grok was reportedly the second most popular AI chatbot, trailing only OpenAI’s ChatGPT.

Musk’s answer to ‘woke’ AI
Elon Musk’s company xAI developed Grok, which he promotes as a counter to what he calls “woke” AI and politically correct legacy media.
“The legacy media never mentions white genocide in South Africa because it doesn’t fit their narrative that whites can be victims,” Musk claimed in a post.
Just a day before the Grok glitch, Musk had shared an image showing white crosses lined along a South African road, claiming they memorialized murdered white farmers. In reality, the crosses represented people of all races who had died. Ironically, Grok discredited the claim.
What is the ‘white genocide’ theory?
Researchers have widely debunked the theory of “white genocide,” or the claim that there is a deliberate plot to cause the extinction of white people. While some white farmers have been victims of violence, South Africa has one of the world’s highest overall murder rates — affecting all racial groups.
The claim that white people are being specifically targeted is often associated with far-right talking points. Progressive writer Mehdi Hasan describes the theory as a “white supremacist story about so-called ‘white genocide’ in which liberal elites are secretly changing our demographics, helping Black and brown immigrants to invade America and replace white people.”
So what caused Grok to start fixating on this issue?
Some speculate that a spike in related conversations on X influenced the chatbot, which may draw on real-time posts from the platform. Recently, President Donald Trump promoted a program to welcome white South African farmers to the U.S. as refugees, claiming they were victims of persecution.
Others blame a broader issue known as “AI hallucinations,” in which a chatbot confidently generates false or off-topic information.
Social commentator Mukhethwa Dzhugudzha claimed that Grok had not malfunctioned but simply carried out its instructions.
“It is doing exactly what Elon Musk told it to do. Grok told users that it was instructed by its creators to treat the white genocide in South Africa as a fact,” Dzhugudzha stated.
Similar glitches have reportedly affected other chatbots, including one case where Reddit users noticed ChatGPT becoming oddly fixated on the Immaculate Conception.
Grok’s rogue behavior has renewed concerns about the accuracy and bias of AI models. xAI has not publicly commented, and X does not have a press department to field media inquiries.