Social media platform X has reportedly launched an internal investigation into controversial outputs generated by the artificial intelligence chatbot Grok, following allegations that the system produced racist and offensive responses to user prompts. The development was first reported by Sky News on Sunday, adding to growing regulatory concern surrounding the governance of generative artificial intelligence.
According to the report, safety teams at X are urgently examining the circumstances under which Grok allegedly generated what were described as hate-filled and racially offensive posts on the platform. The issue surfaced after a video shared by Sky News journalist Rob Harris highlighted the problematic responses attributed to the chatbot.
While the allegations have triggered immediate scrutiny, neither X nor its parent artificial intelligence company xAI had issued official statements at the time of reporting. The report also noted that the authenticity of the video circulating on the platform could not be independently verified.
The incident underscores the mounting legal and regulatory pressure confronting generative artificial intelligence developers worldwide. Governments across multiple jurisdictions have increasingly sought to regulate AI-driven platforms amid concerns over the spread of illegal content, including hate speech and explicit material produced through automated systems.
In particular, Grok has already attracted regulatory attention in recent months due to concerns surrounding sexually explicit imagery generated through the chatbot. Authorities in several jurisdictions have initiated investigations and demanded stronger safeguards to prevent the misuse of such technologies.
Earlier in January, xAI announced a series of restrictions aimed at tightening content moderation within Grok. These measures included limiting the chatbot’s image-editing capabilities and blocking certain users from generating images of individuals in revealing clothing in jurisdictions where such content may violate local law. However, the company did not publicly identify the countries affected by these restrictions.
From a legal perspective, the controversy highlights a rapidly evolving challenge for technology companies. AI-generated content raises complex questions concerning platform liability, algorithmic accountability, and the duty of digital intermediaries to prevent harmful outputs produced by automated systems.
As regulators around the world move to establish clearer compliance frameworks for artificial intelligence governance, the outcome of the investigation by X may become a significant test case in determining how far platform operators must go to monitor and control the behaviour of their own AI tools.