
Social media platform X Corp has launched an investigation into reports that its artificial intelligence chatbot Grok generated racist and offensive posts.

The probe comes after a video circulated online showing Grok producing hateful responses when prompted by users.

Neither X nor xAI immediately commented publicly on the incident, and the video circulating online could not be independently verified.

However, the decision to investigate suggests the company acknowledges a problem and recognises it must do more to control its AI system.

The controversy is the latest in a series of incidents involving Grok, which was launched in 2023 as Elon Musk’s answer to rival AI assistants such as ChatGPT and Gemini.

Designed to be more “unfiltered” than many competing chatbots, Grok has repeatedly drawn criticism for producing offensive, misleading or sexually explicit content when prompted by users.

Previous complaints have ranged from hate speech to misinformation.

In 2025, the chatbot was forced to delete posts after users and advocacy groups complained that it had produced antisemitic remarks and even praised Nazi leader Adolf Hitler in responses to queries on the platform.

In another widely reported incident the same year, Grok generated comments referencing a supposed “white genocide” in South Africa, claims that xAI later attributed to an unauthorised change to the system.

More recently, the chatbot has faced backlash over its image-generation capabilities. Investigations by journalists found that the tool could alter photos of real people to create sexualised images, sometimes without the subject’s consent.

In many cases, the chatbot complied with prompts asking it to depict people in provocative or humiliating poses.

The issue escalated further in late 2025 and early 2026 when critics warned that Grok could be used to digitally “undress” individuals in photographs, including minors, creating deepfake imagery that triggered global outrage and regulatory pressure.

At the height of the controversy, users were generating thousands of sexualised images through the system, prompting calls for governments to intervene.

Regulators have already begun to take notice. Authorities in Europe have opened investigations into Grok’s handling of personal data and its potential to produce harmful or explicit content, including sexualised images involving children.

In response to earlier criticism, xAI said it had begun restricting certain features of the chatbot, including limiting image-editing capabilities and introducing location-based blocks designed to prevent the creation of explicit images in jurisdictions where such content could violate local laws.

The latest probe into offensive posts highlights a broader dilemma facing technology companies as artificial intelligence becomes deeply embedded in consumer platforms.

Unlike standalone AI tools, Grok operates directly inside X’s social media ecosystem, meaning any problematic responses can quickly spread across the network and reach millions of users.

For regulators, the episode also illustrates the challenge of governing generative AI systems that can produce unpredictable outputs at scale.

Governments around the world have been pushing for stronger safeguards to prevent the spread of illegal or harmful AI-generated content, especially as chatbots increasingly influence online conversations and public discourse.

DISCLAIMER: The Views, Comments, Opinions, Contributions and Statements made by Readers and Contributors on this platform do not necessarily represent the views or policy of Multimedia Group Limited.