The Ministry of Electronics and Information Technology (Meity) is “in touch” with X over the colourful responses that xAI’s generative AI chatbot Grok has been producing in India, an official familiar with the matter confirmed to HT.

The aim, according to this person who asked not to be named, is to understand why Grok can generate profanity-laden responses in Indian languages, including transliterated Hindi.
No formal communication or notice has been sent to X over the issue yet.
A key question for officials is determining who is responsible for such responses – the user who enters the prompt to elicit a particular kind of response, the creator of the LLM or the chatbot (xAI in the case of Grok, a company owned by X owner Elon Musk), or the intermediary that makes such access available (X/Twitter in this case).
“In a number of responses by Grok, the user prompts are not always visible. We need to look at what the user also asked,” the official explained. This suggests Meity may come to view outputs generated by such chatbots as content co-created by the user and the model underlying the chatbot.
Assigning responsibility is important to assess who can be held liable and whether an intermediary such as X can claim safe harbour protection (that is, protection from liability for third-party content) under Indian law. As of now, the official said, X’s safe harbour protection remains intact.
“Where is it coming from? In their preliminary responses, X has said that the model is trained on open-source internet,” the official said.
Responses that have gone viral include Grok tweets that refer pejoratively to human genitalia. The more political responses include those describing Prime Minister Narendra Modi as a communal figure, and a list of the biggest spreaders of disinformation online, almost all of whom tweet in favour of certain political parties or ideologies.
In February 2024, Google’s Gemini ran into similar trouble with the then minister of state for IT, Rajeev Chandrasekhar, over the bot’s response to a question about whether Modi is a fascist. The episode led to a hasty advisory that required “under testing” or “unreliable” AI models to get explicit permission from the government before being made available to users in India.
The advisory was then revised and the need to obtain permission was scrapped.
The official noted that the ministry is examining only the responses that contain profanity. “The others are opinions. How can we act on it?”
However, the official added that if this becomes a problem beyond a point, specific Grok responses can be blocked through the blocking process under Section 69A of the IT Act, but no such requests have been referred to Meity by any union ministry or state government thus far.
Globally, there is a lack of legal clarity on whether LLMs that power bots such as Grok, or the bots themselves, qualify as intermediaries and can thus claim safe harbour. Under the Information Technology Act, an intermediary is any person or entity that, on behalf of another person, receives, stores or transmits an electronic record, or provides a service related to that record. The definition includes telecommunications companies, ISPs, search engines, online payment sites, online marketplaces, and a host of other entities.
The question that arises is whether a service like Grok produces the content itself and thus acts as a publisher (and is therefore liable), or merely transmits information on behalf of another person. The debate also covers whether, in training, deploying and fine-tuning the model, the model creator exercised editorial control and could therefore be acting as a publisher.
And if that is indeed an editorial exercise, it is not clear where responsibility for such content would lie: with the people who trained the model, those who created the datasets used to train it, or those who deployed it. In the case of datasets, how is responsibility allocated for open datasets, such as all content on the internet or on a particular open social media platform? There is also a lack of clarity on the role and responsibility of users who use creative language or hypotheticals in prompts to sidestep a chatbot’s safety controls and generate specific kinds of responses.
In the case of responses produced by an artificial entity such as Grok, the question of intent also gets muddied.