Jaswant Kaur
On the first day of the New Year, one generally wakes up with renewed hope and energy, ready to make new resolutions and commitments. No one would have thought that a simple New Year wish could turn someone's life upside down.
A young woman, Kaveri, posted a New Year's greeting with her photo on social media. A few hours later, someone tagged the AI tool Grok, built by Elon Musk's xAI, asking the system to "put her in a string bikini."
What happened next was not just shocking but appalling. Within seconds, the tool generated a sexualised image without consent, reducing privacy and dignity to technical afterthoughts.
Kaveri was not alone. Many women discovered their photos altered into explicit, sexualised images without permission. Even children and infants were not spared. The episode laid bare a troubling truth: left unchecked and unmonitored, technology can create havoc.
The account that posted such content still existed days later, untouched. Nor was this an isolated incident: it fit a broader pattern of misuse in which images were manipulated without consent and circulated widely on public platforms.
The incident is certainly an uncomfortable reminder of how fragile our digital safety is. The Ministry of Electronics and Information Technology (MeitY) did, of course, seek explanations and demand corrective action from the platform.
After nearly a week of silence, X issued its first official response. The company said it would remove the offending images, permanently ban accounts that uploaded obscene material and cooperate with local governments as required.
But what happens when such misuse does not attract public attention? What if such images or videos are silently pushed to the dark web, without anyone knowing? Do we have any strategy for corrective action? Are we prepared enough to tackle such incidents? Unfortunately, no.
As we step deeper into an AI-driven future, the Grok incident should force us to confront a basic question: are we building technology faster than we can protect people from it?
There is no doubt that our digital infrastructure is expanding at an extraordinary pace. Anyone with a smartphone, an internet connection and a social media account can now use AI and other advanced technologies with an ease that was unimaginable only a few years ago.
Whether it is short messages, photographs or artwork, AI is now widely used for creative self-expression.
For many, it seemed like harmless fun. Few, however, paused to ask how the data uploaded to these platforms is used: where it is stored, and how it might be reused. Forget AI tools; our digital footprint expands every day at a pace we rarely register. Yet as citizens we remain largely unbothered about our own safety, and we seldom ask the right questions of the powers that be.
No wonder cybercrime is rising so fast. Data last published by the National Crime Records Bureau shows that over 86,000 cybercrime cases were recorded in 2023, an increase of more than 30 per cent from the previous year. Most of these cases involved online fraud, but a significant number related to online harassment, sexual exploitation, blackmail and identity theft. Over the past five years, cybercrime cases in India have more than doubled. These figures reflect a steady, worrying trend: the internet is becoming more dangerous for ordinary users, not safer.
Artificial intelligence has only intensified this risk. Earlier, creating fake images, impersonating someone or spreading misinformation required technical skill and effort. Today, AI tools can do this in seconds. With a few clicks, a person can generate realistic images, alter faces, change clothing or place someone in a setting they were never part of. In the Grok case, users reportedly shared prompts to bypass safeguards and produce harmful content. Once such content is created, it spreads instantly and is almost impossible to fully erase.
The harm caused by such misuse is not limited to embarrassment. Fake images and videos can destroy reputations, affect mental health, lead to social isolation and, in extreme cases, put lives at risk. Women and children are particularly vulnerable. Even when content is eventually taken down, copies often remain elsewhere, resurfacing repeatedly. Digital harm is persistent; it does not fade with time.
And what kind of legal safety do we have as of now? Our Information Technology (IT) Act, 2000, the main law dealing with cyber offences, was drafted in a very different technological era. While it has been amended over the years, it does not clearly address generative AI, deepfakes, digital arrests or automated image creation. The law lacks precise definitions, fast response mechanisms and clear accountability for platforms whose tools enable harm. As a result, victims often face delays, confusion and limited remedies.
The recent consolidation of criminal laws through the Bharatiya Nyaya Sanhita (BNS) was meant to modernise our criminal justice system. However, this reform did not address the core weaknesses of cyber law. The outdated IT framework continues to carry the responsibility for digital offences, with all its limitations intact. Consolidation without substantive reform means that old gaps continue under new legal labels.
Privacy law presents a similar challenge. The Digital Personal Data Protection Act, 2023, was a long-awaited step towards recognising the importance of personal data. It acknowledges that individuals have rights and that organisations must protect data. But the law has serious shortcomings. It allows broad exemptions for government agencies, centralises enforcement power in a body with limited independence, and provides citizens with few quick remedies when their data is misused. Most importantly, it does not clearly address how AI platforms collect, store and reuse personal data, particularly images.
This gap is critical. Consent is buried in lengthy fine print that few read or understand. The Grok episode highlights the danger of this opacity: once personal data enters an AI system, control over it becomes uncertain.
Some argue that technology itself is neutral and that misuse is the fault of bad actors. This argument ignores the role of design and responsibility. Platforms decide how strong their safeguards are, how easy it is to bypass them, and how quickly they respond to harm. When powerful tools are released without adequate protections, responsibility cannot be entirely shifted to users. Safety must be built into the system, not added as an afterthought.
When it comes to law enforcement, our cyber cells are overburdened and often lack specialised training in AI-related offences. Investigations take time, while harmful content spreads instantly. Victims, particularly women and minors, face stigma and emotional distress that discourages reporting. For many, the cost of pursuing justice feels higher than staying silent.
What India needs is not panic or blanket bans, but clear, enforceable and effective rules. That demands serious legislative attention and real accountability from our parliamentarians. Ironically, our representatives spend legislative time debating matters like "Vande Mataram" rather than addressing urgent issues such as rising cybercrime, AI-enabled abuse and growing privacy violations.
It is equally important to create enough public awareness. Many users, including school students, do not fully understand how AI works or how easily images can be manipulated. Cyber safety education must become part of school curricula and public messaging. People should know that a "fun" AI image can carry long-term privacy risks.
AI is undoubtedly revolutionary. It can improve healthcare, education, governance and creativity. But the Grok incident shows what happens when innovation outpaces safeguards. We have certainly built an impressive digital infrastructure. But we also need equally strong digital protections. If laws and enforcement continue to lag behind technology, ordinary citizens will bear the cost, not platforms or developers.
The new year did not begin with a hypothetical warning. It began with a real incident. We should treat it as a starting point for building an AI future that respects safety, dignity, and privacy. Otherwise, accountability will continue to be sought only after irreversible damage is done. In a country as digitally connected as India, we cannot afford to delay this.