The Lawless Frontier of AI Fakery
- Arushi Kulshrestha and Sowmya China
- Aug 5
- 4 min read
Deepfakes expose urgent gaps in India’s legal architecture and demand swift legislative reform.

Imagine watching a video where a famous actor confesses to a crime, a politician delivers a hate speech, or you appear to be part of an unlawful assembly. You are shocked until you realize it never happened. The person’s face, voice, and gestures look real, but the video is entirely fake. Welcome to the unsettling world of deepfakes.
Deepfakes are synthetic media created using artificial intelligence (AI) to manipulate a person’s likeness in videos, audio, or images. Even viewers familiar with the concept can, in a vulnerable moment, be misled into believing such content is real. What began as a technological novelty has evolved into a serious threat to privacy, national security, and even democracy.
Far from being benign curiosities or internet gimmicks, deepfakes have rapidly emerged as potent tools of misinformation and abuse. In a media ecosystem driven by speed and virality, content circulated on social platforms is frequently accepted at face value, with little or no scrutiny. This creates fertile ground for the misuse of synthetic media, with doctored videos deployed to smear public figures, fabricate news, and produce non-consensual pornography. The consequences are far-reaching, particularly in a country like India, where social trust and reputations can be swiftly eroded by a single viral clip.
The dangers become even more acute when the target is not a celebrity or politician, but an ordinary citizen. Unlike public figures, who may have the resources to push back through legal channels, public relations campaigns or institutional support, private individuals are left especially vulnerable. They often lack both the awareness of how to respond and the legal or financial means to seek redress. In such cases, the psychological toll can be profound, with victims facing social ostracism, career setbacks and, in extreme instances, threats to personal safety.
If someone creates a deepfake using your face or voice, especially in a defamatory, obscene, or misleading context, it can be deeply distressing. Indian law does, however, offer some avenues for recourse, even though there is currently no legislation dedicated to deepfakes. Several existing laws can be applied depending on the nature of the content. The Information Technology Act, 2000 provides protection under Section 66E, which punishes the violation of privacy, and Sections 67 and 67A, which penalize the publication or transmission of obscene and sexually explicit material, respectively. The Bharatiya Nyaya Sanhita, 2023 contains provisions that address defamation, cheating, and impersonation. Under the Copyright Act, 1957, unauthorized use of a person’s copyrighted image or voice amounts to infringement. Additionally, the Digital Personal Data Protection Act, 2023 gives individuals greater control over their personal data, although it does not yet directly regulate AI-generated content. While these laws were not crafted with deepfakes in mind, they can still be invoked in many cases to seek justice and protection.
The laws that govern digital harm were not crafted with the velocity, reach and technical sophistication of today’s artificial intelligence tools in mind.
Deepfakes often occupy a legal grey zone, and investigations into their creation and spread are complicated by the pace at which the underlying technology evolves.
For those targeted by such fabrications, speed is critical. Victims must act swiftly to safeguard their privacy, reputation, and legal standing. The first priority is to collect and preserve evidence. This includes capturing screenshots, downloading the offending video, and archiving any relevant metadata such as upload timestamps and source URLs. Statements from individuals who can attest that the events depicted are fictitious may prove useful.
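For readers inclined to go a step further, the evidence-preservation step described above can be made tamper-evident with a few lines of scripting. The sketch below is a hypothetical illustration, not an official or court-mandated tool: it records a SHA-256 fingerprint of a saved copy of the offending clip together with its source URL and capture time, so that a victim or their counsel can later show the file has not been altered since it was collected.

```python
import hashlib
import json
import datetime

def record_evidence(file_path: str, source_url: str) -> dict:
    """Log a tamper-evident record of a downloaded file.

    Computes a SHA-256 hash of the saved copy and appends an entry
    (file name, source URL, hash, UTC capture time) to a simple
    evidence log, one JSON object per line.
    """
    with open(file_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    entry = {
        "file": file_path,
        "source_url": source_url,
        "sha256": digest,
        "captured_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

    # Append to a running evidence log in the current directory.
    with open("evidence_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

    return entry
```

Recomputing the hash at any later date and matching it against the logged value demonstrates that the preserved copy is byte-for-byte identical to what was originally captured.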
The next step is to alert the platforms on which the content is hosted. Major social media firms increasingly acknowledge the risks posed by deepfakes and provide tools to report them. Victims should use these mechanisms to flag the material, submit supporting evidence, and request its urgent removal.
Though India has yet to enact a dedicated law targeting deepfakes, victims are not without recourse. Complaints can be lodged through the National Cyber Crime Reporting Portal or directly with local cybercrime units and police authorities.
While advisory guidelines have been issued to social media intermediaries, these steps are only the beginning of addressing the deepfake challenge in India. Intermediaries have been directed to exercise due diligence in identifying misinformation and deepfake content, particularly when such material violates existing laws or community guidelines. Platforms are also expected to act within 36 hours of receiving a valid complaint and take prompt action to remove harmful content.
However, more robust measures are urgently needed. India must move toward the criminalization of harmful deepfakes, especially those used to defame, harass, impersonate, or mislead the public. There is also a pressing need for mandatory disclosure when content is AI-generated, and for the empowerment of victims through simplified mechanisms to report deepfakes and access swift redressal. Finally, significant investment in detection technologies is crucial to equip law enforcement agencies and the judiciary with the tools needed to identify and understand deepfakes.
Deepfakes are a double-edged sword. While they offer creative possibilities in entertainment and innovation, they also pose a serious risk to truth, trust, and safety. If Indian law does not evolve quickly, we risk living in a world where fiction is indistinguishable from fact.
As technology races ahead, one pressing question remains:
Can our laws keep up?
(The writers are advocates practicing before the Supreme Court of India. Views personal.)