
By: Correspondent

23 August 2024

Kaleidoscope

Devotees gather at the banks of River Ganga to offer prayers on the 'Chhath Puja' festival in Patna on Monday. Bollywood actor Yami Gautam Dhar poses for photographs at the trailer launch of her upcoming film 'Haq' in Mumbai on Monday. Commuters make their way amid low visibility as air quality deteriorates across Northern India, in Gurugram on Monday. Students in traditional attire perform during the inauguration of the DREAM School Project at GGHSS, Kothibagh in Srinagar on Monday. Drag artists apply makeup for the Day of the Dead Catrina parade in Mexico City on Sunday.

Labelling the Unreal: India’s War on Deepfakes

As synthetic media blurs the boundary between fact and fiction, India’s proposed amendments to the IT Rules seek to anchor truth in law.

In 1818, Mary Shelley warned that invention without restraint could spawn monsters. Her Frankenstein was a parable about humankind’s failure to foresee the consequences of its own ingenuity. Two centuries later, that cautionary tale is playing out in digital form. The creature this time is a vast, shape-shifting artificial intelligence (AI) capable of conjuring words, faces and voices indistinguishable from reality.


Last week, the Indian government decided to draw a line in the digital sand. The Ministry of Electronics and Information Technology proposed amendments to the IT Rules, 2021, requiring social-media and AI firms to clearly label all synthetically generated content.


The move, spurred by the growing menace of deepfakes, seeks to restore a sense of reality in an era where even truth has become malleable.


Few democracies are as vulnerable to digital deceit as India. With almost a billion internet users and a combustible mix of languages, faiths and political loyalties, misinformation here can be deadly. Deepfake audio and video clips have already been weaponised to manipulate voters, smear public figures, and defraud citizens. A recent case involving a fabricated ad showing spiritual leader Sadhguru’s arrest prompted a Delhi High Court order directing Google to take it down.


The proposed law mandates that significant social-media intermediaries - those with more than five million users - flag, watermark and embed metadata in AI-generated media. Users uploading such content must declare it synthetic, while platforms must verify those declarations through “reasonable and proportionate” technical measures. Violators risk losing their ‘safe harbour’ protection, which is the legal immunity that has long shielded platforms from liability for user posts.
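The mechanics of such a scheme can be sketched in a few lines. The manifest fields and function names below are purely illustrative assumptions, not the ministry's actual specification or any existing provenance standard: the idea is simply that a label travels with the content as metadata, and a platform verifies the declaration against the upload itself.

```python
import hashlib


def label_synthetic(media_bytes: bytes, generator: str) -> dict:
    # Hypothetical provenance manifest: a declaration that the content
    # is AI-generated, bound to the file by a cryptographic hash.
    return {
        "synthetic": True,
        "generator": generator,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }


def verify_label(media_bytes: bytes, manifest: dict) -> bool:
    # One "reasonable and proportionate" check a platform might run:
    # confirm the upload is declared synthetic and that the declared
    # hash still matches the bytes actually received.
    return (
        manifest.get("synthetic") is True
        and manifest.get("sha256") == hashlib.sha256(media_bytes).hexdigest()
    )


clip = b"\x00example-video-bytes"
manifest = label_synthetic(clip, generator="example-model")
print(verify_label(clip, manifest))         # label matches the upload
print(verify_label(clip + b"x", manifest))  # content altered after labelling
```

Real schemes are far more elaborate (invisible watermarks survive re-encoding, which a plain hash does not), but the principle is the same: the label must be machine-checkable, not merely asserted.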


Labelling artifice does not suppress creativity; it is essential to reclaiming honesty. Just as newspapers distinguish editorial from advertisement, synthetic media should announce its nature.


AI mayhem

The IT ministry warned that generative AI was being ‘weaponised’ to damage reputations, sway elections and commit fraud. India has already seen what synthetic deception looks like. In April 2024, just before the Lok Sabha election, a doctored video of Home Minister Amit Shah circulated online, showing him apparently pledging to scrap caste-based reservations if his party returned to power. The footage was credible enough to ignite outrage across caste lines before fact-checkers revealed it was fake. The Delhi Police later traced its origin to party activists in Telangana, some of whom were arrested. Another deepfake video that surfaced this year falsely showed Shah endorsing a financial-investment platform.


In a country where a rumour can spark a riot, the capacity to manufacture outrage from pixels is lethal.


Nations worldwide are scrambling to contain a technology that moves faster than law or ethics. The European Union’s Artificial Intelligence Act, adopted in March 2024, requires that all generative systems clearly label synthetic content - a measure that is to take full effect by 2026. In March this year, Spain approved fines for unlabelled AI creations.


China issued its ‘Measures for Labelling Artificial Intelligence-Generated Content’ in March this year, effective from September, compelling both visible watermarks and embedded metadata on every AI-produced image, video, or voice clip. In Washington, the Take It Down Act signed in April 2025 obliges platforms to remove non-consensual AI-generated imagery within 48 hours of notification. Even Denmark, a digital-rights pioneer, amended its copyright law in June 2025 to give citizens ownership of their likeness and voice in AI-made material.


The United States, still mired in congressional gridlock, has turned to voluntary pledges from tech firms, with the White House securing commitments from OpenAI, Google, and Meta to watermark AI outputs. Even Britain, where regulators have traditionally favoured light-touch oversight, is now funding research into ‘authenticity infrastructure’ to track provenance in digital media.


India’s proposal borrows from all these models yet retains a distinctly democratic flavour which favours awareness over surveillance and deterrence over censorship. In a sense, the Indian government’s move is an attempt to repair a breach in the social contract. The state’s duty is not merely to uphold free expression but to ensure that the public square remains anchored in truth. As deepfakes dissolve the boundary between fact and fabrication, that anchoring becomes impossible without intervention. If democracy depends on shared reality, synthetic media threatens to erode its very foundation.


Global norm

Predictably, technology companies warn that automatic detection of deepfakes is technically complex. Generative models evolve faster than watermarking tools can catch them. But firms that can conjure photo-realistic worlds from a single prompt cannot plead helplessness when asked to tag their own creations. OpenAI’s Sora, Google’s Gemini, and Meta’s in-house AI systems already experiment with invisible metadata, watermarking, and blockchain-based verification. India’s draft rules merely codify what is fast becoming a global norm: transparency as an obligation, not an option.


The real innovation lies in governance. By making traceability and labelling mandatory, India is forcing platforms to build detection systems into their architecture, not bolt them on after scandal strikes. The country’s experience with misinformation, ranging from lynchings triggered by false WhatsApp forwards to financial scams built on fake celebrity endorsements, has shown that technological ‘neutrality’ is no longer defensible.


E.M. Forster’s classic 1909 short story ‘The Machine Stops’ imagined a future where humans, cocooned by technology, mistake simulation for life itself. Today, those parables feel prophetic. The machine has not stopped; it has learned to speak in our own voices.


There is historical precedent, too, for such regulatory corrections. When photography first arrived, newspapers adopted ethical codes against manipulated images. The advent of radio prompted broadcast licences to curb propaganda. In each case, society reasserted the primacy of the real. India’s move to label AI content follows in that tradition.


The world’s democracies are converging on a single insight: that truth needs infrastructure. The United States, spooked by election deepfakes, is urging voluntary labelling by AI firms. The EU has legislated it. China enforces it by fiat. India’s path may prove instructive because it straddles both the democratic ideal of open debate and the developmental imperative of social order.


Regulation alone will not suffice. India needs a parallel campaign in digital literacy, teaching citizens to read the internet with the same scepticism they bring to gossip. Understanding what “synthetic” means, learning to verify sources, and reporting manipulated content should become civic habits, not elite pastimes. The country’s media, schools, and regional broadcasters have a role to play in making this reform not just a rulebook, but a cultural shift.


Ultimately, the success of these laws will hinge less on detection algorithms and more on a revived public appetite for authenticity.


India’s proposed deepfake labelling regime is a recognition that democracy cannot survive on illusions. Every society depends on a shared baseline of truth; without it, politics degenerates into theatre and consent into manipulation. As generative AI blurs those boundaries, India’s response is both pragmatic and necessary.
