Deepfakes: The Age of Digital Deception
- Dr. Keshav Kumar and Shiwani Phukan
- Oct 7
- 3 min read
If left unchecked, deepfakes pose a direct threat to public trust—the social foundation for media, institutions, and law.

In today’s world, where social media clips can go viral within seconds, a troubling question arises: what happens when the video you are watching is not genuine, yet appears completely convincing? Deepfakes represent one of the most advanced applications of artificial intelligence. They rely on sophisticated models such as Generative Adversarial Networks (GANs) and newer diffusion techniques, tools capable of producing astonishingly realistic videos, images, and even audio recordings. These creations can make it seem as though a person said or did something that never actually happened, blurring the line between reality and fabrication.
While the technology feels new, its roots stretch back more than two decades. In 1997, researchers developed a system called Video Rewrite that could synchronise lip movements with different audio tracks. In 2016, a project called Face2Face took things further by allowing real-time manipulation of facial expressions. A year later, Synthesising Obama stunned viewers by generating a convincing video of the former U.S. President saying scripted lines he never spoke. But it was in the 2020s that deepfakes became truly mainstream, powered by easy-to-use software and open-source AI models.
Several high-profile deepfake cases have emerged in India in recent years. Among the most discussed is that of BJP leader Manoj Tiwari, whose voice and facial expressions were cloned in a deepfake video that circulated in 2020. In May 2024, a young man named Yash Bhavsar was arrested in Madhya Pradesh for creating obscene deepfake images of women. In July 2024, the Bombay High Court delivered a landmark ruling in Arijit Singh vs Codible Ventures, ordering an interim injunction against unauthorised voice cloning using AI. In May 2025, the Delhi High Court granted similar relief in Ankur Warikoo v. John Doe, a case of deepfake identity misuse. In both judgments, the courts recognised the threat posed by AI-generated fake content, especially the misuse of personality rights. Most recently, in July 2025, Assam's Pratim Bora was caught using AI to generate explicit images of an ex-classmate, which he sold online through a subscription model.
However, the Honourable Supreme Court is yet to pass a landmark judgment specifically on deepfake content. Nevertheless, India's legal framework for digital evidence already requires strict authentication of all electronic content.
In response to these challenges, both the legal system and the technology sector are moving quickly. One major area of focus has been developing ways to detect and block deepfakes. Zero Defend Security, a Bengaluru-based company, has launched Vastav.AI, a cloud-based platform that analyses and detects deepfakes with a claimed accuracy of up to 99%. Another breakthrough came with FaceShield, a tool that protects images from being used in deepfake generation. Indian institutions such as IISc Bengaluru and IIIT Hyderabad are working on tools that can spot subtle signs of fakery, such as unnatural eye blinks, mismatched lighting, or inconsistencies in speech rhythm. Globally, tech giants like Meta and Google are also building AI to detect AI, creating so-called “deepfake detectors”.
Yet even with national efforts, the threat is global in scale. International bodies such as INTERPOL, the UN's ITU, and UNODC are now pushing for global standards in watermarking and AI verification, warning that deepfakes are already being linked to child exploitation, online defamation, and election interference; INTERPOL's “Beyond Illusions” report (2024) stresses these threats. Experts such as Dr Danielle Citron, Dr Robert Chesney, and Dr Hao Li have raised red flags about how deepfakes could erode public trust. In India, Prof. Ponnurangam Kumaraguru of IIIT Hyderabad is known for his work on deepfake detection and misinformation, especially in Indian contexts, while Dr Sumeet Agarwal of IIT Delhi works on deep learning, adversarial attacks, and generative AI.
In conclusion, deepfakes are blurring the line between truth and fiction at an alarming pace. The results can be almost indistinguishable from reality, and the consequences are already showing up in real lives, real crimes, and real courtrooms.
(Dr. Kumar is a retired IPS officer and forensic advisor to the Assam government and Shiwani Phukan is a student of National Forensic University, Guwahati. Views personal.)