By: Quaid Najmi

4 January 2025

Bhujbal’s chopper lands in Pune parking lot

Mumbai: In what is suspected to be a breach of aviation protocols, a chartered helicopter ferrying Food & Civil Supplies Minister Chhagan Bhujbal from Mumbai to Pune skipped the designated helipad and landed in a vehicle parking lot almost a kilometre away.

The incident occurred in Purandar taluka, where Bhujbal was slated to attend a function marking the 200th birth anniversary of the social reformer Mahatma Jyotirao Phule in his home village, Khanwadi.

As crowds of bewildered people watched from around the sprawling parking lot, the helicopter appeared to slow in flight, flew over some overhead high-tension electric cables, and descended gingerly into the parking lot, raising a thick dust storm in which it disappeared for a few seconds before touching the ground.

Moments later, senior Nationalist Congress Party (NCP) leader Bhujbal and others stepped out of the chopper and looked around the unfamiliar territory before several vehicles and police teams rushed there. Minutes earlier there had been chaos and confusion, with some locals shouting warnings about the ‘wrong landing’.

Eyewitnesses said the chopper’s powerful rotors kicked up a thick dust storm and sparked alarm among people in the vicinity, many of whom scrambled to the spot to check what exactly was going on in the parking lot.

Later, the Pune Police said that a designated helipad had been available for the landing but were at a loss to explain how the pilot missed it and veered off quite a distance away into the vehicle parking space. They subsequently asked the pilot to fly the helicopter to the correct landing spot.

Shaken and angry local NCP leaders questioned how a pilot flying a VIP on an official trip could mistake a parking lot for a helipad when the weather and visibility were clear. They demanded to know whether the helipad was improperly marked, or whether it was a case of miscommunication or sheer negligence.
The Pune Police indicated that they would report the matter to the Directorate General of Civil Aviation (DGCA), which may take action against the errant pilot and the helicopter company.

“There was no accident. We all emerged safely. The helicopter pilot landed wrongly in a parking lot because the helipad was not visible. All of us are fine and there is nothing to worry,” said Bhujbal, before he was whisked off by his security team.

“There are many faults in numerous airplanes and helicopters, including maintenance issues and other problems. That’s why I keep saying consistently that VIPs must exercise caution while flying. Fortunately, an accident was averted today, but that doesn’t mean the authorities should be negligent. We expect the government to take urgent precautions,” said Rohit R. Pawar, MLA, NCP (SP).

Deepfakes: The Age of Digital Deception

If left unchecked, deepfakes pose a direct threat to public trust—the social foundation for media, institutions, and law.

In today’s world, where social media clips can go viral within seconds, a troubling question arises: What happens when the video you are watching is not genuine, yet appears completely convincing? Deepfakes represent one of the most advanced applications of artificial intelligence. They rely on sophisticated models such as Generative Adversarial Networks (GANs) and newer diffusion techniques, tools capable of producing astonishingly realistic videos, images, or even audio recordings. These creations can make it seem as though a person said or did something that never actually happened, blurring the line between reality and fabrication.
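The adversarial idea behind GANs can be shown in miniature. The following is an illustrative sketch only, not a real deepfake system: the "data" are just numbers near 4.0, the generator is a single shifted-noise parameter, and the discriminator is a logistic classifier, with hand-derived gradient updates. All names and values here are invented for the illustration.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy 1-D GAN: "real" data are samples near 4.0; the generator produces
# g(z) = theta + z; the discriminator is D(x) = sigmoid(w*x + b).
random.seed(42)
theta, w, b, lr = 0.0, 0.1, 0.0, 0.05

for _ in range(5000):
    x_real = random.gauss(4.0, 0.5)   # a genuine sample
    x_fake = theta + random.gauss(0.0, 0.5)  # a generated sample

    # Discriminator step: ascend log D(x_real) + log(1 - D(x_fake))
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * ((1.0 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1.0 - d_real) - d_fake)

    # Generator step: ascend log D(x_fake), the non-saturating GAN loss
    d_fake = sigmoid(w * x_fake + b)
    theta += lr * (1.0 - d_fake) * w

# After training, theta has drifted from 0 toward the real mean (~4.0):
# the generator has learned to produce samples the discriminator
# can no longer reliably tell apart from real ones.
```

Real deepfake generators play the same game, except the generator is a deep network producing pixels and the discriminator is a deep network judging entire images; that scale is what makes the output so convincing.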


While the technology feels new, its roots stretch back more than two decades. In 1997, researchers developed a system called Video Rewrite that could synchronise lip movements with different audio tracks. In 2016, a project called Face2Face took things further by allowing real-time manipulation of facial expressions. A year later, Synthesising Obama stunned viewers by generating a convincing video of the former U.S. President saying scripted lines he never spoke. But it was in the 2020s that deepfakes became truly mainstream, powered by easy-to-use software and open-source AI models.


Several high-profile deepfake cases have emerged in India in recent years. One of the most discussed is that of BJP leader Manoj Tiwari, whose voice and facial expressions were cloned in a deepfake video in 2020. In May 2024, a young man named Yash Bhavsar was arrested in Madhya Pradesh for creating inappropriate images of women using deepfake technology. In July 2025, Assam's Pratim Bora was caught using AI to generate explicit images of an ex-classmate, which he sold online through a subscription model. The courts have begun to respond. In July 2024, the Bombay High Court delivered a landmark ruling in Arijit Singh vs Codible Ventures, ordering an interim injunction against unauthorised voice cloning using AI. Another case is Ankur Warikoo v. John Doe (deepfake identity misuse), decided by the Delhi High Court in May 2025. In both of these High Court matters, the courts granted injunctions recognising the threat posed by AI-generated fake content, especially the misuse of personality rights.


However, as of now, the Honourable Supreme Court hasn’t yet passed a landmark judgment specifically on deepfake content. Nevertheless, the legal framework for digital evidence requires strict authentication for all electronic content.


In response to these challenges, both the legal system and the technology sector are moving quickly. One major area of focus has been developing ways to detect and block deepfakes. Zero Defend Security, a Bengaluru-based company, launched Vastav.AI, a cloud-based platform that helps analyse and detect deepfakes with up to 99% claimed accuracy. Another breakthrough came with FaceShield, a tool that protects images from being used in deepfake generation. Indian institutions like IISc Bengaluru and IIIT Hyderabad are working on tools that can detect subtle signs of fakery, such as unnatural eye blinks, mismatched lighting, or inconsistencies in speech rhythm. Globally, tech giants like Meta and Google are also building AI to detect AI, creating what are called “deepfake detectors”.
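One of the blink-based cues mentioned above can be sketched very simply. This is a hypothetical heuristic, not any of the named products: given a per-frame "eye openness" score (which a real system would extract with a face-landmark model), it counts blinks and flags footage whose blink rate is unnaturally low, an artifact seen in early deepfakes trained mostly on open-eyed photos. The function names, thresholds, and frame rate are all assumptions for illustration.

```python
def blink_rate(eye_openness, fps=30.0, closed_thresh=0.2):
    """Count blinks (transitions into the 'closed' state) per minute.

    eye_openness: one score per video frame, 0.0 (closed) to 1.0 (open).
    """
    blinks, closed = 0, False
    for v in eye_openness:
        if v < closed_thresh and not closed:
            blinks += 1        # eye just closed: one blink begins
            closed = True
        elif v >= closed_thresh:
            closed = False     # eye reopened
    minutes = len(eye_openness) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

def looks_suspicious(eye_openness, fps=30.0, min_rate=5.0):
    # People typically blink around 15-20 times a minute; a rate far
    # below that is one (weak) signal the face may be synthetic.
    return blink_rate(eye_openness, fps) < min_rate
```

A single cue like this is easy to defeat, which is why production detectors combine many signals (lighting, speech rhythm, compression artifacts) rather than relying on any one tell.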


Yet even with national efforts, the threat is global in scale. International bodies like INTERPOL, the UN’s ITU, and UNODC are now pushing for global standards in watermarking and AI verification, warning that deepfakes are already being linked to child exploitation, online defamation, and election interference. INTERPOL’s “Beyond Illusions” report (2024) stresses these deepfake threats. Experts like Dr Danielle Citron, Dr Robert Chesney, and Dr Hao Li have all raised red flags about how deepfakes could erode public trust. Prof. Ponnurangam Kumaraguru of IIIT Hyderabad is known for projects on deepfake detection and misinformation, especially in Indian contexts. Another renowned scientist is Dr Sumeet Agarwal (IIT Delhi), who works on deep learning, adversarial attacks, and generative AI.
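The watermarking and verification idea those bodies are pushing for can be illustrated in its simplest form: the creator attaches a cryptographic tag to the media bytes, and anyone can later check that the content has not been altered since signing. This is a deliberately minimal sketch using an HMAC over the raw bytes; real provenance standards bind richer metadata and use public-key signatures so verification does not require a shared secret. The key and function names here are invented for the example.

```python
import hmac
import hashlib

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the creator

def sign_media(media_bytes: bytes) -> str:
    """Produce a provenance tag: an HMAC-SHA256 over the media bytes."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Check the tag in constant time; any edit to the bytes breaks it."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

clip = b"...raw video frames..."
tag = sign_media(clip)
# An untouched clip verifies; a clip with even one altered byte does not.
```

The limitation is the flip side of the design: a missing or broken tag proves only that the content was changed after signing, not that it is a deepfake, which is why watermarking is paired with detection rather than replacing it.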


In conclusion, deepfakes are blurring the line between truth and fiction at an alarming pace. The results can be almost indistinguishable from reality, and the consequences are starting to show up in real lives, real crimes, and real courtrooms.


(Dr. Kumar is a retired IPS officer and forensic advisor to the Assam government and Shiwani Phukan is a student of National Forensic University, Guwahati. Views personal.)
