
By:

Rahul Kulkarni

30 March 2025 at 3:32:54 pm

The Boundary Collapse

When kindness becomes micromanagement

It started with a simple leave request.

“Hey, can I take Friday off? Need a personal day,” Meera messaged Rohit.

Rohit replied instantly: “Of course. All good. Just stay reachable if anything urgent comes up.”

He meant it as reassurance. But the team didn’t hear reassurance. They heard a rule.

By noon, two things had shifted inside The Workshop: Meera felt guilty for even asking, and everyone else quietly updated their mental handbook: Leave is allowed… but not really.

This is boundary collapse… when a leader’s good intentions unintentionally blur the limits that protect autonomy and rest.

When care quietly turns into control

Founders rarely intend to micromanage. What looks like control from the outside often starts as care from the inside.

“Let me help before something breaks.”
“Let me stay involved so we don’t lose time.”
“Loop me in… I don’t want you stressed.”

Supportive tone. Good intentions. But one invisible truth defines workplace psychology: when power says “optional,” it never feels optional.
So when a client requested a revision, Rohit gently pinged: “If you’re free, could you take a look?”

Of course she logged in. Of course she handled it. And by Monday, the cultural shift was complete: leave = location change, not a boundary. A founder’s instinct had quietly become a system.

Pattern 1: The Generous Micromanager

Modern micromanagement rarely looks aggressive. It looks thoughtful:

“Let me refine this so you’re not stuck.”
“I’ll review it quickly.”
“Share drafts so we stay aligned.”

Leaders believe they’re being helpful. Teams hear:

“You don’t fully trust me.”
“I should check with you before finishing anything.”
“My decisions aren’t final.”

Gentle micromanagement shrinks ownership faster than harsh micromanagement ever did, because people can’t challenge kindness.

Pattern 2: Cultural conditioning around availability

In many Indian workplaces, “time off” has an unspoken footnote: be reachable, just in case. No one says it directly. No one pushes back openly. The expectation survives through habit: leave… but monitor messages. Rest… but don’t disconnect. Recover… but stay alert.

Contrast this with a global team we worked with. A designer wrote, “I’ll be off Friday, but available if needed.” Her manager replied: “If you’re working on your off-day, we mismanaged the workload… not the boundary.”

One conversation. Two cultural philosophies. Two completely different emotional outcomes.

Pattern 3: The override reflex

Every founder has a version of this reflex. Whenever Rohit sensed risk, real or imagined, he stepped in: rewriting copy, adjusting a design, rescoping a task, reframing an email. Always fast. Always polite. Always “just helping.”

But each override delivered one message: “Your autonomy is conditional.” You own decisions… until the founder feels uneasy. You take initiative… until instinct replaces delegation. No confrontation. No drama. Just quiet erosion of confidence.
The family-business amplification

Boundary collapse becomes extreme in family-managed companies. We worked with one firm where four family members… founder, spouse, father, cousin… all had informal authority. Everyone cared. Everyone meant well. But for employees, decision-making became a maze: strategy approved by the founder, aesthetics by the spouse, finance by the father, tone by the cousin.

They didn’t need leadership. They needed clarity. Good intentions without boundaries create internal anarchy.

The global contrast

A European product team offered a striking counterexample. There, the founder rarely intervened mid-stream… not because of distance, but because of design: “If you own the decision, you own the consequences.”

Decision rights were clear. Escalation paths were explicit. Authority didn’t shift with mood or urgency. No late-night edits. No surprise rewrites. No “quick checks.” No emotional overrides.

As one designer put it: “If my boss wants to intervene, he has to call a decision review. That friction protects my autonomy.”

The result: faster execution, higher ownership and zero emotional whiplash. Boundaries weren’t personal. They were structural. That difference changes everything.

Why boundary collapse is so costly

Its damage is not dramatic. It’s cumulative.

People stop resting → you get presence, not energy.
People stop taking initiative → decisions freeze.
People stop trusting empowerment → autonomy becomes theatre.
People start anticipating the boss → performance becomes emotional labour.
People burn out silently → not from work, but from vigilance.

Boundary collapse doesn’t create chaos. It creates hyper-alertness, the heaviest tax on any team.

The real paradox

Leaders think they’re being supportive. Teams experience supervision. Leaders assume boundaries are obvious. Teams see boundaries as fluid. Leaders think autonomy is granted. Teams act as though autonomy can be revoked at any moment.

This is the Boundary Collapse → a misunderstanding born not from intent, but from the invisible weight of power. Micromanagement today rarely looks like anger. More often, it looks like kindness without limits.

(Rahul Kulkarni is Co-founder at PPS Consulting. He patterns the human mechanics of scaling, where workplace behavior quietly shapes business outcomes. Views personal.)

Deepfakes: The Age of Digital Deception

If left unchecked, deepfakes pose a direct threat to public trust—the social foundation for media, institutions, and law.


In today’s world, where social media clips can go viral within seconds, a troubling question arises: What happens when the video you are watching is not genuine, yet appears completely convincing? Deepfakes represent one of the most advanced applications of artificial intelligence. They rely on sophisticated models such as Generative Adversarial Networks (GANs) and newer diffusion techniques, which can produce astonishingly realistic videos, images, and even audio recordings. These creations can make it seem as though a person said or did something that never actually happened, blurring the line between reality and fabrication.
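In a GAN, a generator network fabricates samples while a discriminator network tries to tell them apart from real data; each improves by competing with the other. The toy sketch below is a hypothetical one-dimensional illustration of that adversarial loop, nothing like a production deepfake model: the “generator” is a single number trying to mimic the mean of real data, and the “discriminator” is a tiny logistic classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

REAL_MEAN = 4.0   # the "real data" distribution the generator must imitate
mu = 0.0          # generator parameter: it outputs mu + noise
w, b = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + b)
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(REAL_MEAN, 0.5, batch)
    fake = mu + rng.normal(0.0, 0.5, batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: shift mu so the discriminator rates fakes as real
    d_fake = sigmoid(w * fake + b)
    mu += lr * np.mean((1 - d_fake) * w)

print(round(mu, 2))  # mu drifts toward REAL_MEAN as the game proceeds
```

The same tug-of-war, scaled up to deep networks over pixels and audio samples, is what lets a trained generator produce faces and voices the discriminator, and eventually humans, cannot distinguish from real ones.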


While the technology feels new, its roots stretch back more than two decades. In 1997, researchers developed a system called Video Rewrite that could synchronise lip movements with different audio tracks. In 2016, a project called Face2Face took things further by allowing real-time manipulation of facial expressions. A year later, Synthesising Obama stunned viewers by generating a convincing video of the former U.S. President saying scripted lines he never spoke. But it was in the 2020s that deepfakes became truly mainstream, powered by easy-to-use software and open-source AI models.


Several high-profile deepfake cases have recently emerged in India. One of the most discussed is that of BJP leader Manoj Tiwari, whose voice and facial expressions were cloned in a 2020 deepfake video. In May 2024, a young man named Yash Bhavsar was arrested in Madhya Pradesh for creating obscene images of women using deepfake technology. In July 2025, Assam’s Pratim Bora was caught using AI to generate explicit images of an ex-classmate, which he sold online through a subscription model. In July 2024, the Bombay High Court delivered a landmark ruling in Arijit Singh vs Codible Ventures, ordering an interim injunction against unauthorised voice cloning using AI. In another case, Ankur Warikoo v. John Doe (Delhi High Court, May 2025), the court granted similar relief against deepfake identity misuse. In both High Court judgements, the courts granted injunctions recognising the threat posed by AI-generated fake content, especially the misuse of personality rights.


However, the Honourable Supreme Court has not yet passed a landmark judgment specifically on deepfake content. Meanwhile, the legal framework for digital evidence already requires strict authentication of all electronic content.


In response to these challenges, both the legal system and the technology sector are moving quickly. One major area of focus has been developing ways to detect and block deepfakes. Zero Defend Security, a Bengaluru-based company, launched Vastav.AI, a cloud-based platform reported to analyse and detect deepfakes with up to 99% accuracy. Another development is FaceShield, a tool that protects images from being used in deepfake generation. Indian institutions like IISc Bengaluru and IIIT Hyderabad are working on tools that can detect subtle signs of fakery, such as unnatural eye blinks, mismatched lighting, or inconsistencies in speech rhythm. Globally, tech giants like Meta and Google are also building AI to detect AI: so-called “deepfake detectors”.
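Detectors of this kind typically reduce a video to per-frame signals and test them against human norms. As a purely illustrative sketch, and not the algorithm of any tool named above, suppose a hypothetical per-frame “eye-openness” score has already been extracted from face landmarks; a blink-rate check might then flag clips whose blink frequency falls outside the typical human range:

```python
def count_blinks(openness_series, threshold=0.2):
    """Count blink events: contiguous runs where eye openness dips below threshold."""
    blinks, closed = 0, False
    for score in openness_series:
        if score < threshold and not closed:
            blinks += 1
            closed = True
        elif score >= threshold:
            closed = False
    return blinks

def looks_suspicious(openness_series, fps=30, min_per_min=4, max_per_min=40):
    """Flag a clip whose blink rate is outside a plausible human range
    (humans blink very roughly 10-20 times per minute)."""
    minutes = len(openness_series) / (fps * 60)
    rate = count_blinks(openness_series) / minutes
    return not (min_per_min <= rate <= max_per_min)

# One minute of footage at 30 fps with no blinks at all: suspicious.
flat = [0.3] * 1800
# The same minute with 15 brief dips (blinks): plausible.
normal = [0.3] * 1800
for i in range(0, 1800, 120):
    normal[i] = 0.1
```

Real detectors combine many such weak cues (lighting, speech rhythm, compression artefacts) in learned models, since any single heuristic like this one is easy for newer generators to satisfy.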


Yet even with national efforts, the threat is global in scale. International bodies like INTERPOL, the UN’s ITU, and UNODC are now pushing for global standards in watermarking and AI verification, warning that deepfakes are already being linked to child exploitation, online defamation, and election interference. INTERPOL’s “Beyond Illusions” report (2024) stresses these threats. Experts like Dr Danielle Citron, Dr Robert Chesney, and Dr Hao Li have all raised red flags about how deepfakes could erode public trust. In India, Prof. Ponnurangam Kumaraguru of IIIT Hyderabad is known for projects on deepfake detection and misinformation, especially in Indian contexts, and Dr Sumeet Agarwal of IIT Delhi works on deep learning, adversarial attacks, and generative AI.


In conclusion, deepfakes are blurring the line between truth and fiction at an alarming pace. The results can be almost indistinguishable from reality, and the consequences are starting to show up in real lives, real crimes, and real courtrooms.


(Dr. Kumar is a retired IPS officer and forensic advisor to the Assam government and Shiwani Phukan is a student of National Forensic University, Guwahati. Views personal.)


