
By Divyaa Advaani

2 November 2024

When agreement kills growth

In the early stages of building a business, growth is often driven by clarity, speed, and conviction. Founders make decisions quickly, rely on their instincts, and push forward with a strong sense of belief in their methods. This decisiveness is not only necessary, it is often the very reason the business begins to grow. However, as businesses cross certain thresholds, particularly beyond the Rs 5 crore mark, the nature of growth begins to change. What once created momentum can quietly begin to create limitations.

In many professional environments, it is not uncommon to encounter business owners who are deeply convinced of their approach. Their methods have delivered results, their experience reinforces their judgment, and their confidence becomes a defining trait. Yet in this very confidence lies a subtle risk that is often overlooked. When conviction turns into certainty without space for dialogue, conversations begin to narrow. Suggestions are heard, but not always considered. Perspectives are offered, but not always encouraged. Decisions are made, but not always explained.

From the outside, this may still appear as strong leadership. Internally, however, a different dynamic begins to take shape: people start to agree more than they contribute.

This is where many businesses unknowingly enter a critical phase. When teams, partners, or stakeholders begin to hold back their perspective, the quality of thinking around the business declines. What appears as alignment is often silent disengagement. What looks like efficiency is sometimes the absence of challenge. Over time, this directly affects the decisions being made.

At the Rs 5 crore level, this may not be immediately visible. Operations continue, revenue flows, and the business appears stable. But as the organisation attempts to grow further, this lack of diverse thinking begins to surface as a constraint. Growth slows, not for lack of effort, but because of limited perspective.

On the other side of this equation are individuals who consistently find themselves accommodating such dynamics. They recognise when their voice is not being fully heard, yet choose not to assert it. The intention is often to preserve relationships, avoid friction, or maintain a sense of professional ease.

Initially, this approach appears collaborative. Over time, however, it begins to shape perception. When individuals do not express their perspective, they are gradually seen as agreeable rather than essential. Their presence is valued, but their input is not actively sought. In many cases, they become part of the process, but not part of the decision.

This is where personal branding begins to influence business outcomes in ways that are not immediately obvious. A personal brand is not built only through visibility or achievement. It is built through how consistently one demonstrates clarity, confidence, and openness in moments that require it. It is shaped by whether people feel encouraged to think around you, or restricted in your presence.

At higher levels of business, this distinction becomes critical. If people agree with you more than they challenge you, it may not be a sign of strong leadership; it may be an indication that your environment is no longer enabling better thinking. Similarly, if you find yourself constantly adjusting to others without expressing your own perspective, your contribution may be diminishing in ways that affect both your influence and your growth.

Both situations carry a cost. They affect decision quality, limit innovation, and, over time, restrict the scalability of the business itself. What makes this particularly challenging is that these patterns develop gradually, often going unnoticed until the impact becomes difficult to ignore.

The most effective leaders recognise this early. They create space for dialogue without losing direction. They express conviction without dismissing perspective. They build environments where contribution is expected, not avoided. In doing so, they strengthen not only their business, but also their personal brand.

For entrepreneurs operating at a stage where growth is no longer just about execution but about expanding thinking, this becomes an important point of reflection. If there is even a possibility that your current interactions are limiting the quality of thinking around you, it is worth addressing before it begins to affect outcomes.

I work with a select group of founders and professionals to help them refine how they are perceived, communicate with greater impact, and build personal brands that support sustained growth. You may explore this further here: https://sprect.com/pro/divyaaadvaani

In the long run, it is not only the decisions you make, but the thinking you allow around those decisions, that determines how far your business can truly grow.

(The author is a personal branding expert. She has clients from 14+ countries. Views personal.)

Labelling the Unreal: India’s War on Deepfakes

As synthetic media blurs the boundary between fact and fiction, India’s proposed amendments to the IT Rules seek to anchor truth in law.

In 1818, Mary Shelley warned that invention without restraint could spawn monsters. Her Frankenstein was a parable about humankind’s failure to foresee the consequences of its own ingenuity. Two centuries later, that cautionary tale is playing out in digital form. The creature this time is a vast, shape-shifting artificial intelligence (AI) capable of conjuring words, faces and voices indistinguishable from reality.


Last week, the Indian government decided to draw a line in the digital sand. The Ministry of Electronics and Information Technology proposed amendments to the IT Rules, 2021, requiring social-media and AI firms to clearly label all synthetically generated content.


The move, spurred by the growing menace of deepfakes, seeks to restore a sense of reality in an era where even truth has become malleable.


Few democracies are as vulnerable to digital deceit as India. With almost a billion internet users and a combustible mix of languages, faiths and political loyalties, misinformation here can be deadly. Deepfake audio and video clips have already been weaponised to manipulate voters, smear public figures, and defraud citizens. A recent case involving a fabricated ad showing spiritual leader Sadhguru’s arrest prompted a Delhi High Court order directing Google to take it down.


The proposed law mandates that significant social-media intermediaries - those with more than five million users - flag, watermark and embed metadata in AI-generated media. Users uploading such content must declare it synthetic, while platforms must verify those declarations through “reasonable and proportionate” technical measures. Violators risk losing their ‘safe harbour’ protection, which is the legal immunity that has long shielded platforms from liability for user posts.
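The pipeline the draft rules describe, a user declaration at upload, platform-side labelling with embedded metadata, and a verification check, can be sketched in miniature. The schema below is purely illustrative: the rules do not prescribe field names, a hash algorithm, or any concrete format, so the SHA-256 provenance record and the function names here are assumptions for the sake of the example.

```python
import hashlib
import json

def embed_label(media_bytes: bytes, declared_synthetic: bool) -> dict:
    """Attach an illustrative provenance record to a media payload.

    Field names are hypothetical; the draft amendments do not define
    a concrete metadata schema.
    """
    return {
        "label": "synthetically generated" if declared_synthetic else "authentic",
        "declared_synthetic": declared_synthetic,
        # A content hash lets the platform detect later tampering with the payload.
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }

def verify_declaration(media_bytes: bytes, record: dict) -> bool:
    """A platform-side check: the stored hash must still match the payload."""
    return record["sha256"] == hashlib.sha256(media_bytes).hexdigest()

clip = b"...binary video payload..."
record = embed_label(clip, declared_synthetic=True)
print(json.dumps(record, indent=2))
print(verify_declaration(clip, record))          # unmodified payload verifies
print(verify_declaration(clip + b"x", record))   # tampered payload fails
```

Real deployments would rely on robust watermarks that survive re-encoding rather than a bare hash, but the sketch shows the basic shape of "declare, embed, verify" that the proposed rules envisage.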


Labelling artifice does not suppress creativity; it is essential to reclaiming honesty. Just as newspapers distinguish editorial from advertisement, synthetic media should announce its nature.


AI mayhem

The IT ministry warned that generative AI was being ‘weaponised’ to damage reputations, sway elections and commit fraud. India has already seen what synthetic deception looks like. In April 2024, just before the Lok Sabha election, a doctored video of Home Minister Amit Shah circulated online, showing him apparently pledging to scrap caste-based reservations if his party returned to power. The footage was credible enough to ignite outrage across caste lines before fact-checkers revealed it was fake. The Delhi Police later traced its origin to party activists in Telangana, some of whom were arrested. Another deepfake video, which surfaced this year, falsely showed Shah endorsing a financial-investment platform.


In a country where a rumour can spark a riot, the capacity to manufacture outrage from pixels is lethal.


Nations worldwide are scrambling to contain a technology that moves faster than law or ethics. The European Union’s Artificial Intelligence Act, adopted in March 2024, requires that all generative systems clearly label synthetic content - a measure set to take full effect by 2026. In March this year, Spain approved fines for unlabelled AI creations.


China issued its ‘Measures for Labelling Artificial Intelligence-Generated Content’ in March this year, effective from September, compelling both visible watermarks and embedded metadata on every AI-produced image, video, or voice clip. In Washington, the Take It Down Act signed in April 2025 obliges platforms to remove non-consensual AI-generated imagery within 48 hours of notification. Even Denmark, a digital-rights pioneer, amended its copyright law in June 2025 to give citizens ownership of their likeness and voice in AI-made material.


The United States, still mired in congressional gridlock, has turned to voluntary pledges from tech firms, with the White House securing commitments from OpenAI, Google, and Meta to watermark AI outputs. Even Britain, where regulators have traditionally favoured light-touch oversight, is now funding research into ‘authenticity infrastructure’ to track provenance in digital media.


India’s proposal borrows from all these models yet retains a distinctly democratic flavour, favouring awareness over surveillance and deterrence over censorship. In a sense, the Indian government’s move is an attempt to repair a breach in the social contract. The state’s duty is not merely to uphold free expression but to ensure that the public square remains anchored in truth. As deepfakes dissolve the boundary between fact and fabrication, that anchoring becomes impossible without intervention. If democracy depends on shared reality, synthetic media threatens to erode its very foundation.


Global norm

Predictably, technology companies warn that automatic detection of deepfakes is technically complex. Generative models evolve faster than watermarking tools can catch them. But firms that can conjure photo-realistic worlds from a single prompt cannot plead helplessness when asked to tag their own creations. OpenAI’s Sora, Google’s Gemini, and Meta’s in-house AI systems already experiment with invisible metadata, watermarking, and blockchain-based verification. India’s draft rules merely codify what is fast becoming a global norm: transparency as an obligation, not an option.


The real innovation lies in governance. By making traceability and labelling mandatory, India is forcing platforms to build detection systems into their architecture, not bolt them on after scandal strikes. The country’s experience with misinformation, ranging from lynchings triggered by false WhatsApp forwards to financial scams built on fake celebrity endorsements, has shown that technological ‘neutrality’ is no longer defensible.


E.M. Forster’s classic 1909 short story ‘The Machine Stops’ imagined a future where humans, cocooned by technology, mistake simulation for life itself. Today, those parables feel prophetic. The machine has not stopped; it has learned to speak in our own voices.


There is historical precedent, too, for such regulatory corrections. When photography first arrived, newspapers adopted ethical codes against manipulated images. The advent of radio prompted broadcast licences to curb propaganda. In each case, society reasserted the primacy of the real. India’s move to label AI content follows in that tradition.


The world’s democracies are converging on a single insight: that truth needs infrastructure. The United States, spooked by election deepfakes, is urging voluntary labelling by AI firms. The EU has legislated it. China enforces it by fiat. India’s path may prove instructive because it straddles both the democratic ideal of open debate and the developmental imperative of social order.


Regulation alone will not suffice. India needs a parallel campaign in digital literacy, teaching citizens to read the internet with the same scepticism they bring to gossip. Understanding what “synthetic” means, learning to verify sources, and reporting manipulated content should become civic habits, not elite pastimes. The country’s media, schools, and regional broadcasters have a role to play in making this reform not just a rulebook, but a cultural shift.


Ultimately, the success of these laws will hinge less on detection algorithms and more on a revived public appetite for authenticity.


India’s proposed deepfake labelling regime is a recognition that democracy cannot survive on illusions. Every society depends on a shared baseline of truth; without it, politics degenerates into theatre and consent into manipulation. As generative AI blurs those boundaries, India’s response is both pragmatic and necessary.
