
By: Correspondent

23 August 2024 at 4:29:04 pm

Kaleidoscope

A pilgrim kisses a child before departing for a pilgrimage to gurdwaras in Pakistan ahead of the Baisakhi festival, at the India-Pakistan Attari-Wagah border in Attari on Friday. Bollywood actor Mrunal Thakur during the special screening of the film 'Dacoit Ek Prem Katha' in Mumbai on Thursday. School teachers and students perform 'Bhangra', a traditional folk dance, ahead of the Baisakhi festival in a wheat field near Jammu on Friday. Members of the public arrive to attend Ladies Day, the second day of the Grand National Horse Racing festival, at Aintree racecourse near Liverpool, England, on Friday. A worker unloads sacks of wheat grain at a warehouse in Bhopal on Friday.

War at Machine Speed

The US–Israel strikes on Iran have shown how artificial intelligence will dictate the future of warfare.

Military history is punctuated by moments when technology abruptly shifts the balance of power. The machine gun, radar and nuclear weapons each transformed warfare in their time. Artificial intelligence now appears poised to join that list. Recent clashes involving Israel, the United States and Iran suggest that algorithms are beginning to shape the outcome of conflicts as decisively as tanks or aircraft once did.


In modern war, victory increasingly belongs not only to the side with superior firepower but to the one that can process information fastest. AI systems can sift through torrents of intelligence, from satellites, drones, intercepted communications and social-media signals, and convert them into precise targeting decisions within minutes. This compression of time has altered what strategists call the ‘kill chain’, or the sequence that turns raw data into military action.


During recent hostilities in the Middle East, American and Israeli forces reportedly deployed sophisticated machine-learning systems to integrate streams of intelligence and guide precision strikes against Iranian assets. Before launching attacks, cyber teams infiltrated digital networks, intercepting surveillance feeds and communications that were then analysed by AI tools to verify targets. Iranian air-defence systems were swiftly neutralised, allowing waves of coordinated strikes. Tehran’s response relied largely on missile launches, highlighting the asymmetry between traditional firepower and data-driven warfare.

Algorithmic Kill Chain

At the centre of this transformation lies Project Maven, a Pentagon programme first launched in 2017 to harness artificial intelligence for battlefield intelligence. Developed with the help of firms such as Palantir Technologies, the system analyses vast volumes of surveillance imagery and communications data to identify potential targets.


Traditionally, military analysts required hours or even days to evaluate intelligence streams and decide where to strike. The United States once acknowledged that assembling reliable targeting options could take as long as 72 hours. AI systems such as Maven can compress that process into minutes, producing hundreds of potential strike options almost instantly. During the opening hours of recent hostilities, American and Israeli forces reportedly launched hundreds of AI-coordinated strikes, overwhelming Iranian defences before commanders could react.


The implications are profound. Modern battlefields generate enormous volumes of data: satellite imagery, drone feeds, intercepted phone calls, emails and encrypted messages from platforms such as WhatsApp and Telegram. Some of it is genuine intelligence; much of it is deliberate disinformation designed to confuse analysts. AI systems excel at detecting patterns within this chaos, filtering out false signals and highlighting credible threats.
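The core of that filtering, stripped to its essentials, is cross-source corroboration: a signal reported independently by several sensors is more credible than one that appears only once. A toy sketch in Python illustrates the idea; the function, data and threshold are all invented for illustration, not drawn from any real system:

```python
from collections import defaultdict

def corroborated_targets(reports, min_sources=2):
    """Keep only targets reported by at least `min_sources` distinct
    intelligence sources -- a crude stand-in for the cross-source
    filtering that separates credible threats from noise."""
    sources_by_target = defaultdict(set)
    for source, target in reports:
        sources_by_target[target].add(source)
    return {t for t, s in sources_by_target.items() if len(s) >= min_sources}

reports = [
    ("satellite", "site-A"), ("drone", "site-A"),
    ("intercept", "site-B"),                # single source: likely noise
    ("drone", "site-C"), ("intercept", "site-C"),
]
print(sorted(corroborated_targets(reports)))  # ['site-A', 'site-C']
```

Deployed systems replace this counting rule with learned models, but the principle is the same: agreement across independent channels raises confidence, isolated signals lower it.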


In effect, machines are beginning to assist, if not entirely replace, human judgement in selecting targets, estimating damage and recommending the most effective course of action.


Major Shift

The shift from human analysis to algorithmic decision-making is stirring unease. Critics warn that delegating lethal decisions to machines risks lowering the threshold for war or amplifying errors at unprecedented speed. American policymakers have wrestled with the ethical implications of such systems, particularly as private technology firms become involved in military programmes.


One such debate has centred on Anthropic, the developer of the large language model Claude. American defence officials have explored whether similar AI systems could assist in military planning and logistics. The discussion highlights a deeper tension: advanced AI is becoming indispensable to national security, yet its autonomous capabilities raise difficult moral and strategic questions.


Nevertheless, the logic of competition is relentless. As great powers integrate AI into their armed forces, others feel compelled to follow.


For India, the rise of AI-driven warfare carries particular urgency. The country faces persistent security challenges along its borders and through proxy conflicts, including militant activity linked to neighbouring Pakistan. In such an environment, technological superiority can offer a decisive advantage.


Globally, India currently ranks around tenth in overall AI capability. The leaders remain the United States and China, followed by technologically advanced states such as Singapore and the United Kingdom. Yet India possesses notable strengths: a vast pool of technical talent and a rapidly expanding digital ecosystem.


The weaknesses are equally clear. The country’s digital infrastructure still trails that of leading AI powers, and it accounts for only a small share of global high-performance computing capacity. Venture capital investment in AI remains heavily concentrated in America and China.


Recognising the stakes, New Delhi has begun pushing forward. Institutions such as NITI Aayog have crafted a national AI strategy emphasising education, data infrastructure and collaboration between universities and industry. Premier institutions, from the Indian Institutes of Technology to the Indian Institute of Science, are expanding programmes in machine learning and data science. The goal is not merely to use foreign technology but to develop indigenous AI systems suited to India’s needs, including tools for healthcare, agriculture and Indian-language computing.


India also sees AI through a broader geopolitical lens. As the world’s largest democracy and a leading voice of the Global South, it hopes to shape the governance of emerging technologies rather than simply adapt to them. At gatherings such as the G20 Delhi Summit and the India AI Impact Summit, officials have emphasised the need for inclusive innovation and ethical safeguards.


Yet the strategic message remains unmistakable. Just as nuclear capability once conferred geopolitical weight, mastery of artificial intelligence may soon define the hierarchy of power in the twenty-first century.


In war as in commerce, the countries that command algorithms and the data that feeds them are likely to command the future.


Digital Shields in Proxy War

Terrorism, like most other forms of conflict, has migrated online. The modern extremist organisation no longer relies solely on clandestine camps in remote mountains; it recruits, radicalises and coordinates through the glow of smartphone screens. For a country such as India, facing persistent proxy warfare in its neighbourhood, artificial intelligence is becoming an increasingly vital defensive tool.


The digital battlefield is vast. Extremist groups have learned to exploit social-media ecosystems to spread propaganda, identify vulnerable recruits and orchestrate attacks with alarming sophistication. These networks blend online persuasion with operational planning. Encrypted messaging, algorithm-driven propaganda and psychological manipulation form part of a digital playbook designed to convert alienated individuals into instruments of violence.


Recent incidents illustrate the pattern. A terrorist attack near Red Fort on November 10, 2025, and another assault weeks later at Bondi Beach revealed how seemingly isolated 'lone wolf' attacks can in fact be carefully orchestrated through online networks. The perpetrators may appear to act alone, but their radicalisation often occurs in the hidden corners of the internet, where extremist narratives circulate unchecked.


The shift to digital radicalisation reflects a broader geopolitical reality. Even as territorial strongholds in Iraq and Syria were dismantled, groups such as Islamic State adapted by strengthening their online operations. Secure messaging platforms and anonymous forums now serve as substitutes for physical training camps, allowing extremist networks to operate across borders with relative ease.


For India, the problem is compounded by the long shadow of cross-border militancy. Security officials frequently accuse Pakistan of sustaining a strategy of proxy warfare through militant intermediaries. Groups such as The Resistance Front and People’s Anti-Fascist Front have been linked to online propaganda campaigns designed to recruit and radicalise youth in the sensitive region of Jammu and Kashmir.


Investigations into attacks such as the Red Fort bombing revealed a striking detail: some of those drawn into extremist plots were highly educated professionals, including doctors. The term “white-collar terrorism” has begun to circulate among investigators, reflecting the uncomfortable reality that digital radicalisation can reach far beyond marginalised communities. Encrypted platforms such as Threema complicate forensic investigations, allowing recruiters to communicate with potential operatives while evading surveillance.


To counter this evolving threat, governments are increasingly turning to artificial intelligence. AI systems can analyse vast streams of digital information to detect early signs of radicalisation or coordinated activity. For intelligence agencies overwhelmed by the sheer volume of online data, such tools offer a crucial advantage.


India has begun integrating these technologies into its security apparatus. At the country's AI Impact Summit, police personnel demonstrated smart glasses equipped with AI-powered facial-recognition capabilities, designed to identify suspects in crowded public spaces. The government has also moved aggressively to curb online propaganda: in 2025 alone, authorities blocked 9,845 internet addresses linked to extremist or terrorist content.


The effort is hardly confined to India. Countries across the Asia-Pacific, including Australia, Malaysia, Singapore and Indonesia, have strengthened legislation and surveillance tools to combat online radicalisation. Regional cooperation has also deepened, reflecting the recognition that digital extremism respects no borders.


Artificial intelligence, of course, is no panacea. Terrorist groups adapt quickly, exploiting new platforms as soon as old ones are closed. Yet the technology offers governments a powerful means of shifting the balance by detecting patterns invisible to the human eye and enabling earlier interventions.


Revolutionising Professions, Boosting Incomes


Artificial intelligence is often portrayed as a technological juggernaut poised to devour jobs. Yet the reality emerging across industries is subtler and more optimistic.

 

Like earlier general-purpose technologies such as electricity or the internet, AI is proving less a destroyer of professions than a multiplier of human productivity. By automating drudgery and refining decision-making, it is quietly raising incomes and creating niches that did not exist a decade ago.

 

Consider an unlikely beneficiary: the humble florist. Algorithms that analyse footfall, seasonal demand and social-media trends now guide flower retailers in managing their most perishable asset: fresh blooms. Smart inventory systems help sellers maintain just enough stock to keep bouquets fresh while avoiding spoilage. Machine-learning tools sift through sales data to forecast seasonal hits, allowing shopkeepers to place their displays strategically. Staff spend less time on guesswork and manual records, and more on customer service. A small shop becomes, in effect, a data-driven enterprise.
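The inventory logic behind such a shop can be sketched in a few lines of Python; the sales figures and the spoilage buffer below are invented for illustration, and a real tool would layer on seasonality and trend signals:

```python
def forecast_next_week(weekly_sales, window=4):
    """Trailing-average forecast of next week's demand; a real system
    would add seasonality, trends and social-media signals."""
    recent = weekly_sales[-window:]
    return sum(recent) / len(recent)

def order_quantity(weekly_sales, spoilage_buffer=0.9):
    """Order slightly under forecast so stock sells out before wilting."""
    return round(forecast_next_week(weekly_sales) * spoilage_buffer)

weekly_rose_sales = [120, 135, 128, 150, 142, 138]  # bunches sold per week
print(order_quantity(weekly_rose_sales))  # 126
```

Ordering just under the forecast is the key design choice for a perishable good: the cost of a wilted bunch exceeds the margin lost on an occasional sell-out.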

 

Such transformations echo a broader historical pattern. The mechanisation of textile mills in the 19th century did not eliminate textile workers; it changed their roles and multiplied output. AI is performing a similar function across today’s knowledge economy.

 

Education illustrates the point well. Adaptive learning platforms such as Duolingo adjust lessons to a student’s pace, improving retention rates dramatically. Teachers increasingly rely on AI-assisted grading tools and virtual tutors, saving hours each week that can be redirected toward mentoring and classroom engagement. Algorithms also flag students at risk of falling behind, enabling earlier interventions that can improve graduation rates.
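At its simplest, adaptive pacing is a feedback loop: raise the difficulty when a student answers correctly, lower it when they stumble. The sketch below is a deliberately minimal illustration of that loop, not how any named platform actually works:

```python
def next_difficulty(current, answered_correctly, lo=1, hi=10):
    """Nudge question difficulty up after a correct answer and down
    after a mistake, clamped to the platform's difficulty scale."""
    step = 1 if answered_correctly else -1
    return max(lo, min(hi, current + step))

level = 5
for correct in [True, True, False]:
    level = next_difficulty(level, correct)
print(level)  # 6
```

Production systems replace the fixed step with statistical models of mastery, but the shape of the loop, observe, adjust, repeat, is the same.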

 

Agriculture, the world’s oldest industry, is undergoing a comparable technological revival. AI-driven drones and sensors enable precision farming: crops are monitored in real time, irrigation is tailored to soil conditions, and yields are optimised while water use falls. Smartphone-based image recognition allows farmers to identify pests instantly and deploy targeted pesticides instead of blanket spraying. In regions where weather shocks can devastate livelihoods, from India’s monsoon belt to America’s Midwest, predictive models offer a valuable early warning. The result is not only greater efficiency but also a measurable rise in farm incomes.
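Tailoring irrigation to soil conditions reduces, in essence, to closing the gap between measured and target moisture. A simplified sketch, in which the target level and absorption rate are placeholder values chosen for illustration:

```python
def irrigation_minutes(soil_moisture_pct, target_pct=35.0, rate_pct_per_min=0.5):
    """Run the drip line only long enough to close the moisture deficit;
    the target and absorption rate here are placeholder values."""
    deficit = max(0.0, target_pct - soil_moisture_pct)
    return deficit / rate_pct_per_min

print(irrigation_minutes(25.0))  # 20.0 -- dry plot gets 20 minutes of water
print(irrigation_minutes(40.0))  # 0.0  -- already wet, skip watering
```

Even this crude rule captures why precision farming saves water: fields are no longer irrigated uniformly, only where and as much as sensors say is needed.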

 

Finance, long an early adopter of computing, has embraced AI with particular enthusiasm. Algorithms now scan thousands of transactions per second to detect fraud in real time, a task once handled by armies of analysts. Retail investors increasingly turn to automated advisory platforms that build portfolios and rebalance them with mathematical discipline. Natural-language tools sift through regulatory documents and compliance reports in hours rather than weeks, sparing banks costly errors.
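Real fraud models are far more elaborate, but the underlying statistical idea can be shown with a toy z-score check; the purchase history and threshold below are invented for illustration:

```python
import statistics

def flag_suspicious(history, new_amount, z_threshold=3.0):
    """Flag a transaction whose amount sits more than z_threshold
    standard deviations from the account's past spending."""
    mean = statistics.fmean(history)
    spread = statistics.stdev(history)
    return abs(new_amount - mean) > z_threshold * spread

past_purchases = [42.0, 55.0, 48.0, 60.0, 51.0, 47.0]
print(flag_suspicious(past_purchases, 49.0))   # False -- typical purchase
print(flag_suspicious(past_purchases, 900.0))  # True  -- outlier
```

The speed advantage the article describes comes from running millions of such per-account checks in parallel, something no human analyst team could match.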

 

Healthcare offers perhaps the most striking examples. AI-assisted imaging can analyse scans and flag potential cancers with remarkable accuracy within seconds, helping doctors make faster diagnoses. Predictive analytics forecast complications before they occur, allowing hospitals to shorten patient stays and allocate resources more efficiently. Robotic-assisted surgery systems such as the da Vinci Surgical System reduce the likelihood of human error while enabling surgeons to perform delicate procedures with unprecedented precision. Even mundane tasks are shifting: chatbots handle routine patient queries, freeing nurses to focus on bedside care.

 

 

Factories, too, are becoming laboratories of algorithmic efficiency. Predictive-maintenance systems analyse vibrations, temperatures and machine performance to anticipate breakdowns before they occur. Collaborative robots (‘cobots’) work alongside humans, boosting output while maintaining safety. Vision systems inspect products on assembly lines, identifying defects far more reliably than the human eye.
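Predictive maintenance, at its core, compares recent sensor readings against an alert limit. A minimal sketch of that comparison; the 7.1 mm/s vibration limit and the readings are assumed values for illustration:

```python
def needs_maintenance(vibration_mm_s, window=5, limit_mm_s=7.1):
    """Raise an alert when the rolling-average vibration velocity
    exceeds the limit; the 7.1 mm/s figure is illustrative only."""
    recent = vibration_mm_s[-window:]
    return sum(recent) / len(recent) > limit_mm_s

healthy_machine = [2.1, 2.3, 2.0, 2.4, 2.2]   # mm/s readings
worn_bearing    = [6.8, 7.5, 8.1, 7.9, 8.4]
print(needs_maintenance(healthy_machine))  # False
print(needs_maintenance(worn_bearing))     # True
```

Averaging over a window rather than reacting to single readings is what keeps such systems from crying wolf at every transient spike.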

 

The legal profession, once thought resistant to automation, is also adapting. AI systems now scan lengthy contracts for hidden risks and precedents in seconds, enabling lawyers to focus on strategy rather than paperwork. Similar tools analyse past rulings to estimate the likelihood of success in litigation. In creative industries, generative systems from Midjourney to GitHub Copilot are accelerating content creation and software development, giving rise to new roles such as ‘prompt engineers.’

 

The cumulative effect is striking. Across manufacturing, predictive maintenance alone can halve equipment downtime; automated quality control dramatically reduces defects; and algorithmic supply-chain management trims inventory costs.

 

History suggests that such technological leaps rarely shrink the total number of jobs. The steam engine, electrification and the computer all displaced certain roles while creating new industries and professions. AI appears set to follow the same path.
