By Bhalchandra Chorghade | 11 August 2025, 1:54 pm

BMC plans parking curbs in narrow lanes

Mumbai: Amid mounting concerns over delayed emergency response in congested neighbourhoods, the Brihanmumbai Municipal Corporation (BMC) is preparing to enforce parking restrictions in several narrow lanes across the city, where indiscriminate on-street parking has increasingly emerged as a critical civic hazard. The move, expected to be implemented soon, is aimed at ensuring unobstructed access for fire engines and ambulances in densely populated pockets where even minor delays can have life-threatening consequences.

“Illegal parking is not merely a compliance issue; it reflects the structural gap between the rapid growth in vehicle ownership and the limited parking infrastructure available in our cities,” said Prashant Sharma, President of NAREDCO Maharashtra. “As urban centres continue to densify, there is a pressing need to integrate well-planned and technologically enabled parking solutions within city planning as well as new real estate developments. Adequate parking infrastructure will play a crucial role in ensuring smoother traffic flow and improving overall urban mobility,” he added.

Highlighting the urgency for scalable interventions, Ashish Majithia, Founder and CEO of Nextkraft Parking Technologies, said, “Mumbai’s parking crisis, especially in older and congested localities, underscores the need for innovative approaches such as automated and multi-level parking systems. Automated or mechanised parking should be installed at every public parking spot, which can significantly increase capacity, reduce dependence on on-street parking and ensure that critical access routes remain unobstructed. Alongside regulatory measures, adopting vertical parking infrastructure will be the key to building safer and more efficient cities.”

The civic concern is particularly acute in older parts of South and Central Mumbai, including Chandanwadi, Girgaon, Kalbadevi, Gaondevi, Tardeo, Mumbai Central, Nagpada, Agripada and Byculla, where over 240 narrow lanes have been identified. Civic assessments indicate that nearly 35 to 40 of these are so constricted that only a single vehicle can pass at a time, making them highly vulnerable during emergencies when every second is critical.

Commercial Zones

The situation is further exacerbated in high-density commercial zones such as Zaveri Bazaar and Kalbadevi, where wholesale trade activity leads to persistent vehicular congestion. Authorities warn that in the event of fires or medical emergencies, blocked access routes could result in severe loss of life and property, underlining the gravity of the issue as more than just a traffic inconvenience.

According to civic officials, proposed measures include introducing odd-even parking systems in select lanes and declaring complete no-parking zones in others, coupled with stricter enforcement against violators. However, residents and business owners have raised concerns over the absence of adequate alternative parking infrastructure, arguing that enforcement without viable substitutes could shift the burden rather than resolve the problem.

As Mumbai continues to grapple with rising vehicle ownership and shrinking urban space, the proposed restrictions bring into sharp focus a deeper civic challenge: balancing immediate regulatory action with long-term infrastructure planning. Experts maintain that unless supported by systematic investments in organised, high-capacity parking solutions, the city’s emergency access bottlenecks may persist despite stricter rules.

When Machines Masquerade as Scientists

Two recent developments in scientific publishing should worry us. In one case, AI-generated inputs were found in peer review, leading to the rejection of papers. In another, an AI-written paper successfully passed peer review. Together, these highlight a deeper shift in how science is now created and assessed.


For the first time, AI is not just aiding science on the fringes. It is now involved in the entire scientific process, from writing to evaluation.


There is a fundamental question we need to ask now. For decades, we have relied on printed words, especially in scientific journals. That reliance assumes that what appears in print has undergone careful human review. Is that assumption still valid?


System Under Strain


Science has always relied on a human-centered process. A researcher identifies a problem, designs an approach, analyzes results, and presents findings. Peer reviewers assess the work using their knowledge and judgment. The strength of this system comes from human thinking, interpretation, and accountability.


That foundation is now shifting. AI tools can generate structured papers, summarize literature, and build convincing arguments. Meanwhile, reviewers are starting to rely on AI to read and evaluate manuscripts, often due to time and volume pressures. There are even reports of hidden instructions embedded in papers to sway AI-assisted reviews.


Now, both writing and reviewing are influenced by the same tools. Independent judgment, inevitably, diminishes.


The scale of science was already growing. Global publication output now exceeds roughly 2.5 million papers annually. Peer review is strained, with limited time and increasing expectations. AI shifts this balance significantly. Tasks that once took weeks can now be completed in hours, causing a sharp increase in submissions.


This shift is not only about speed. It is reshaping the system’s behaviour. No reviewer today has the time to read everything, and AI is widening that gap. It is compressing the scientific process itself. Steps that once demanded depth are shortened and sometimes bypassed.


This opens the door to a kind of ‘synthetic science.’ Here, the entire cycle of research, from conceptualisation and experimental design to methodology, results, discussion and conclusions, unfolds entirely within the digital world. Data are simulated, experiments are virtual, and interpretations may never be grounded in direct observation. The concern is not the use of digital tools, but the risk that such fully synthetic outputs begin to resemble validated science and are accepted without rigorous real-world verification.


This leads to what can be called the ‘flooding effect in science’, where production outpaces the system’s capacity to evaluate carefully.


The most important shift is from truth to plausibility. AI-generated papers are well written, logically structured and technically convincing. They resemble good science. However, evidence shows that such content can include incorrect references, unsupported claims, or shallow reasoning that is not immediately visible.


Peer review was designed to examine reasoning and evidence. It was not designed to detect machine-generated coherence.



At the centre of this issue is a simple constraint. Human attention is finite, while AI can generate knowledge at scale. This creates the ‘attention bottleneck in science’, where the limiting factor is evaluation rather than production.


As this bottleneck tightens, strong work can be missed while well-presented but weak work gains visibility, potentially shaping research directions in unintended ways.


If this continues, the consequences will extend beyond academia. Science underpins medicine, engineering, policy, and environmental decisions. If trust in scientific literature weakens, decision-making becomes uncertain. Regulators hesitate, industries second-guess evidence, and public confidence erodes. In such a situation, identifying reliable knowledge becomes extremely difficult.


Loss of trust in science is far more than a mere technical issue; it is a societal risk. It is therefore necessary to restate a basic principle. Science is not a text production process. It is a judgment process. If this distinction weakens, output may rise, but understanding will not.


Practical Response

This situation requires a clear and practical response. Journals should mandate explicit disclosure of AI use in writing and review so that readers are aware of what they are evaluating. Submissions should also go through automated citation and data checks to identify fabricated or inconsistent references before peer review.


Peer review needs to be strengthened. Reviewers should confirm that their evaluation reflects independent judgment, not unverified AI output. A simple “human-reviewed and verified” statement can restore accountability.


Academic evaluation systems should shift focus from counting publications to recognizing originality, depth, and reproducibility. Dedicated space for replication and validation studies must be established, as these are crucial for credibility but are currently undervalued.


Researcher training must also evolve. Merely using AI is not enough. Scientists need to learn to question, verify, and challenge AI-generated outputs. The aim should be to use AI to boost thinking, not replace it. Journals might even consider an “AI involvement score” to show how much machine help was used in a paper.


The implications may extend beyond publishing. As AI systems merge with robotics and humanoid platforms, knowledge could increasingly lead to automated actions in healthcare, manufacturing, and environmental systems. If the foundational science is weak, automated decisions might amplify errors on a large scale. The risk is no longer limited to wrong conclusions; it now includes wrong actions.


Science has adapted to change many times. However, this shift is different because it influences how knowledge is judged, not just how it is created. We are entering a phase where knowledge can be produced at unprecedented speed, but understanding may not keep up.


So, we come back to the fundamental question. If printed words can now be created, reviewed, and accepted with minimal human involvement, what exactly are we trusting?


Machines can produce text quickly and accurately. However, meaning, judgment, and responsibility stay firmly with humans. That distinction needs to be maintained.


If we preserve it, AI becomes a powerful ally in advancing science. If we blur it, the consequences will not look like failure but like gradual erosion. Everything may still look correct, yet something essential will be missing.


(The writer is an ANRF Prime Minister Professor at COEP Technological University, Pune; former Director of the Agharkar Research Institute, Pune; and former Visiting Professor at IIT Bombay. Views personal.)
