When Machines Masquerade as Scientists
- Dr. Kishore Paknikar


Two recent developments in scientific publishing should worry us. In one case, AI-generated inputs were found in peer review, leading to the rejection of papers. In another, an AI-written paper successfully passed peer review. Together, these highlight a deeper shift in how science is now created and assessed.
For the first time, AI is not just aiding science on the fringes. It is now involved in the entire scientific process, from writing to evaluation.
There is a fundamental question we need to ask now. For decades, we have relied on printed words, especially in scientific journals. That reliance assumes that what appears in print has undergone careful human review. Is that assumption still valid?
System Under Strain
Science has always relied on a human-centered process. A researcher identifies a problem, designs an approach, analyzes results, and presents findings. Peer reviewers assess the work using their knowledge and judgment. The strength of this system comes from human thinking, interpretation, and accountability.
That foundation is now shifting. AI tools can generate structured papers, summarize literature, and build convincing arguments. Meanwhile, reviewers are starting to rely on AI to read and evaluate manuscripts, often due to time and volume pressures. There are even reports of hidden instructions embedded in papers to sway AI-assisted reviews.
Now both writing and reviewing are influenced by the same tools. Independent judgment inevitably diminishes.
The scale of science was already growing. Global publication output now exceeds roughly 2.5 million papers annually. Peer review is strained, with limited time and rising expectations. AI shifts this balance significantly. Tasks that once took weeks can now be completed in hours, driving a sharp increase in submissions.
This shift is not only about speed. It is reshaping the system’s behaviour. No reviewer today has the time to read everything, and AI is widening that gap. It is compressing the scientific process itself. Steps that once demanded depth are shortened and sometimes bypassed.
This opens the door to a kind of ‘synthetic science.’ Here, the entire cycle of research, from conceptualisation and experimental design to methodology, results, discussion and conclusions, unfolds entirely within the digital world. Data are simulated, experiments are virtual, and interpretations may never be grounded in direct observation. The concern is not the use of digital tools, but the risk that such fully synthetic outputs begin to resemble validated science and are accepted without rigorous real-world verification.
This leads to what can be called the ‘flooding effect’ in science, where production outpaces the system’s capacity to evaluate carefully.
The most important shift is from truth to plausibility. AI-generated papers are well written, logically structured and technically convincing. They resemble good science. However, evidence shows that such content can include incorrect references, unsupported claims, or shallow reasoning that is not immediately visible.
Peer review was designed to examine reasoning and evidence. It was not designed to detect machine-generated coherence.
At the centre of this issue is a simple constraint. Human attention is finite, while AI can generate knowledge at scale. This creates the ‘attention bottleneck’ in science, where the limiting factor is evaluation rather than production.
As this bottleneck tightens, strong work can be missed while well-presented but weak work gains visibility, potentially shaping research directions in unintended ways.
If this continues, the consequences will extend beyond academia. Science underpins medicine, engineering, policy, and environmental decisions. If trust in scientific literature weakens, decision-making becomes uncertain. Regulators hesitate, industries second-guess evidence, and public confidence erodes. In such a situation, identifying reliable knowledge becomes extremely difficult.
Loss of trust in science is more than a technical issue; it is a societal risk. It is therefore necessary to restate a basic principle: science is not a text-production process. It is a judgment process. If this distinction weakens, output may rise, but understanding will not.
Practical Response
This situation requires a clear and practical response. Journals should mandate explicit disclosure of AI use in writing and review so that readers are aware of what they are evaluating. Submissions should also go through automated citation and data checks to identify fabricated or inconsistent references before peer review.
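As an illustration, here is a minimal sketch of what such a citation check might look like, assuming each reference has already been parsed into a DOI and a cited title. The Crossref REST API queried below is real; the input format, the fuzzy-match threshold, and the example reference are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch of an automated citation check, assuming references
# are already parsed into (DOI, cited title) pairs. Crossref's REST API
# is real; the 0.8 similarity threshold is an illustrative assumption.
import requests
from difflib import SequenceMatcher

CROSSREF = "https://api.crossref.org/works/"

def check_reference(doi: str, cited_title: str) -> str:
    """Flag a reference as OK, MISMATCH, or NOT_FOUND."""
    resp = requests.get(CROSSREF + doi, timeout=10)
    if resp.status_code != 200:
        return "NOT_FOUND"  # DOI does not resolve: possible fabrication
    titles = resp.json()["message"].get("title", [])
    registered = titles[0] if titles else ""
    # Fuzzy-compare the cited title against the registered title
    similarity = SequenceMatcher(
        None, cited_title.lower(), registered.lower()
    ).ratio()
    return "OK" if similarity > 0.8 else "MISMATCH"

# Placeholder DOI and title for illustration only
print(check_reference("10.1000/example-doi", "An example cited title"))
```

A check like this can only catch identifiers that do not resolve or do not match their citation. It cannot judge whether a genuine reference actually supports the claim attached to it; that remains a human task.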
Peer review needs to be strengthened. Reviewers should confirm that their evaluation reflects independent judgment, not unverified AI output. A simple “human-reviewed and verified” statement can restore accountability.
Academic evaluation systems should shift focus from counting publications to recognizing originality, depth, and reproducibility. Dedicated space for replication and validation studies must be established, as these are crucial for credibility but are currently undervalued.
Researcher training must also evolve. Merely using AI is not enough. Scientists need to learn to question, verify, and challenge AI-generated outputs. The aim should be to use AI to boost thinking, not replace it. Journals might even consider an “AI involvement score” to show how much machine help was used in a paper.
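The “AI involvement score” is proposed here only as an idea. Purely for illustration, one hypothetical way to compute such a score from author declarations is sketched below; the stages, weights, and scale are all assumptions, not an established standard.

```python
# A purely illustrative sketch of one way an "AI involvement score"
# could be computed from author declarations. The stages, weights,
# and 0-to-1 scale are assumptions made for this example.
STAGE_WEIGHTS = {
    "literature_search": 0.10,
    "drafting":          0.30,
    "data_analysis":     0.30,
    "figures":           0.10,
    "editing":           0.20,
}

def ai_involvement_score(declared: dict[str, float]) -> float:
    """Weighted sum of per-stage AI use, each declared value in [0, 1]."""
    return round(sum(STAGE_WEIGHTS[s] * declared.get(s, 0.0)
                     for s in STAGE_WEIGHTS), 2)

# Hypothetical declaration: heavy AI drafting, light AI editing
print(ai_involvement_score({"drafting": 0.8, "editing": 0.3}))  # 0.3
```

Any real scheme would also have to define how declarations are verified; a self-reported number is only as trustworthy as the disclosure culture around it.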
The implications may extend beyond publishing. As AI systems merge with robotics and humanoid platforms, knowledge could increasingly lead to automated actions in healthcare, manufacturing, and environmental systems. If the foundational science is weak, automated decisions might amplify errors on a large scale. The risk is no longer limited to wrong conclusions; it now includes wrong actions.
Science has adapted to change many times. However, this shift is different because it influences how knowledge is judged, not just how it is created. We are entering a phase where knowledge can be produced at unprecedented speed, but understanding may not keep up.
So, we come back to the fundamental question. If printed words can now be created, reviewed, and accepted with minimal human involvement, what exactly are we trusting?
Machines can produce text quickly and accurately. However, meaning, judgment, and responsibility stay firmly with humans. That distinction needs to be maintained.
If we preserve it, AI becomes a powerful ally in advancing science. If we blur it, the consequences will not arrive as a visible failure but as a gradual erosion. Everything may still look correct, yet something essential will be missing.
(The writer is an ANRF Prime Minister Professor at COEP Technological University, Pune; former Director of the Agharkar Research Institute, Pune; and former Visiting Professor at IIT Bombay. Views personal).




