FPJ 97th Anniversary: As AI Rewrites The Rules Of Publishing, Can Trust And Originality Survive The Shift?
Exploring how AI is reshaping publishing, raising critical questions on trust, originality, and control

The publishing industry is undergoing a massive transformation, and Artificial Intelligence, or AI, plays a significant role in this change. From educational publishing to scientific journals, AI is becoming a major force in content creation. It is time we discussed whether it is a boon or a bane.
Usage
Dr Vaishaly Bharambe, founder of VB Anatomy Academy and a medical educator, has worked on medical journals. She confirms the widespread use of AI in publishing, especially in journals. “For general articles, people often ask AI to directly generate content or summarise existing published work. In research publishing, some submit their data for AI to craft an article, while others even rely on AI to create synthetic data. This has led to reduced personal involvement, with more influence from AI-generated patterns and ideas.”
Echoing Bharambe’s observations, Monica Malhotra Kandhari, Managing Director of MBD Group, discusses how AI is being used to systematise and support various stages of the publishing process, from language editing to peer-review assistance and content suggestions. “AI products may streamline editorial processes, propose suitable references, and even create initial article drafts or abstracts, potentially accelerating the publication timeframe. To researchers and publishers, they also provide the pragmatic benefits of efficiency and scalability. At the same time, though, this fast-paced adoption of AI has raised fundamental questions regarding content ownership and ethical usage. A few global and Indian publishers have initiated legal proceedings against AI firms, alleging that their copyrighted material—especially journalistic and editorial content—was utilised without authorisation to train massive AI models.”
She feels that as AI’s influence grows in the publishing environment, especially in journals, the industry must move into new areas with caution, keeping in mind that innovation cannot come at the cost of rights, responsibilities and editorial purity.
Arjun Sinha, a partner at the law firm AP & Partner, notes that AI has become part of almost every stage of the research and publishing workflow. But he wants researchers to verify that the models they use are suitable for their work. “Secondly, use of AI cannot absolve researchers of their duty to oversee the final output. The buck for hallucination should, at the end of the day, stop with the researcher, unless the AI tool itself makes claims and operates in a manner contrary to the researcher’s expectations. Researchers also need to be mindful that AI tools can be a source of data leaks; unless a tool is specifically set up for confidentiality, sensitive raw data they input into the models can end up being disclosed.”
Identification
Monica calls it tricky to identify AI influence in content, but she points to some telltale clues. “Keep an eye out for repeating phrases, vague or generic statements, and overly rigid structures, as these are common giveaways. Another clear sign is when incorrect or made-up references are included—AI tools sometimes generate citations that aren’t real. Even if the facts are right, AI writing might simplify things too much or lack a clear, unique viewpoint.”
She does mention that even as AI-detection tools get better, it remains important to rely on a good editor’s intuition to spot such writing.
According to Bharambe, such content tends to have an unnaturally smooth flow, excessive use of analogies, and overly ornate English. It is often superficial or, conversely, overloaded with information. “It may also lack depth, original insights, or a unique perspective, instead offering generalised responses—all of which are signs that the content might be AI-generated.”
Boon or bane?
Sinha feels AI has slashed production time and cost, but eroded the value of content once it has been used to train models. “That tension explains the advocacy and the wave of copyright suits in courts (including in India). However, at the end of the day, research and review of materials is a recognised fair use; the only difference is that it is today being done by machines at scale. Long term, this will require a fundamental change to how research is paid for and incentivised, either through changed business models or by amending laws, resulting in new licensing schemes or collective bargaining.”
He adds, “For authors, AI can be extremely helpful. However, at the end of the day AI cannot substitute for the research or the researcher, and over-reliance is at your own cost. The industry will probably also need to tackle new forms of bad behaviour, from auto-generated materials used to boost citations, to researchers hiding prompts in text to get favourable AI-generated reviews. But that’s the case with all new technology; it results in new behaviour, both good and bad.”
Bharambe thinks removing the language barrier is a positive thing. “It helps authors express ideas more clearly and fluently. It allows for easy editing, summarisation, and content generation, which improves efficiency and accessibility. However, with AI offering quick, polished responses, people may stop deeply thinking or reflecting on the topic they’re writing about. This is especially problematic in research publishing, where AI’s role becomes more questionable. Since AI is limited by its training data, it can reinforce existing thought patterns rather than foster new insights.”
She further adds that journals face the risk of publishing unoriginal or shallow content, and publishers will find it tough to maintain academic rigour and originality. Overall, she considers AI a useful tool, but unchecked use in research can erode the essence of scholarly publishing.
Monica feels AI has the potential to be either a highly beneficial tool or a concerning threat; it all depends on how it is used. “For publishers and journal editors, the advantages of AI are significant—the tools will streamline workflows, improve quality control, and provide faster manuscript handling. The trouble will arise when researchers misuse AI, for example, submitting AI-generated research without indicating AI usage, which can ultimately undermine the credibility of the whole system.”
She feels that over time, AI will change the way we work in the field since it might take over simple jobs. “But people will still be needed for key tasks—like checking each other’s work, thinking and making sure everything’s done right. The tricky part is figuring out how to use AI’s strengths without losing what makes scholarly publishing special—its fresh ideas and human touch.”
Dealing with AI
Bharambe says proactive measures are essential to safeguard the integrity of the publishing industry, especially academic journals. “Develop reliable AI-detection tools to identify content that is fully or predominantly AI-generated. Establish strict editorial policies. If an article is found to be falsely declared or primarily AI-generated, the author should face serious consequences, such as retraction or blacklisting.”
She further suggests, “Uphold the value of original thought. Journals must prioritise depth, originality, and critical thinking over fluency and surface polish. Guard against devaluation of scholarly publishing. If AI-written content becomes the norm, the credibility and respect for published research may collapse, turning the industry into a machine-driven echo chamber.”
Sinha doesn’t think the human role gets overridden; rather, it becomes more focused. “At the end of the day, humans should be responsible for the output and for cross-checking it. And we have to be mindful of finding ways to keep researchers incentivised to publish, because at the end of the day we will always need human inventiveness to keep pushing for newer advances in research.”
Monica talks about the clear policies and oversight required to manage AI responsibly in publishing. “Disclosure should be normal practice when using AI tools to assist with writing or research. At the same time, it is essential to keep editorial and peer-review decisions in human hands; machines can assist but not replace these essential decisions. It’s just as crucial to build awareness and provide training throughout the publishing world. Editors, reviewers, and authors all need to grasp both what AI can do and its limitations. This helps make sure the technology is used responsibly, while still keeping the creativity, context, and critical thinking that only humans really bring to research and publishing.”
Technologies like AI can mimic humans, especially in publishing. But only time will tell whether they can also write the way humans think and feel.