INTRODUCTION
Peer review remains the cornerstone of scholarly publishing, yet it is also one of the most contested and evolving processes in academia. For decades, it has served as the principal quality-control mechanism, ensuring that published research meets rigorous scholarly standards. However, its implementation varies significantly across regions, disciplines, and journal models, leading to long-standing debates over transparency, bias, reviewer fatigue, and inequitable access to publishing opportunities.
In recent years, artificial intelligence (AI) has moved from a peripheral aid to a significant part of editorial workflows. Once limited to basic keyword-based searches, AI now supports plagiarism detection, reviewer matching, language refinement, and even preliminary manuscript triage.[1–3] For many, these developments mark a turning point at which peer review could become faster, more inclusive, and fairer.
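To make one of these tasks concrete, the minimal sketch below ranks candidate reviewers by the textual similarity between a manuscript abstract and short descriptions of each reviewer's expertise, using TF-IDF vectors and cosine similarity. This is a common baseline approach rather than a description of any specific editorial platform, and all names, texts, and the use of scikit-learn are illustrative assumptions.

```python
# Illustrative sketch of AI-assisted reviewer matching: rank candidate
# reviewers by textual similarity between a manuscript abstract and each
# reviewer's profile. All names and texts are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

manuscript_abstract = (
    "We examine algorithmic bias in automated manuscript triage "
    "across multilingual journal submissions."
)

# In practice these profiles would come from a large expert database.
reviewer_profiles = {
    "Reviewer A": "fairness and bias in machine learning systems",
    "Reviewer B": "clinical trial design and biostatistics",
    "Reviewer C": "natural language processing for low-resource languages",
}

# Vectorize the abstract together with the profiles so they share one
# vocabulary, then score each reviewer against the abstract.
texts = [manuscript_abstract] + list(reviewer_profiles.values())
tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()

# Editors would inspect this ranked list, not act on it automatically.
for name, score in sorted(zip(reviewer_profiles, scores),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

Even in this toy form, the example shows why human oversight matters: the scores reflect only vocabulary overlap, not actual competence, availability, or conflicts of interest.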
Yet the promise of AI is accompanied by important concerns. As argued in Why the Scholarly Publishing Community Has Failed to Contain Predatory Journals,[4] technological adoption without robust policy frameworks can increase inequities rather than reduce them. In the context of peer review, this risk is particularly relevant for journals in underrepresented research regions where access to advanced tools is limited, training is inconsistent, and governance infrastructure may be underdeveloped.
GLOBAL STANDARDS AND LOCAL REALITIES
International frameworks such as those from the Committee on Publication Ethics (COPE),[5] the Asian Council of Science Editors (ACSE),[6] and the European Association of Science Editors (EASE)[7] outline principles of integrity, confidentiality, fairness, and accountability. These organizations also emphasize transparency, ethical reviewer selection, and appropriate training as essential to quality peer review.
However, applying these principles in practice can be difficult. Many smaller journals, especially in low- and middle-income countries, operate without the financial resources to license advanced AI platforms. Others may have access to tools but lack the editorial training needed to interpret AI-generated recommendations critically. In some contexts, cultural norms such as collaborative or consensus-based reviewing differ from the individualized, double-blind peer review common in Western publishing.[8]
This diversity means that while global guidelines offer a shared ethical framework, operational realities vary widely. Addressing these differences requires policies that are sensitive to local publishing cultures while remaining anchored in internationally recognized best practices.
OPPORTUNITIES AND RISKS: A COMPARATIVE VIEW
While AI's potential in peer review is clear, so are the associated risks. Table 1 outlines the main opportunities and challenges, with examples from the literature and current editorial practices; a brief code sketch of one such automated check follows the table.
Table 1. Opportunities and risks of AI in peer review.

| Opportunities | Risks |
| --- | --- |
| Automates routine checks such as plagiarism, statistical consistency, and reference formatting[1,9] | Algorithmic bias if trained on skewed or non-representative datasets[2,10] |
| Expands reviewer pools by matching manuscripts with global expert databases | Lack of transparency in decision-making with limited explainability |
| Provides language support for non-native English authors, improving clarity and accessibility | Over-reliance on automation could reduce human critical judgment |
| Reduces reviewer fatigue by streamlining repetitive tasks | Inconsistent policy adoption may create uneven ethical standards between journals[4] |
| Improves efficiency in editorial decision-making[2] | Data privacy and confidentiality concerns when manuscripts are processed by third-party AI |
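As a concrete instance of the routine checks listed in Table 1, the sketch below recomputes the two-tailed p-value implied by a reported t statistic and flags discrepancies, in the spirit of consistency checkers such as statcheck. The reported figures are hypothetical, and the availability of SciPy is an assumption.

```python
# Minimal sketch of an automated statistical-consistency check:
# recompute the two-tailed p-value implied by a reported t statistic
# and flag mismatches for a human editor. Numbers are hypothetical.
from scipy import stats

def t_test_p_is_consistent(t: float, df: int, reported_p: float) -> bool:
    """Check a reported p-value against one recomputed from t and df."""
    recomputed_p = 2 * stats.t.sf(abs(t), df)  # two-tailed p-value
    # Compare at two decimal places, the precision typically reported.
    return round(recomputed_p, 2) == round(reported_p, 2)

# t(28) = 2.10 implies p of roughly 0.045, so p = .04 is consistent ...
print(t_test_p_is_consistent(t=2.10, df=28, reported_p=0.04))  # True
# ... while p = .01 is not, and would be flagged for human review.
print(t_test_p_is_consistent(t=2.10, df=28, reported_p=0.01))  # False
```

Note that the tool only flags an inconsistency; deciding whether it reflects a typographical error or a substantive problem remains an editorial judgment.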
THE RISKS OF A UNIFORM APPROACH
A common mistake in AI adoption is assuming that tools and policies designed for well-resourced, high-volume journals can be applied without adaptation in low-resource contexts. Without modification, such tools may perpetuate disparities by favoring English-language outputs, privileging researchers with access to high-quality institutional data, and reinforcing established academic networks.
Some AI platforms are proprietary and require significant subscription fees. While such costs may be manageable for large publishers, they can be prohibitive for regional or society-owned journals. This creates a risk of a two-tiered publishing environment in which elite journals operate with cutting-edge tools while smaller journals rely solely on manual processes, widening the innovation gap.
RESPONSIBLE ADOPTION: POLICY BEFORE TECHNOLOGY
Responsible integration of AI into peer review should begin with clear policy frameworks. Journals need to define acceptable uses, disclosure requirements, and the roles of human editors in interpreting AI-generated recommendations. ACSE's recent white paper on AI in peer review[5] emphasizes that without transparent policy, the benefits of AI will be unevenly distributed and ethically uncertain.
Capacity building is equally important. Editors, reviewers, and authors must receive training not only in the use of AI tools but also in understanding their limitations and potential biases. Such training can address issues of fairness, data privacy, and the scope of automation. Workshops, online modules, and collaborative learning can help close the knowledge gap, particularly in regions where technical expertise is still emerging.
INCLUSIVITY AND DIVERSITY IN AI DEVELOPMENT
For AI to serve as a genuine equalizer in peer review, its underlying datasets must reflect the diversity of the global research community. Tools trained mainly on English-language or Western-centric datasets risk marginalizing research from other linguistic and cultural contexts.[8]
Developers have a responsibility to ensure that AI systems are inclusive from the outset. This involves incorporating multilingual datasets, engaging with regional editorial bodies such as ACSE, and testing tools across different disciplines and journal models. Inclusive development can help prevent AI from becoming another source of exclusion in scholarly publishing.
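What such testing might look like in practice is sketched below: a simple audit of the language and regional composition of a hypothetical training corpus, carried out before any model is built. The metadata records and the 50% dominance threshold are illustrative assumptions, not a standard.

```python
# Illustrative audit of how well a (hypothetical) training corpus
# represents different languages and regions. Real audits would use far
# richer metadata, but the principle is the same: measure representation
# before training, not after deployment.
from collections import Counter

# Each record is (language, region) metadata for one training document.
corpus_metadata = [
    ("en", "North America"), ("en", "Europe"), ("en", "Europe"),
    ("es", "Latin America"), ("zh", "East Asia"), ("en", "North America"),
    ("ur", "South Asia"), ("en", "Europe"), ("fr", "Africa"),
]

language_counts = Counter(lang for lang, _ in corpus_metadata)
total = len(corpus_metadata)
for lang, count in language_counts.most_common():
    share = count / total
    flag = "  <-- dominant" if share > 0.5 else ""
    print(f"{lang}: {share:.0%}{flag}")
```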
KEEPING HUMAN OVERSIGHT CENTRAL
Regardless of technological advances, human judgment must remain central to peer review. Editors provide contextual understanding, ethical reasoning, and subject expertise that algorithms cannot replicate. While AI can assist in identifying issues or suggesting reviewers, the final decision to accept, revise, or reject a manuscript must remain with human editors.
Clear communication with authors and reviewers about the use of AI in the review process is also essential. Journal policies and submission guidelines should specify when and how AI tools are used, reinforcing transparency and trust.
CONCLUSION
In a time when scholarly publishing is under close scrutiny, trust is the foundation that sustains its credibility. AI can support that trust if adopted with transparency, inclusivity, and adherence to established ethical standards. Guided by organizations such as COPE, ACSE, and EASE, which foster both international dialogue and regional capacity building, the scholarly community can align technological innovation with ethical responsibility.
The future of peer review will depend not only on the technologies introduced but on the values preserved. With foresight, inclusivity, and a commitment to fairness, AI can contribute to making peer review more efficient, equitable, and globally representative.
DECLARATIONS
Acknowledgement
None.
Author contributions
Sayab M contributed solely to the editorial.
Source of funding
This research received no external funding.
Ethical approval
Not applicable.
Informed consent
Not applicable.
Conflict of interest
The author declares no competing interests.
Use of large language models, AI and machine learning tools
OpenAI's ChatGPT (paid subscription, GPT-4 model, accessed in September 2025) was used only to support language editing and clarity improvements in preparing this manuscript. All substantive intellectual content, analysis, interpretation, and final text were conceived, written, and approved by the author, who takes full responsibility for the content.
Data availability statement
No additional data.
REFERENCES
1. Resnik DB, Elliott KC. The ethical challenges of using artificial intelligence in peer review. Sci Eng Ethics. 2023;29(2):19. DOI: 10.1007/s11948-023-00387-1
2. Tennant JP, Ross-Hellauer T. AI-assisted peer review: Opportunities, risks, and the road ahead. Learn Publ. 2024;37(2):160-174. DOI: 10.1002/leap.1623
3. Horbach SPJM. Pandemic publishing: Medical journals strongly speed up their publication process for COVID-19. Quant Sci Stud. 2020;1(3):1056-1067. DOI: 10.1162/qss_a_00076
4. Sayab M. Why the scholarly publishing community has failed to contain predatory journals: An institutional and systemic analysis. Trends in Scholarly Publishing. 2025;4(1):59-65. DOI: 10.21124/tsp.2025.59.65
5. Fox CW, Albert AYK, Vines TH. Recruitment of reviewers is becoming harder at some journals: A test of the influence of reviewer fatigue. Res Integr Peer Rev. 2017;2:3. DOI: 10.1186/s41073-017-0027-x
6. Slone C. Peer Review Week 2025: Rethinking Peer Review in the AI Era. Accessed September 10, 2025. https://editorscafe.org/details.php?id=73
7. COPE Council. COPE position - Authorship and AI - English. Accessed September 10, 2025. https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools
8. Lee CJ, Sugimoto CR, Zhang G, Cronin B. Bias in peer review. J Am Soc Inf Sci Technol. 2013;64(1):2-17. DOI: 10.1002/asi.22784
9. Gasparyan AY, Yessirkepov M, Diyanova SN, Kitas GD. Publishing ethics and predatory practices: A dilemma for all stakeholders of science communication. J Korean Med Sci. 2015;30(8):1010-1016. DOI: 10.3346/jkms.2015.30.8.1010
10. Nature Editorial. How AI could change academic publishing. Nature. 2023;620:116-117. DOI: 10.1038/d41586-023-02678-9