EDITORIAL RESEARCH

Impact of generative artificial intelligence on scientific paper writing and regulatory pathways


Youdong Wang, Jiang Chen*, Yuxin Wu, Xi Yu, Lixia He

Journal Center, China Medical University, Shenyang 110001, Liaoning Province, China


*Corresponding Author:

Jiang Chen, Journal Center, China Medical University, No. 2 Beier Road, Shenyang 110001, Liaoning Province, China. Email: chenjiang@cmu.edu.cn; https://orcid.org/0009-0004-6667-8321


Received: 17 June 2025; Revised: 29 June 2025; Accepted: 1 July 2025


ABSTRACT

Generative artificial intelligence (AI), a key branch of natural language processing, is transforming scientific paper writing. This paper examines its technical features, applications, benefits, and risks and suggests regulatory measures. Generative AI enhances writing efficiency but poses challenges to academic integrity. Future development requires a balance among technical, institutional, and ethical approaches.

Key words: generative artificial intelligence, scientific paper writing, academic integrity, intellectual property rights, research ethics

INTRODUCTION

The rapid advancement of artificial intelligence (AI) has revolutionized various sectors, and scientific paper writing is no exception. Generative AI, a subset of AI focused on generating human-like text, has emerged as a powerful tool in this domain. It offers significant advantages such as rapid literature screening, optimized experimental design, and quick generation of coherent texts. However, it also introduces challenges related to academic integrity, intellectual property (IP), and research ethics.[1-4] This paper explores these impacts and proposes regulatory pathways to ensure responsible use.

TECHNICAL CHARACTERISTICS AND APPLICATION SCENARIOS OF GENERATIVE AI

Technical characteristics

Generative AI models, such as GPT-3 and its successors, are trained on massive datasets that include books, articles, and web content.[5,6] This extensive training allows the models to generate text that is not only coherent but also contextually relevant. The models use advanced algorithms to predict the next word in a sequence based on the context provided, which enables them to produce high-quality text that can be tailored to specific needs. This adaptability is particularly useful in scientific writing, where the ability to generate text that aligns with specific research contexts can save significant time and effort.
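To illustrate this mechanism concretely, the short sketch below scores the most probable next tokens for a prompt using an open model (GPT-2). It is a minimal illustration of next-word prediction, not a depiction of any specific commercial system, and it assumes the Hugging Face transformers and torch packages are installed.

```python
# A minimal sketch of next-token prediction with an open model (GPT-2),
# assuming the Hugging Face transformers and torch packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "In this study, we investigated"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the next token, conditioned on the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: p={prob.item():.3f}")
```

Sampling repeatedly from this conditional distribution is, at heart, how such systems assemble fluent passages one token at a time.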

Application scenarios in scientific paper writing

In the context of scientific paper writing, generative AI can be used to create a structured outline of a paper, including sections such as the Introduction, Literature Review, Methodology, Results, and Discussion. By inputting key research data and viewpoints, AI can generate a draft that serves as a foundation for further refinement.[7-9] This not only accelerates the writing process but also ensures that the initial draft is well organized and logically structured. For non-native English speakers, AI can help in refining language usage, ensuring that the text is clear and free of grammatical errors. This can significantly improve the chances of acceptance by peer-reviewed journals. Additionally, tools such as Codex can generate code for statistical analysis, which can be particularly beneficial in fields where data analysis is complex and time-consuming. By automating this process, researchers can focus more on the interpretation of the results and less on the mechanics of data analysis.
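As a hedged illustration of this outlining workflow, the sketch below sends a drafting prompt to a chat-completion API. The model name, prompt, and study topic are illustrative assumptions rather than recommendations, and any such use would need to be disclosed under the policies discussed later in this paper.

```python
# A hypothetical sketch of prompting a chat-completion API to draft an outline.
# The model name and prompt are assumptions; an API key must be set in the
# environment (OPENAI_API_KEY), and real use should be disclosed to journals.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever is available
    messages=[
        {"role": "system",
         "content": "You draft structured outlines for scientific papers."},
        {"role": "user",
         "content": "Draft an outline (Introduction, Literature Review, "
                    "Methodology, Results, Discussion) for a study on "
                    "medication adherence in rural clinics."},
    ],
)
print(response.choices[0].message.content)
```

The output is a starting skeleton only; the researcher remains responsible for the substance of every section.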

POSITIVE IMPACTS OF GENERATIVE AI

Enhancing research efficiency

The efficiency gains from using generative AI in scientific writing can be substantial.[10,11] Researchers often spend a significant amount of time organizing literature, drafting initial versions of papers, and refining language. Generative AI can automate many of these tasks, allowing researchers to focus on the core aspects of their research. This can lead to faster publication times and quicker dissemination of important findings. Tools such as Research AI can extract relevant literature from vast academic databases and automatically generate review paragraphs. For example, a biomedical researcher can use such a tool to quickly screen 400 articles related to the research topic and generate a preliminary literature review, reducing work that traditionally takes several weeks to just a few days.
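The screening step itself can be approximated with standard retrieval techniques. The following toy sketch ranks placeholder abstracts against a research topic using TF-IDF cosine similarity; it is a rough stand-in for what dedicated tools do, assuming scikit-learn is available, and the abstracts and query are invented for illustration.

```python
# A toy sketch of literature screening: ranking candidate abstracts against a
# research topic with TF-IDF cosine similarity (scikit-learn assumed). The
# abstracts and query are placeholders, not a real dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Statin adherence among rural patients with hyperlipidemia.",
    "Deep learning approaches to protein structure prediction.",
    "Barriers to medication adherence in primary care settings.",
]
query = "medication adherence in rural primary care"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(abstracts + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Present the most relevant candidates first; a human still reviews them all.
for score, abstract in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")
```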

For non-native English speakers, the language support provided by generative AI can be transformative.[12-14] Writing in a second language can be challenging, and grammatical errors or unclear phrasing can hinder the acceptance of papers by peer-reviewed journals. Generative AI can help in overcoming these barriers by providing accurate and contextually appropriate text.[15] This not only improves the chances of acceptance but also boosts the confidence of researchers, encouraging them to participate more actively in the global scientific community.

Although generative AI has great potential in scientific writing, it inevitably has limitations.[16] For example, AI cannot generate original hypotheses and struggles with complex reasoning. It does not truly understand the meaning of language; it merely mimics human language behavior through statistical pattern prediction. Additionally, AI faces challenges in handling non-English languages with complex grammar and cultural nuances, such as Indian languages.

Promoting interdisciplinary research

Interdisciplinary research is becoming increasingly important as many of the world's most pressing problems, such as climate change, public health crises, and sustainable development, require insights from multiple disciplines.[17] Generative AI can play a crucial role in this context by integrating literature from various fields and providing a comprehensive overview of relevant research.[18] This can help researchers identify new connections and potential solutions that might not be apparent within a single discipline. For example, a researcher working on a public health issue might use generative AI to integrate insights from epidemiology, sociology, and economics to develop a more holistic understanding of the problem.

However, the effective use of generative AI in interdisciplinary research requires researchers to have a broad knowledge base and strong critical thinking skills. The information generated by AI must be carefully evaluated and contextualized within the specific research domain. This ensures that the insights gained are both accurate and relevant to the research question at hand. Researchers must also be aware of the limitations of AI-generated content and use it as a tool to support, rather than replace, their own expertise and judgment.

POTENTIAL RISKS OF GENERATIVE AI

Academic integrity issues

One of the most significant risks associated with the use of generative AI in scientific writing is the potential for plagiarism and false citations.[19-21] Generative AI models are trained on vast amounts of text, which means that they can produce content that closely resembles existing work. This raises the risk of unintentional plagiarism, where researchers might use AI-generated text without proper attribution. Additionally, the ease with which AI can generate text can lead to the temptation to use it inappropriately, such as by generating entire sections of a paper without sufficient oversight.

To mitigate these risks, researchers must be diligent in reviewing and verifying the content generated by AI tools. They should cross-check the AI-generated text against existing literature to ensure that it is original and properly cited. Academic institutions and journals should also implement robust detection mechanisms to identify instances of plagiarism or false citations. This could involve using advanced plagiarism detection software that is specifically designed to identify AI-generated content. Additionally, educational programs should be developed to train researchers on the ethical use of AI in academic writing, emphasizing the importance of originality and proper citation practices.
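As a hedged illustration of such cross-checking, the toy sketch below flags AI-drafted text whose word 5-grams overlap heavily with a known source. Real plagiarism detectors operate over indexed corpora at scale; the texts and threshold here are invented for illustration.

```python
# A toy sketch of cross-checking: flag AI-drafted text whose word 5-grams
# overlap heavily with a known source. Texts and threshold are illustrative.
def ngrams(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(draft: str, source: str, n: int = 5) -> float:
    draft_grams = ngrams(draft, n)
    if not draft_grams:
        return 0.0
    return len(draft_grams & ngrams(source, n)) / len(draft_grams)

draft = "generative models are trained on vast corpora of text and code"
source = "these generative models are trained on vast corpora of text"
if overlap(draft, source) > 0.3:  # threshold chosen purely for illustration
    print("High n-gram overlap: verify originality and cite the source.")
```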

IP disputes

IP issues are another significant concern in the context of generative AI.[22] The training data used to develop these models often includes copyrighted material, which can lead to legal disputes if the AI-generated content is used without proper authorization. For example, if an AI model is trained on a dataset that includes copyrighted articles or books, the resulting text generated by the AI could potentially infringe on those copyrights.

Moreover, the ownership of the content generated by AI is not always clear-cut. In some cases, it may be difficult to determine whether the AI-generated content is a derivative work of the training data or an entirely new creation.[23] This ambiguity can create legal challenges, particularly when it comes to publishing and commercializing AI-generated research.

To address these issues, it is crucial to develop clear guidelines and legal frameworks that balance the need for IP protection with the potential benefits of AI for innovation and collaboration. This could involve creating licensing agreements that specify the terms under which AI-generated content can be used and establishing standards for attributing ownership and credit for AI-generated works. Additionally, researchers and institutions should be encouraged to use AI models that are trained on open-source or publicly available data to minimize the risk of IP disputes.

Research ethics challenges

The use of generative AI in scientific writing introduces several ethical challenges that need to be carefully addressed.[24-28] One of the primary concerns is the attribution of research contributions. When AI is involved in the writing process, determining the extent of AI's contribution versus that of the human researcher can be difficult. This can lead to confusion regarding who should be credited as the author of the work, potentially undermining the traditional concept of authorship in academic research.

Another ethical issue is the potential for over-reliance on AI, which could lead to a decline in critical thinking skills among researchers. If researchers become too dependent on AI-generated content, they may lose the ability to think critically and analytically regarding their work. This could result in low-quality research and a lack of innovation.

Data privacy and algorithmic bias are also significant ethical concerns. AI models often require large amounts of data for training, which can include sensitive personal information. If these data are not properly anonymized and protected, they could lead to privacy breaches and misuse of personal information. Additionally, AI algorithms can sometimes produce biased results, particularly if the training data are not representative of diverse populations. This can lead to skewed research findings and potentially harmful consequences, especially in fields such as medicine and social sciences.

To address these ethical challenges, it is essential to establish clear guidelines and standards for the use of AI in research. This could involve creating protocols for attributing authorship in AI-assisted research and developing training programs to help researchers maintain and enhance their critical thinking skills. Additionally, measures should be taken to ensure data privacy and to mitigate algorithmic bias in AI models. This could involve using techniques such as differential privacy to protect data and employing fairness algorithms to reduce bias in AI-generated content.
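To make the differential privacy suggestion concrete, the sketch below applies the classic Laplace mechanism to a simple count query. The epsilon value and records are illustrative assumptions; production systems would rely on audited privacy libraries rather than this toy code.

```python
# A minimal sketch of the Laplace mechanism for differential privacy: release
# a count with noise scaled to sensitivity/epsilon. Epsilon and the records
# are illustrative; production systems should use audited DP libraries.
import numpy as np

def dp_count(values, epsilon: float = 1.0) -> float:
    sensitivity = 1.0  # one record changes a count query by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return sum(values) + noise

records = [1, 0, 1, 1, 0, 1]  # e.g., participants meeting some criterion
print(f"True count: {sum(records)}, noisy count: {dp_count(records):.1f}")
```

Smaller epsilon values add more noise and thus stronger privacy, at the cost of less accurate released statistics.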

EXISTING REGULATORY MEASURES AND THEIR LIMITATIONS

Responses from academic institutions

In response to the growing use of generative AI in academic writing, many academic institutions have begun to implement various regulatory measures. One common approach is to require researchers to disclose their use of AI tools in the writing process.[29-31] This can help in ensuring transparency and allowing readers to understand the role that AI played in the creation of the research. Some institutions have also adopted policies that exclude AI tools from being listed as authors on research papers, emphasizing the importance of human authorship and responsibility.

In addition to these measures, many institutions are using advanced detection tools to identify AI-generated text. These tools can help in detecting instances of plagiarism or inappropriate use of AI in academic writing. Furthermore, institutions are increasingly promoting AI ethics education, providing researchers with the knowledge and skills that they need to use AI responsibly and ethically.

However, despite these efforts, the regulatory landscape remains fragmented and inconsistent. Different institutions have adopted different standards and practices, which can create confusion for researchers and make it difficult to navigate the regulatory environment. This lack of uniformity can also lead to inconsistencies in how AI is used and regulated across different academic communities.

Limitations of existing regulations

The current regulatory measures for the use of generative AI in academic writing are limited in several ways. One of the main limitations is the lack of unified standards for disclosing AI use. Different journals and academic institutions have varying requirements for how and when AI use should be disclosed, which can create confusion for researchers. This lack of standardization also makes it difficult to enforce regulations consistently across different institutions.

Another significant challenge is the detection of AI-generated content. Although detection tools have improved in recent years, they are still not foolproof. AI-generated text can often be difficult to distinguish from human-written text, particularly when the underlying model has been trained on high-quality datasets. This makes it challenging to detect instances of plagiarism or other forms of academic misconduct involving AI.

Enforcement and supervision mechanisms are also often inadequate. Many institutions have policies in place regarding the use of AI in academic writing, but these policies are not always effectively enforced. This can lead to situations where researchers may use AI inappropriately without facing any consequences. Additionally, the penalties for violating these policies are often not severe enough to deter potential misconduct.

To address these limitations, it is necessary to develop more unified and effective regulatory measures. This could involve creating a set of standardized guidelines for the use of AI in academic writing, which could be adopted by academic institutions and journals worldwide. These guidelines could include clear standards for disclosing AI use and protocols for detecting and addressing instances of AI-generated content. Additionally, stronger enforcement mechanisms and more severe penalties for violations could help ensure that AI is used responsibly and ethically in academic research.

REGULATORY SUGGESTIONS

Technical regulation

To address the challenges posed by generative AI in academic writing, it is crucial to develop advanced technical solutions. One important area of development is AI content detection tools. These tools should be capable of accurately identifying AI-generated text, even if it has been modified or adapted to appear more human-like. By improving the accuracy of detection tools, we can better ensure that academic publications maintain their integrity and that instances of plagiarism or other forms of misconduct are identified and addressed.
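One common heuristic behind such detectors can be sketched as follows: scoring a passage's perplexity under an open language model, on the premise that machine-generated text tends to be more statistically predictable. This is only a heuristic, assuming the transformers and torch packages are available; low perplexity is suggestive, never conclusive, which is precisely why the tools discussed here need further development.

```python
# A rough sketch of a perplexity-based detection heuristic (transformers and
# torch assumed): machine text often scores as more predictable under a
# language model. Low perplexity is suggestive, never conclusive.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

passage = "The results demonstrate a statistically significant improvement."
print(f"Perplexity: {perplexity(passage):.1f}")
```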

Another important technical measure is the development of academic-specific trustworthy AI models. These models would be specifically designed for use in academic research and would incorporate features to ensure their reliability and trustworthiness. For example, they could be trained on datasets that are specifically curated for academic use, ensuring that the content they generate is of high quality and relevant to the research context. Additionally, these models could include built-in mechanisms for detecting and preventing plagiarism, further enhancing their reliability.

However, institutions or publishers may still face issues such as technical feasibility, cost-effectiveness, and compatibility with existing workflows in the process of developing and applying these AI detection tools and trustworthy AI models. Moreover, the deployment of these tools may trigger challenges such as algorithmic bias, data privacy protection, and the potential impact on the work of human editors and reviewers. Therefore, further discussion is needed on how to overcome these obstacles to ensure that these tools can be effectively applied in practical scenarios.

Institutional regulation

In addition to technical measures, it is essential to implement institutional regulations to ensure the responsible use of generative AI in academic writing. One key step is to establish a unified standard for disclosing AI use. This standard should be adopted by academic institutions and journals worldwide, ensuring that all researchers follow the same guidelines when disclosing their use of AI in the writing process. The standard should be clear and specific, outlining exactly what information needs to be disclosed and how it should be presented in the paper.

Another important aspect of institutional regulation is improving accountability mechanisms for academic misconduct. This could involve imposing more severe penalties on those who misuse AI, such as by plagiarizing or falsely citing sources. Additionally, institutions could establish clear procedures for investigating and addressing allegations of misconduct involving AI.

Training for journal editors and reviewers is also crucial. These individuals play a key role in the publication process, and they need to be able to identify and appropriately handle AI-generated content. This could involve training sessions on how to recognize the signs of AI use and how to evaluate the ethical implications of such use. By enhancing the skills of these gatekeepers, we can improve the overall quality and integrity of academic publishing.

Ethical regulation

Ethical regulation is a crucial component in ensuring the responsible use of generative AI in scientific research. One of the primary steps in this direction is the development and promotion of consensus guidelines on AI use. These guidelines should be developed through a collaborative process involving researchers, ethicists, legal experts, and other stakeholders. They should cover a wide range of issues, including data privacy, algorithmic bias, authorship attribution, and the ethical implications of AI-generated content.

Strengthening ethical education for researchers is also essential. This could involve incorporating AI ethics into the curriculum of graduate programs, providing workshops and seminars on the topic, and developing online resources that researchers can access to learn more about ethical AI use. By equipping researchers with the knowledge and skills that they need to navigate the ethical landscape of AI, we can promote a culture of responsible research.

Cross-disciplinary ethical review committees can play a vital role in addressing complex ethical issues related to AI. These committees, comprising experts from various fields, can provide a comprehensive evaluation of the ethical implications of AI use in research. They can also develop and implement policies to ensure that AI is used in a manner that is consistent with ethical standards across different disciplines.

CONCLUSION

The advent of generative AI represents a significant advancement in the field of scientific paper writing. It offers numerous benefits, including increased efficiency, enhanced interdisciplinary collaboration, and improved support for non-native English speakers. These advantages have the potential to accelerate the pace of scientific discovery and foster a more inclusive global research community.

However, the integration of generative AI into the research process also brings with it a set of challenges that must be carefully managed. Issues related to academic integrity, such as plagiarism and false citations, pose a threat to the credibility of scientific research. IP disputes arising from the use of copyrighted material in AI training data and the unclear ownership of AI-generated content further complicate the landscape. Additionally, ethical concerns surrounding data privacy, algorithmic bias, and the attribution of research contributions must be addressed to ensure that AI is used in a manner that upholds the highest standards of research ethics.

To address these challenges, a balanced and multifaceted approach is necessary. This includes the development of advanced technical solutions, such as AI content detection tools and academic-specific trustworthy AI models, to ensure the integrity of published research. It also involves the implementation of robust institutional regulations, including unified standards for AI use disclosure and improved accountability mechanisms for academic misconduct. Furthermore, it requires a strong emphasis on ethical regulation, the development of consensus guidelines on AI use, enhanced ethical education for researchers, and the establishment of cross-disciplinary ethical review committees.

By combining these technical, institutional, and ethical approaches, we can create a regulatory framework that supports the responsible and beneficial use of generative AI in scientific research. This will not only maximize the potential benefits of this powerful technology but also ensure that it is used in a manner that is consistent with the core values of the scientific community. As we continue to integrate AI into the research process, it is crucial to remain vigilant and proactive in addressing the ethical, legal, and social implications of this technology, thereby fostering a future where AI serves as a valuable tool for advancing scientific knowledge and improving the human condition.

DECLARATIONS

Acknowledgement

None.

Author contributions

Wang YD: Proposed the research framework, and drafted and revised the manuscript. Chen J: Proposed the thesis topic and revised the manuscript. Wu YX, Yu X, He LX: Collected materials. All authors have read and approved the final version of the manuscript.

Source of funding

This research received no external funding.

Ethical approval

Not applicable.

Informed consent

Not applicable.

Conflict of interest

The authors have no conflicts of interest to declare.

Use of large language models, AI and machine learning tools

In preparing this paper, the authors used ChatGPT for language polishing of original drafts. After using this tool/service, the authors have reviewed and edited the content as necessary and take full responsibility for the content of the publication.

Data availability statement

No additional data.

REFERENCES

  1. Gulumbe BH. Obvious artificial intelligence-generated anomalies in published journal articles: A call for enhanced editorial diligence. Learn Publ. 2024;37(4):e1626.    DOI: 10.1002/leap.1626
  2. Currie GM. Academic integrity and artificial intelligence: Is ChatGPT hype, hero or heresy? Semin Nucl Med. 2023;53(5):719-730.    DOI: 10.1053/j.semnuclmed.2023.04.008
  3. Christou CD, Tsoulfas G. Challenges and opportunities in the application of artificial intelligence in gastroenterology and hepatology. World J Gastroenterol. 2021;27(37):6191-6223.    DOI: 10.3748/wjg.v27.i37.6191
  4. Bouhouita-Guermech S, Gogognon P, Bélisle-Pipon JC. Specific challenges posed by artificial intelligence in research ethics. Front Artif Intell. 2023;6:1149082.    DOI: 10.3389/frai.2023.1149082
  5. Floridi L, Chiriatti M. GPT-3: Its nature, scope, limits, and consequences. Minds Mach. 2020;30(4):681-694.    DOI: 10.1007/s11023-020-09548-1
  6. Kim TW. Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: A narrative review. J Educ Eval Health Prof. 2023;20:38.    DOI: 10.3352/jeehp.2023.20.38
  7. Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Crit Care. 2023;27(1):75.    DOI: 10.1186/s13054-023-04380-2
  8. Kacena MA, Plotkin LI, Fehrenbacher JC. The use of artificial intelligence in writing scientific review articles. Curr Osteoporos Rep. 2024;22(1):115-121.    DOI: 10.1007/s11914-023-00852-0
  9. Chirichela IA, Mariani AW, Pêgo-Fernandes PM. Artificial intelligence in scientific writing. Sao Paulo Med J. 2024;142(5):e20241425.    DOI: 10.1590/1516-3180.2024
  10. Chen TJ. ChatGPT and other artificial intelligence applications speed up scientific writing. J Chin Med Assoc. 2023;86(4):351-353.    DOI: 10.1097/JCMA.0000000000000900
  11. Lee PY, Salim H, Abdullah A, Teo CH. Use of ChatGPT in medical research and scientific writing. Malays Fam Physician. 2023;18:58.    DOI: 10.51866/cm0006
  12. Giglio AD, Costa MUPD. The use of artificial intelligence to improve the scientific writing of non-native English speakers. Rev Assoc Med Bras (1992). 2023;69(9):e20230560.    DOI: 10.1590/1806-9282.20230560
  13. Daungsupawong H, Wiwanitkit V. Artificial intelligence and the scientific writing of non-native English speakers. Rev Assoc Med Bras (1992). 2024;70(2):e20231291.    DOI: 10.1590/1806-9282.20231291
  14. Ingley SJ, Pack A. Leveraging AI tools to develop the writer rather than the writing. Trends Ecol Evol. 2023;38(9):785-787.    DOI: 10.1016/j.tree.2023.05.007
  15. Selim ASM. The transformative impact of AI-powered tools on academic writing: Perspectives of EFL university students. Int J Engl Linguist. 2024;14(1):14.    DOI: 10.5539/ijel.v14n1p14
  16. Manley K, Salingaros S, Fuchsman AC, Dong X, Spector JA. Using ChatGPT to write a literature review on autologous fat grafting. J Plast Reconstr Aesthet Surg. 2025;105:292-304.    DOI: 10.1016/j.bjps.2025.04.015
  17. Miao H, Ahn H. Impact of ChatGPT on interdisciplinary nursing education and research. Asian Pac Isl Nurs J. 2023;7:e48136.    DOI: 10.2196/48136
  18. Schryen G, Marrone M, Yang J. Exploring the scope of generative AI in literature review development. Electron Mark. 2025;35(1):13.    DOI: 10.1007/s12525-025-00754-2
  19. Scientists brace for a "flood of junk" papers written with AI help: One researcher estimated more than 1% of all scientific papers published in 2023 involved the use of AI. The Hindu. Updated August 4, 2024. Accessed June 16, 2025. https://www.thehindu.com/sci-tech/science/scientists-brace-for-a-flood-of-junk-papers-written-with-ai-help/article68515779.ece
  20. Lin JC, Sabet CA, Chang C, Scott IU. Artificial intelligence in medical education assessments: Navigating the challenges to academic integrity. Med Sci Educ. 2024;35(1):509-512.    DOI: 10.1007/s40670-024-02178-7
  21. Weidmann AE. Artificial intelligence in academic writing and clinical pharmacy education: Consequences and opportunities. Int J Clin Pharm. 2024;46(3):751-754.    DOI: 10.1007/s11096-024-01705-1
  22. Li P, Huang J, Wu H, Zhang Z, Qi C. SecureNet: Proactive intellectual property protection and model security defense for DNNs based on backdoor learning. Neural Netw. 2024;174:106199.    DOI: 10.1016/j.neunet.2024.106199
  23. Li K, Wu H, Dong Y. Copyright protection during the training stage of generative AI: Industry-oriented U.S. law, rights-oriented EU law, and fair remuneration rights for generative AI training under the UN's international governance regime for AI. Comput Law Secur Rev. 2024;55:106056.    DOI: 10.1016/j.clsr.2024.106056
  24. Liebrenz M, Schleifer R, Buadze A, Bhugra D, Smith A. Generating scholarly content with ChatGPT: Ethical challenges for medical publishing. Lancet Digit Health. 2023;5(3):e105-e106.    DOI: 10.1016/S2589-7500(23)00019-5
  25. Inglada Galiana L, Corral Gudino L, Miramontes González P. Ethics and artificial intelligence. Rev Clin Esp (Barc). 2024;224(3):178-186.    DOI: 10.1016/j.rceng.2024.02.003
  26. Keskinbora KH. Medical ethics considerations on artificial intelligence. J Clin Neurosci. 2019;64:277-282.    DOI: 10.1016/j.jocn.2019.03.001
  27. Abdullah YI, Schuman JS, Shabsigh R, Caplan A, Al-Aswad LA. Ethics of artificial intelligence in medicine and ophthalmology. Asia Pac J Ophthalmol (Phila). 2021;10(3):289-298.    DOI: 10.1097/APO.0000000000000397
  28. Pearson GS. Artificial intelligence and publication ethics. J Am Psychiatr Nurses Assoc. 2024;30(3):453-455.    DOI: 10.1177/10783903241245423
  29. American Psychological Association. APA journals policy on generative AI: Additional guidance. American Psychological Association. Accessed June 16, 2025. https://www.apa.org/pubs/journals/resources/publishing-tips/policy-generative-ai
  30. Princeton University. Disclosing the use of AI. Princeton University Library. Updated January 13, 2025. Accessed June 16, 2025. https://libguides.princeton.edu/generativeAI
  31. Ganjavi C, Eppler MB, Pekcan A, et al. Publishers' and journals' instructions to authors on use of generative artificial intelligence in academic and scientific publishing: Bibliometric analysis. BMJ. 2024;384:e077192.    DOI: 10.1136/bmj-2023-077192