Last updated: March 25, 2026
Artificial Intelligence Tool Usage Policy
All journals published by Baishideng Publishing Group (BPG) adhere to the policies on the use of artificial intelligence (AI) tools established by international academic organizations, including the Committee on Publication Ethics (COPE), the World Association of Medical Editors (WAME), and the International Committee of Medical Journal Editors (ICMJE). The following outlines BPG's policies on the rejection of AI authorship, as well as on the use of AI in peer review and editorial processes.
1 Rejection of AI Authorship
In accordance with ICMJE's policy on "Defining the Role of Authors and Contributors", authorship criteria are designed to ensure that authorship is attributed to individuals who deserve recognition and can take responsibility for the work. Because AI tools (such as ChatGPT or other large language models) cannot meet these criteria, they are not eligible to be listed as authors, nor should they be cited as such. Authors assume full responsibility for the content of their manuscripts, including any content generated with the assistance of AI tools, and are accountable for any breaches of publication ethics. If AI tools are used to generate content in a manuscript, such use must be clearly disclosed; the sole exception is the use of AI tools purely for grammar correction. Failure to disclose the use of AI tools, once identified, will be considered academic misconduct and may result in manuscript retraction.

To ensure transparency in the publication process, BPG journals require authors to disclose, during the revision stage, whether AI tools were used to generate the abstract, main text, figures, or tables of their manuscripts via the "INTELLIGENT MANUSCRIPT FORMAT EDITOR" feature in the F6Publishing system. If authors confirm that AI tools were used to generate content, such use should be described in the methods section (or the corresponding section) of the manuscript.
2 Use of AI in Peer Review
Editorial board members and peer reviewers are prohibited from using AI tools to generate peer review reports. They may, however, use AI tools to assist in checking and correcting grammar and other non-content-related errors (such as spelling, capitalization, and punctuation) in peer review reports, and to help detect potential academic misconduct, such as plagiarism and duplicate publication. Peer reviewers must also strictly maintain the confidentiality of manuscripts and are forbidden from uploading any manuscript content to any AI tool. In short, AI tools are permitted as supplementary aids but not as substitutes for human judgment.

To ensure compliance during the peer review process, BPG journals include a reminder regarding the "Policy on AI Usage" in the invitation emails sent to editorial board members and peer reviewers, and require them to explicitly disclose, via the F6Publishing system, whether AI tools were used to generate the peer review report (by responding to the question: "Are your review comments generated by AI tools?"). The Editorial Office will verify, using a combination of manual checks and technical tools, whether a peer review report contains AI-generated content. If it is determined that a reviewer used AI tools to generate all or part of a peer review report, that reviewer will be flagged and excluded from the journal's peer reviewer database, and the relevant peer review report will be deemed invalid.
3 Use of AI in Editorial Decisions
Editors play a crucial role in the editorial decision-making process. During this process, editors are responsible for verifying whether authors have disclosed their use of large language models and other AI tools, and how such tools were employed. For instance, the use of AI for spell-checking and grammar correction is generally considered acceptable. Editors are not permitted to upload any manuscript content into AI tools, as doing so would violate BPG's confidentiality policy.
