AI Policy

AG Editor acknowledges the increasing impact of artificial intelligence (AI) technologies, including large language models (LLMs) and generative tools, on academic publishing.
This policy follows the recommendations of the International Committee of Medical Journal Editors (ICMJE), the World Association of Medical Editors (WAME), and the Committee on Publication Ethics (COPE), and aligns with the ethical frameworks established by leading international publishers such as Elsevier, Nature, IEEE, and PLOS.

Our goal is to promote the ethical, transparent, and responsible use of AI in all stages of the book and chapter publishing process, from manuscript preparation to editorial decision-making.

AG Editor does not prohibit the use of LLMs such as ChatGPT, and aligns with the WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications.
However, LLMs do not meet the authorship criteria established by the ICMJE.
If any LLM or AI-assisted tool is used during manuscript preparation, this must be explicitly declared in the appropriate section of the book or chapter (e.g., “Methods” or “Acknowledgments”).
The use of AI tools does not exempt authors or editors from full responsibility for the accuracy, integrity, and originality of the content.

For Authors

  • AI tools cannot be listed as authors under any circumstances.
  • If AI tools were used to draft text, analyze data, or generate figures, tables, or summaries, this must be clearly disclosed in the “Methods” or “Acknowledgments” section, indicating the tool’s name, version, date of use, and type of prompts employed.
  • Authors are fully responsible for verifying the accuracy, originality, and proper attribution of any AI-assisted content.
  • The use of AI to fabricate results, references, or bibliographic data is strictly prohibited and will be considered a serious ethical breach. In such cases, the Retraction and Misconduct Policy will apply.
  • Any misuse of AI tools may lead to rejection, retraction, or other editorial sanctions.

For Editors

  • Editors must clearly inform authors and reviewers of AG Editor’s AI policy.
  • Any use of AI by the editorial team for correspondence, workflow optimization, or text editing must be transparent and documented, and must not compromise the integrity or confidentiality of the manuscript.
  • Editors may use AI-assisted tools only for administrative or linguistic purposes (e.g., grammar, formatting), never for evaluating content or making editorial decisions.
  • The editorial office should maintain access to reliable AI-detection software to identify potential manipulation, plagiarism, or artificially generated content.
  • Editorial confidentiality and data protection must be maintained at all times.

For Reviewers

  • Reviewers must not upload or input manuscript content into public AI tools (such as ChatGPT) that store or reuse user data, as doing so breaches confidentiality.
  • If reviewers use AI tools to assist in drafting their review reports (e.g., to improve clarity or structure), they must disclose this to the editors.
  • Reviewers are accountable for any AI-generated content included in their reports and must ensure its accuracy, neutrality, and relevance.
  • Under no circumstances should reviewers use AI tools to evaluate the manuscript’s academic merit or generate a recommendation automatically.

Final Remarks

AG Editor Books explicitly aligns with Elsevier’s guidance on the ethical, transparent, and responsible use of AI technologies in academic publishing.
These guidelines can be reviewed here: https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals

In case of disputes or uncertainty regarding AI use, AG Editor will apply the standards of COPE and WAME.