New RFI – Recommendations on the Use of AI in Scholarly Communication

Following the launch of the Peer Review Quality Assessment page of the Toolkit in June, the EASE Peer Review Committee invites comments and suggestions on the draft of a new Peer Review Toolkit entry, Recommendations on the Use of AI in Scholarly Communication.

Responses to this request for information (RFI) are voluntary and may be submitted anonymously until 15 September 2024 using the feedback form.

The recent development of Large Language Models (LLMs) and Generative AI (GenAI) presents new challenges and opportunities in scholarly communication. This has resulted in diverse policies among journals, publishers, and funders on the use of AI tools. Research studies, including surveys, suggest that researchers are already using AI tools at a significant scale, both to create and edit manuscripts and to write peer review reports. Yet the accuracy, effectiveness, and reproducibility of these tools remain uncertain.

This toolkit entry aims to promote responsible and transparent use of AI by editors, authors, and reviewers, with links to examples of current policies and practices. As AI tools are evolving rapidly, the PRC will monitor and update these recommendations as new information becomes available. Please contact the committee to share any opinions, policies, and examples that could help improve this guide.

EASE Members can discuss the document in the Forum, under the post created by PRC Chair Mario Malicki. Why not read the document and add any questions you may have to the Forum before completing the form?

The Peer Review Committee looks forward to engaging in conversation about AI and learning what does and does not work in the proposed recommendations.