A far-reaching artificial intelligence (AI) revolution is underway, and the sooner we recognize it, the better. As in every revolution driven by technological progress and innovation, the emerging developments must be thoroughly discussed and understood, and addressed as soon as possible by setting out standards, guidelines and policies.
The full impact of AI will presumably fall on the generations to come, who will experience its potentially limitless applications and unparalleled opportunities and, at the same time, will be able to evaluate the consequences of what we now regard as its benefits and threats. What is beyond doubt is that we are already witnessing an exponential proliferation of AI applications, tools and services, and that AI itself sits at the centre of the debate, both among the general public and within the scientific and scholarly community. It plays a growing role in many aspects of society and is becoming relevant for all stakeholders in science communication: researchers, authors, editors, reviewers and publishers.
In scholarly publishing and research practice, the potential impact of GPT (Generative Pre-trained Transformer) models has recently been the subject of numerous discussions, conferences, webinars and editorials, all of which ultimately turn on a wide range of ethical implications.
One of these models is ChatGPT, an AI chatbot for content creation developed by OpenAI, a research and deployment company working on artificial general intelligence (AGI). Among its limitations, as acknowledged by the producer itself, it “sometimes writes plausible-sounding but incorrect or nonsensical answers”. ChatGPT is, de facto, already used by researchers for tasks such as translating, editing, drafting abstracts and improving their writing, and it could prove a valuable tool for ideation and writing, although it is not a source of original and reliable information. As recently reported, “ChatGPT and other LLMs (Large Language Models) produce text that is convincing, but often wrong, so their use can distort scientific facts and spread misinformation” [1].
Its misuse by authors is raising concerns among science editors, who are already striving to maintain quality standards and high levels of integrity throughout the publication process of their journals, in an environment muddied by predatory publishers and profit-driven paper mills responsible for a growing number of fraudulent and fake publications.
To address these concerns and advocate a safe, transparent and sound use of AI tools in science communication, the International Committee of Medical Journal Editors (ICMJE) updated its Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals in May 2023, adding an entirely new section and revising others to provide guidance on how work conducted with the assistance of AI technology (including ChatGPT) should and should not be acknowledged: “At submission, the journal should require authors to disclose whether they used artificial intelligence (AI)–assisted technologies (such as Large Language Models [LLMs], chatbots, or image creators) in the production of submitted work. Authors who use such technology should describe, in both the cover letter and the submitted work, how they used it. Chatbots (such as ChatGPT) should not be listed as authors because they cannot be responsible for the accuracy, integrity, and originality of the work, and these responsibilities are required for authorship (see Section II.A.1). Therefore, human beings are responsible for any submitted material that included the use of AI-assisted technologies. Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased. Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author. Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI. Humans must ensure there is appropriate attribution of all quoted material, including full citations” [2].
The Committee on Publication Ethics (COPE), recognising the value of AI tools for ideation and writing, issued a position statement clarifying that authors may use such tools, provided they are properly credited and attributed and the authors remain fully responsible for the content of their manuscripts: “The use of artificial intelligence (AI) tools such as ChatGPT or Large Language Models in research publications is expanding rapidly. COPE joins organisations, such as WAME and the JAMA Network among others, to state that AI tools cannot be listed as an author of a paper (…) Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used. Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics” [3]. COPE also opened a discussion in March 2023 on AI, fake papers and their ethical implications.
Many publishers have also added disclaimers or issued specific guidelines for authors wishing to use AI in producing articles or conducting research. Elsevier addresses the use of AI and AI-assisted writing technologies in scientific writing [4]; Taylor & Francis clarifies the responsible use of AI tools in academic content creation [5]; journals such as JAMA require authors to include in their paper a description of the content created or edited by AI and the name of the AI model or tool used, including producer, version and extension numbers [6]; and Springer Nature has set down guidelines for its use [7]. The European Commission itself has raised questions about the intellectual property of ChatGPT-generated content (Who owns it? Can it be used without infringing someone’s copyright?) [8], and the European Union has prepared a general regulation on artificial intelligence, the EU AI Act, on which MEPs adopted their negotiating position on 14 June 2023 [9, 10] and which will continue to be discussed until it is approved in its final form as law.
While AI poses certain threats, it may at the same time offer ways to overcome those very challenges, for instance by detecting machine-generated content or paper mill articles. It could also help generate ideas, suggest innovative study methods, assist in bioimage analysis [11], and increase equity and inclusion for people with disabilities, who might use AI tools as assistive technologies or to alleviate linguistic disparities [12]. Frontiers, a major Open Access scholarly publisher, already uses an Artificial Intelligence Review Assistant (AIRA) in its digital peer-review platform, enabling faster, more efficient quality control and manuscript handling [13].
Will it prove beneficial in supporting the many activities involved in scientific reporting and publication? Those already experimenting with AI tools are fascinated by their potential but, at the same time, rightly wary. What is clear is that nobody can stop this revolution; we can only try to make it as beneficial as possible for everyone and to prevent its misuse. This is precisely what Annali ISS, the public health journal published by the Italian National Institute of Health, will try to do in the near future (its Authors’ Guidelines are being updated to cover the use of AI tools), in full compliance with the recommendations and best practices issued by international organizations to ensure quality standards, transparency and integrity in science reporting. We do not expect this to be our last change of policy on this topic, and we welcome the opinions of our readers both now and in the future.