The use of Artificial Intelligence (AI) in medical science has been widely discussed and debated. Topol foresaw that AI, particularly deep learning, would be used in a variety of applications, with users ranging from specialty doctors to paramedics [1]. He discussed how deep neural networks (DNNs) can help interpret medical scans, pathology slides, skin lesions, retinal images, electrocardiograms, endoscopy, faces, and vital signs, and described their application in radiology, pathology, dermatology, ophthalmology, cardiology, mental health, and other fields [1]. Among the many AI applications now used in daily life, the next-generation breakthrough, the AI model ChatGPT (https://chat.openai.com/), was launched on November 30, 2022, by OpenAI, California, a company well known for its innovations in automated text generation. ChatGPT converses with the user, ascertains the user’s needs, and responds accordingly. It can write a poem, a diet plan, recipes, letters, computer programmes, or a eulogy, carry out copy editing, and so on.
There has been much discussion of the uses and misuses of ChatGPT. It could help non-English-speaking authors improve the efficiency and accuracy of their writing, or assist in planning and structuring various types of writing, including research communications. The rising concern is its potential ability to generate a research paper without any research actually having been conducted. In addition, research papers have already been published in which ChatGPT is listed as an author. This may have several consequences: it may distort scientific facts, spread misinformation, and give rise to unethical practices in scholarly communication [2, 3, 4]. The text generated by this artificial author may even be plagiarised, giving rise to doubts about the author’s credibility. Researchers have already discussed the fabricated and fake references generated by ChatGPT [5]. Moreover, it raises challenges concerning errors, biases, research integrity and ethics [4]. If ChatGPT appears as an artificial author in research papers published in journals indexed in citation databases, it will change the entire approach to bibliometric studies. This editorial focuses on the consequences of an artificial author being present in citation databases.
A citation is a dynamic relationship between the cited document (the original paper) and the citing document (the one containing the reference). If either is generated by an AI bot like ChatGPT, it will challenge existing authorship norms, the quantitative assessment of research contributions, intellectual property rights, and ownership of data [6]. In addition, citation-based research metrics such as a journal’s Impact Factor, the h-index, the i10-index and CiteScore will become misleading and questionable. For example, consider a journal indexed in Web of Science with an impact factor of 2.5 for the year 2021. If an article in that journal lists ChatGPT as a co-author and attracts a large number of citations, the journal’s impact factor for 2023 will be much higher, inflated partly by an artificial author.
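To make the arithmetic concrete, a minimal worked example (with hypothetical numbers, not data from any real journal) uses the standard two-year impact factor formula:

\[
\mathrm{JIF}_{2023} \;=\; \frac{\text{citations received in 2023 to items published in 2021--2022}}{\text{citable items published in 2021--2022}}
\]

If the journal published 200 citable items over 2021–2022 and a single ChatGPT co-authored article from 2022 attracted 100 additional citations in 2023, the numerator would rise by 100 and the impact factor by 100/200 = 0.5, moving a journal at 2.5 to roughly 3.0 on the strength of one partly artificial-authored paper.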
Artificial authorship will cause serious problems in citation analysis and will produce deceptive metrics. To investigate this, the Scopus database was searched for the term “ChatGPT” (all fields). It is not surprising that a total of 335 published papers were identified (up to April 21, 2023). Because ChatGPT appears as a co-author in three of these papers, Scopus treats it as an individual author, and these articles have received citations. Being treated as an individual author, ChatGPT has its own Scopus profile [https://drive.google.com/file/d/1hh53gyb4g9ejt2muJbvuQO13cCZfAUUY/view], which includes citations, an h-index, and an ORCID iD. This will have an impact on the rankings of individual authors, journals, and countries. Although ChatGPT is only a co-author, its profile shows that, as of April 21, 2023, it has been cited 32 times in 29 documents. This is a matter of great concern because it suggests that an AI can possess domain specialisations and knowledge, like a human researcher.
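For readers unfamiliar with the metric, the h-index shown on such a profile is defined as the largest number h such that the author has h papers with at least h citations each:

\[
h \;=\; \max\{\,k : \text{the author has at least } k \text{ papers, each cited at least } k \text{ times}\,\}
\]

With purely illustrative counts (not the actual Scopus figures), an artificial author listed on only three papers cited 20, 8 and 4 times would already hold an h-index of 3, the maximum possible for three papers, showing how quickly such metrics can accrue to a chatbot.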
Another important parameter of bibliometric studies is the ranking of authors based on the number of papers they have published and the number of citations their works have received. ChatGPT is ranked fourth in this data set [https://drive.google.com/file/d/15DejBUrv-U5xw-zgYJZ6WZh4E43Q-I0p/view]. Ranking an artificial author would create further problems, especially with regard to its affiliation. For instance, if anyone outside the USA uses ChatGPT as a co-author, the USA would automatically be listed as a collaborating country, since ChatGPT’s address is OpenAI, USA. In the citation-based ranking, the paper in which ChatGPT is one of the authors is likewise ranked fourth. If the citations of these papers increase in future, discipline-level metrics may become misleading: they will show large numbers of papers, but many of these may not be human-authored, or not solely human-authored.
Since an author is a legal, institutional and societal entity in publication and communication, accountability for the creation and its impact on society remains with the author [7]. Potts explains that the AI or digital scripter generates communications by imitating language (or data) from its “immense dictionary” or database, then mixing and blending that data into newly generated work [8]. Therefore, proof that ChatGPT can generate original and novel textual content is yet to be established, and novelty and originality are two essential aspects of any new creation. Consequently, the question of accountability is probably the biggest hurdle that papers with ChatGPT as a co-author will encounter. To date, OpenAI cannot be held responsible either for the text generated by ChatGPT or for ownership of the intellectual property rights in it [9].
It is crucial to sensitise the scientific community, researchers, publishers of journals and pre-print archives, and holders of citation databases to the consequences, advantages and disadvantages of AI tools in scholarly communication. The World Association of Medical Editors (WAME) has recommended that chatbots cannot be authors in any type of publication, and journals are altering their publication policies accordingly to counter the ChatGPT invasion. In addition, it could be made mandatory for authors to disclose the use of tools like ChatGPT, along with an assurance that the text is not plagiarised [10]. Recently, publishers such as Elsevier and Cambridge University Press & Assessment have announced that they accept the use of ChatGPT in writing research papers but do not accept it as a co-author [11].
Recognising ChatGPT as an author will corrupt research processes and bibliometric studies and undermine the efforts of pioneers in this field. Plagiarism, predatory journals, paper retractions, duplicate submissions, data fabrication, and paper mills, all driven by the “publish or perish” pressure on researchers, have already assumed epidemic proportions. ChatGPT has, in addition, posed fresh challenges that will change the landscape of research communications [12] and, ultimately, affect bibliometric studies.
Finally, each discourse has its own genre and its own concerns, documented through observation and experimentation, and this is where ChatGPT falls short. Consequently, ChatGPT, with its inherent advantages and disadvantages, should be considered only a tool. The output generated by Generative AI already far exceeds what human beings can comprehend and verify, and it presents a difficult set of challenges to science.