DOI: https://doi.org/10.20529/IJME.2014.003
2013 has been a landmark year for biomedical journals, though not a good one. Medical journals and their editors have long been respected as harbingers of change and of progress in scientific thought. Science expects transparency from the agents through which scientists publish their latest research findings, and this expectation is usually fulfilled. Recent developments have, however, thrown into doubt the integrity of some science journals, their editors and, by extension, the entire field of biomedical and science publishing. These developments involve wide-ranging issues – the impact factor, the International Committee of Medical Journal Editors (ICMJE), and the birth, existence and rise of predatory journals.
The impact factor is that much loved and much reviled magic number that authors (and the editors of high-impact journals) like to quote with aplomb. Enough has been said about the appropriate use of this index and its more common abuse. Despite its shortcomings, even the editors of journals with low impact factors, who mostly scoff at it, often secretly wish that their journals had higher ones. However, most of them accept their journal's low impact factor. Most editors do not indulge in unethical practices to raise the impact factor unfairly, but some do stoop to borderline or even outright wrong measures to do so. Some of these journals have, in fact, been found out and removed from the Science Citation Index. In December 2012, journal editors, publishers and scientific associations made a serious effort to deal with this situation. At a meeting of the American Society for Cell Biology in San Francisco, a group of about 150 editors resolved to address the misuse of the impact factor by officials and journals in judging the importance of research papers. In May 2013, they released the Declaration on Research Assessment (DORA), which states that the impact factor must not be used as "a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions" (1). The group has suggested alternative, sounder means of evaluating science. Will this make a difference? Only time will tell.
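For readers unfamiliar with how this magic number is arrived at, the standard two-year impact factor is, in essence, a simple ratio (this is only a sketch; the precise counting rules, particularly what qualifies as a "citable item", are decided by the index's proprietor):

$$ \mathrm{IF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2} $$

It is precisely because the numerator can be inflated, for instance by pressing authors to cite the journal, that the figure lends itself to the borderline measures mentioned above.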
Chalmers has pointed out that the failure to publish negative results is unethical (2). This is because, in the absence of those data, someone else may repeat the same study and thereby waste time and money. In 2004, the ICMJE, led by a venerable group of editors, made a move to fulfil an ethical responsibility towards the participants of clinical trials and the public, and to prevent selective reporting. The ICMJE committed to publishing clinical trials from 2005 onwards, provided that these were properly registered (3). However, there is a surprising revelation in Bad Pharma by Ben Goldacre (reviewed in Indian J Med Ethics, July-September, 2013, p. 207) (4). Goldacre writes that a study by Mathieu et al in 2009 showed that of 323 trials published in 2008 in the top 10 medical journals, all of which were members of the ICMJE, only half had been adequately registered, and trial registration was completely lacking in over a quarter (5). Clearly, the policing method was faulty. Subsequently, in 2013, Bhaumik and Biswas studied the practice followed by 30 Indian medical journals and found that only nine required the clinical trial registration number (6).
Predatory journals have also been in the news. What are predatory journals? Readers of this journal undoubtedly receive e-mails every day from new journals inviting them to submit their research papers. These journals invariably promise quick peer review and publication on acceptance, an international editorial board, and so on. They follow the open-access business model: the author usually pays. The only difference is that while many established journals which follow the "author pays" model reject inappropriate science, these "predatory" journals accept nearly all papers that are submitted. Some readers must also be receiving invitations to join the editorial boards of these journals. Many (but, of course, not all) such journals are run by small, obscure publishing houses which have sprung up suddenly and publish magazines with titles suggesting serious science. The words "British", "European", "American" and "International" are used liberally in their titles, without adequate justification. (There are often clues that would make one suspect that a journal is predatory, but these are beyond the scope of this editorial.)
In the 1950s, on the basis of an extensive study of scientific journals, the information scientist and seer Derek de Solla Price calculated that the number of science journals doubled every 10-15 years (7). The increase was due not just to a rise in the number of people working in science, but also to the need to publish for promotions. However, this was in the age of traditional paper journals. The Internet has changed just about everything in all aspects of life and has consigned this interesting statistic to the history books. I am not aware of any revised equivalent of de Solla Price's figure today. Journals are now available on the net, which has made production far easier and cheaper.
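To put de Solla Price's observation in concrete terms (a back-of-the-envelope restatement, assuming steady exponential growth), a doubling time of $T$ years implies

$$ N(t) = N_0 \cdot 2^{t/T}, $$

so that a doubling time of 10-15 years corresponds to annual growth of roughly 5%-7% ($2^{1/15} \approx 1.047$ and $2^{1/10} \approx 1.072$). What the ease of online publication has done to that rate is anyone's guess.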
What are the ethical issues involved in publishing in predatory journals? Students and researchers today often search Google rather than PubMed when reviewing the literature. There is no denying the advantages of using Google – after all, Google searches textbooks, PDFs and areas of text that may not be included in the keywords of a PubMed abstract. However, this advantage comes at the great disadvantage of including much noise. Some – indeed, much – of that noise consists of journals that are non-indexed and, consequently, usually of a lower standard. Young, inexperienced researchers are unaware of this and often base their research on substandard studies published in predatory journals. If the primary paper is based on flawed science, the secondary papers will be flawed as well. The resulting research is, therefore, a waste of precious time and resources and, most importantly, may endanger the lives and health of volunteers. Genuine, well-meaning referees who are unaware of the predatory nature of such journals also waste time reviewing the papers submitted to them, while the final decision, usually an unqualified acceptance, does not really depend on their feedback. Finally, because many researchers are unaware of this "dark side of publishing", they may publish genuinely good research in such journals, thereby risking the possibility of having their valuable findings ignored by scientists and editors, who would consider them guilty by association.
Is there evidence that these journals are, indeed, predatory? And is what they publish genuinely poor science? Or are we being harsh merely on the basis of a gut feeling? Unfortunately for such journals, there are now enough data on this (7, 8). Jeffrey Beall, a librarian at the University of Colorado, Denver, USA, was concerned about the authenticity of the e-mails he was receiving from new journals, so he decided to investigate their practices to ascertain whether they were genuine scientific journals. He discovered that many of these open-access journals were run by small firms whose sole motive was profit, with science not even on the radar. He has compiled a list of predatory journals (available at www.scholarlyoa.com). Beall's methods and zeal have been questioned and, on occasion, he has got it wrong. Hindawi, for instance, had initially been included in his list, but was later removed because it appeared to have a fair and transparent editorial process (8). A glance at the website of one of the journals on Beall's list says it all: Abhinav proudly states that the journal is on various indexing lists, including "Beall's list of predatory" (sic)! (http://www.abhinavjournal.com/index.aspx, cited 2013 Dec 4).
The final nail in the coffin of predatory journals – at least as far as evidence of their lack of authenticity is concerned – was the sting operation conducted by John Bohannon of Science (9). In brief, Bohannon wrote a paper containing junk science under the pseudonym Ocorrafoo Cobange, claiming to be from the equally non-existent Wassee Institute of Medicine in Asmara. The paper was about the supposed anti-cancer properties of a chemical that he had extracted from a lichen. Bohannon submitted 304 versions of the paper to 304 different open-access journals. Here is the scary part: more than half of the journals accepted the research as genuine, thus exposing the flaws in the peer review they claimed to conduct. There is no doubt that the details of the sting operation make for entertaining reading. However, it is particularly disconcerting to note that about a third of the journals that Bohannon targeted were from India. It appears that India is the world capital of such open-access journals: as many as 64 Indian journals accepted the paper, while only 15 rejected it. Even more worrying is the fact that among the publishers which accepted the paper was Medknow/Wolters Kluwer, a well-known and respected publishing house. Some Elsevier and SAGE journals also accepted the paper.
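Taking the figures above at face value, the acceptance rate among the Indian journals that reached a decision works out to

$$ \frac{64}{64 + 15} \approx 0.81, $$

that is, roughly four out of every five accepted a paper built on junk science.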
Finally, while misconduct exists among the fraternity of medical researchers in India, most editors of medical journals have not been able to tackle the problem. As there are no empirical data on the prevalence of such misconduct in India, we are in the process of completing a questionnaire-based study which attempts to estimate the prevalence of research misconduct in South Asian medical journals. Interim data were presented at the Seventh International Congress on Peer Review and Biomedical Publication in Chicago, USA, in September 2013 (10). Three findings stood out. First, only 37% of the 123 editors whom we contacted responded to the survey. This suggests inertia or a lack of interest, neither of which is conducive to good editorial practice. Second, many editors who did respond reported that their attempts to elicit a satisfactory explanation from authors accused of misconduct were generally unsuccessful. Third, they often received no answer when they attempted to inform the authors' institutions. Further, many journals did not have appropriate mechanisms to deal with misconduct.
So what are journals to do if they wish to improve? Much of the push can come from readers and authors. It is up to scientists to refrain from associating themselves with predatory journals. They should not publish in them, as most are only a form of vanity press anyway; they should not be fooled into accepting invitations to join the editorial boards of such journals; and they should not referee papers sent out in bulk, with their name merely one on a long list. Next, they should educate their colleagues and, more importantly, the officials responsible for their promotions about the limitations of the impact factor. Scientists must remember that science is a search to uncover nature's secrets and discover the truth, and this calls for honesty on the part of the researcher.