Published online: March 23, 2018
DOI: https://doi.org/10.20529/IJME.2018.024
The evaluation of performance in scientific research at any level – whether at the individual, institutional, research council or country level – is not easy. Traditionally, research evaluation at the individual and institutional levels has depended largely on peer opinion, but with the rapid growth of science over the last century and the availability of databases and scientometric techniques, quantitative indicators have gained importance. Both peer review and metrics are subject to flaws, more so in India because of the way they are used. Government agencies, funding bodies and academic and research institutions in India suffer from the impact factor and h-index syndrome. The uninformed use of indicators such as average and cumulative impact factors and the arbitrary criteria stipulated by agencies such as the University Grants Commission, Indian Council of Medical Research and the Medical Council of India for selection and promotion of faculty have made it difficult to distinguish good science from the bad and the indifferent. The exaggerated importance given by these agencies to the number of publications, irrespective of what they report, has led to an ethical crisis in scholarly communication and the reward system in science. These agencies seem to be unconcerned about the proliferation of predatory journals and conferences. After giving examples of the bizarre use of indicators and arbitrary recruitment and evaluation practices in India, we summarise the merits of peer review and quantitative indicators and the evaluation practices followed elsewhere.
This paper looks critically at two issues that characterise Indian science, viz (i) the misuse of metrics, particularly impact factor (IF) and h-index, in assessing individual researchers and institutions, and (ii) poor research evaluation practices. As the past performance of individual researchers and the funds they seek and obtain for subsequent projects are inextricably intertwined, such misuse of metrics is prevalent in project selection and funding as well.
This study is based on facts gathered from publicly available sources such as the websites of organisations and the literature. After explaining the meaning of impact factor and h-index and how not to use them, we give many examples of misuse in reports by Indian funding and regulatory agencies. In the next two sections we give examples of the arbitrariness of the criteria and indicators used by the agencies for the selection and promotion of faculty, selection of research fellows, and funding. We follow this up with the evaluation practices in use elsewhere. If we have cited only a few examples relating to medicine, it is for two reasons: one, medicine forms only a small part of the Indian academic and research enterprise; and two, what applies to research and higher education in other areas applies to medicine as well.
The regulatory and funding agencies give too much importance to the number of papers published and use indicators such as average IF, cumulative IF and IF aggregate in the selection of researchers for awards, the selection and promotion of faculty, awarding fellowships to students and grants to departments and institutions, and thus contribute to the lowering of standards of academic evaluation, scholarly communication, and the country’s research enterprise.
Impact factors, provided by Clarivate Analytics in their Journal Citation Reports (JCR), are applicable to journals and not to individual articles published in the journals. Nor is there such a thing as impact factors of individuals or institutions. One cannot attribute the IF of a journal to a paper published in that journal, as not all papers are cited the same number of times; and the variation could be of two to three orders of magnitude. This metonymic fallacy is the root cause of many ills.
Let us consider the 860 articles and reviews that Nature published in 2013, for example. These have been cited 99,539 times as seen from Web of Science on January 20, 2017; about 160 papers (<19%) account for half of these citations, with the top 1% contributing nearly 12% of all citations, the top 10% contributing 37% of citations, and the bottom one percentile contributing 0.09% of citations. While hardly any paper published in Nature or Science goes uncited, the same is not true of most other journals. A substantial proportion of the papers indexed in Web of Science over a period of more than a hundred years has not been cited at all and only about 0.5% has been cited more than 200 times (1). Of the six million papers published globally between 2006 and 2013, more than a fifth (21%) has not been cited (2). As Lehman et al (3) have pointed out, the journal literature is made up of “a small number of active, highly cited papers embedded in a sea of inactive and uncited papers”. Also, papers in different fields get cited to different extents. And, as the tremendously skewed distribution of impact factors of journals indicates, only a minority of journals receives the majority of citations.
A Web of Science search for citations to papers published during 2006–2013, made on February 20, 2017, revealed a wide variation among fields in the proportion of articles that have not been cited. To take a few fields as examples, the percentage shares of articles and reviews that have not yet been cited even once are: Immunology (2.2%), Neuroscience (3%), Nanoscience and Nanotechnology (4%), Geosciences (5.6%), Surgery (7%), Spectroscopy (8.6%) and Mathematics (16%).
The regulatory and funding agencies lay emphasis on the h-index (4), which is based on the number of papers published by an individual and the number of times they are cited. It is arrived at by arranging an author’s papers in descending order of citations received: the h-index is the largest number h such that h of the papers have each been cited at least h times, so an h-index of 10 means that 10 papers have each received at least 10 citations, irrespective of the total number of papers published. The index does not take into account the actual number of citations received by each paper, even when these are far in excess of the number equivalent to the h-index, and can thus lead to misleading conclusions.
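To make the definition concrete, the following minimal sketch (in Python, with purely hypothetical citation counts) computes an h-index from a list of per-paper citation counts; it also illustrates the point made above, since two very differently cited records can yield the same index.

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have each been cited at least h times."""
    counts = sorted(citations, reverse=True)  # papers in descending order of citations
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation records of two authors: both have h = 10,
# although one has far more citations in total than the other.
modest = [12, 11, 11, 10, 10, 10, 10, 10, 10, 10, 3, 1]
heavy = [900, 450, 300, 120, 80, 40, 25, 15, 12, 10, 2, 0]
print(h_index(modest), h_index(heavy))  # 10 10
```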
According to the Joint Committee on Quantitative Assessment of Research formed by three international Mathematics institutions (5), “Citation-based statistics can play a role in the assessment of research, provided they are used properly, interpreted with caution, and make up only part of the process. Citations provide information about journals, papers, and people. We don’t want to hide that information; we want to illuminate it.” The committee has shown that using the h-index in assessing individual researchers and institutions is naïve (5). The Stanford University chemist Zare believes that the h-index is a poor measure in judging researchers early in their career, and it is more a trailing, rather than a leading, indicator of professional success (6).
As early as 1963, when the Science Citation Index (SCI) was released, Garfield (7) cautioned against “the possible promiscuous and careless use of quantitative citation data for sociological evaluations, including personnel and fellowship selection”. He was worried that “in the wrong hands it might be abused” (8). Wilsdon et al have also drawn attention to the pitfalls of the “blunt use of metrics such as journal impact factors, h-indices and grant income targets” (9).
Regrettably, Indian agencies are not only using impact factors and the h-index the wrong way, but also seem to have institutionalised such misuse. Many researchers and academic and research institutions are under the spell of impact factors and the h-index, and even peer-review committees are blindly using these metrics to rank scientists and institutions without understanding their limitations, which prompted Balaram to comment, “Scientists, as a community, often worry about bad science; they might do well to ask hard questions about bad scientometrics.” (10) To be fair, such misuse of metrics is not unique to India.
The ranking of universities by Times Higher Education, Quacquarelli Symonds, Academic Ranking of World Universities and at least half a dozen other agencies has only inflamed many a vice chancellor’s or director’s greed to improve their institution’s ranking by any means, ethics be damned. The advent of the National Institutional Ranking Framework (NIRF), an initiative of the Ministry of Human Resource Development (MHRD), has brought many institutions that would not have found a place in international rankings into the ranking game.
Of late, higher educational institutions in India, including the Indian Institute of Management, Bengaluru (11), have started to give monetary rewards to individual researchers who publish papers in journals with a high impact factor. Some institutions have extended this practice to the presentation of papers at conferences, the writing of books, and the obtaining of grants (12).
The regulatory and funding agencies in India use the IF and h-index in bizarre ways. As early as 1998, an editorial in Current Science commented: “Citation counts and journal impact factors were gaining importance in discussions on science and scientists in committee rooms across the country” (13). In another editorial, the Current Science editor lamented the use of poorly conceived indicators such as the “average impact factor” “for assessing science and scientists” (14). A progress report on science in India commissioned by the Principal Scientific Advisor to the Government of India published in 2005 used the average IF to compare the impact of Indian research published in foreign and local journals, the impact of work done in different institutions, the impact of contributions made to different fields, and to compare India’s performance with that of other countries (15).
Since then there have been five other reports on science and technology in India, two of them by British think tanks and three commissioned by the Department of Science and Technology (DST), Government of India.
We wonder why so many bibliometric projects were carried out to gather the same kind of data and insights. Also, why should one use journal impact factors instead of actual citation data in a study covering long periods? Impact factors are based on citations received within the first two years or less, and in virtually all cases the number of citations to articles published in a journal drops steeply after the initial two or three years (21).
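For reference, the JCR two-year impact factor of a journal for a given year is the number of citations received that year by items the journal published in the preceding two years, divided by the number of citable items (articles and reviews) published in those two years. A minimal sketch with made-up numbers illustrates how narrow this window is:

```python
def two_year_impact_factor(citations_received, citable_items):
    """JCR-style two-year impact factor for year Y:
    citations_received -- citations in Y to items published in Y-1 and Y-2
    citable_items      -- articles and reviews published in Y-1 and Y-2"""
    return citations_received / citable_items

# Hypothetical journal: 1200 citations in 2017 to its 400 papers of
# 2015-2016 gives IF(2017) = 3.0. Citations earned outside this
# two-year window never enter the calculation, which is why the IF
# says little about the long-term citation record of a journal's papers.
print(two_year_impact_factor(1200, 400))  # 3.0
```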
The Department of Science and Technology (DST) has claimed that a budget support of Rs 1.3 million per scientist provided in 2007 led to an IF aggregate of 6.6 per Rs 10 million budget support (22). While there is a positive relationship between research funding and knowledge production (measured by the number of publications and citations) (23), the following three issues must be considered:
Institutions under the DST such as the Indian Association for the Cultivation of Science (IACS) and the SN Bose National Centre for Basic Sciences set targets for the number of papers to be published, citations to those papers, citations per rupee invested, cumulative impact factor and institutional h-index in the 12th Five-Year Plan (2012–2017) (22). Aiming to publish a very large number of papers, earn a large number of citations and score high on the h-index even before the research is conducted is not the right way to go about research. Instead, institutions would do well to concentrate on the quality of research, originality and creativity. Besides, while writing papers may be in one’s hands, getting them accepted in journals is not, let alone ensuring a certain number of citations and predicting the h-index that such citations would lead to.
The DST also requires applicants for its prestigious Swarnajayanthi award to provide impact factors of their publications. It believes that the h-index is a measure of “both the scientific productivity and the apparent scientific impact of a researcher” and that the index can be “applied to judge the impact and productivity of a group of researchers at a department or university” (24). It has started assigning monetary value to citations, awarding incentive grants of up to Rs 300 million to universities on the basis of the h-index calculated using citation data from Scopus (24).
This seems illogical, as a university’s h-index may be largely dependent on the work of a small number of individuals or departments. For example, Jadavpur University’s strength lies predominantly in the fields of Computer Science and Automation, while Annamalai University is known for research in Chemistry and Banaras Hindu University for Chemistry, Materials and Metallurgy, and Physics. Is it justified to allocate funds to a university on the basis of the citations received by a few researchers in one or two departments? Or, should the bulk of the funds be allocated to the performing departments?
With regard to the use of metrics, the Department of Biotechnology (DBT) seems to follow an ambivalent policy. It does not use journal impact factors in programmes that it conducts in collaboration with international agencies such as the Wellcome Trust and the European Molecular Biology Organisation (EMBO). However, when it comes to its own programmes, it insists on getting impact factor details from researchers applying for grants and fellowships. The Wellcome Trust does not use impact factors or other numeric indices to judge the quality of work; it depends on multi-stage peer review and a final in-person interview. As an associate member of EMBO, the DBT calls for proposals for “EMBO Young Investigators”, wherein the applicants are advised “not to use journal-based metrics such as impact factor during the assessment process”. Indeed, the applicants are asked NOT to include these in their list of publications (25). In their joint Open Access Policy, the DST and DBT have stated that “DBT and DST do not recommend the use of journal impact factors either as a surrogate measure of the quality of individual research articles to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions” (26).
However, the DBT considers it a major achievement that about 400 papers published by researchers funded by it since 2006 had an average IF of 4–5 (27) and uses cumulative IF as a criterion in the selection of candidates for the Ramalingaswami Re-entry Fellowship, the Tata Innovation Fellowship, the Innovative Young Biotechnologist Award, and the National Bioscience Awards for Career Development, and in its programme to promote research excellence in the North-East Region.
A committee that evaluated the work of the Indian Council of Medical Research (ICMR) in 2014 thought it important to report that the more than 2800 research papers published by ICMR institutes had an average IF of 2.86, and that more than 1100 publications from extramural research had an average IF of 3.28 (28). The ICMR routinely uses average IF as a measure of performance of its laboratories.
The Council of Scientific and Industrial Research (CSIR) has used four different indicators, viz (i) the average IF; (ii) the impact energy, which is the sum of the squares of the impact factors of the journals in which papers are published; (iii) the energy index (C²/P, where P is the number of papers and C is the total number of citations), calculated on the basis of papers in the target window of the preceding five years and citations received in the census year; and (iv) the number of papers published per scientist in each laboratory (29, 30).
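Expressed as simple computations (a sketch with hypothetical inputs; the CSIR documents cited in (29, 30) specify the exact citation windows), the four indicators are:

```python
def average_if(journal_ifs):
    """(i) Average IF of the journals in which the papers appeared."""
    return sum(journal_ifs) / len(journal_ifs)

def impact_energy(journal_ifs):
    """(ii) Impact energy: sum of squares of the journal impact factors."""
    return sum(f * f for f in journal_ifs)

def energy_index(total_citations, num_papers):
    """(iii) Energy index: C^2/P, where P is the number of papers in the
    five-year target window and C the citations received in the census year."""
    return total_citations ** 2 / num_papers

def papers_per_scientist(num_papers, num_scientists):
    """(iv) Number of papers published per scientist in a laboratory."""
    return num_papers / num_scientists

# Hypothetical laboratory: 50 papers, 40 scientists, 300 citations in the
# census year, with the papers spread over journals of IF 2.1, 4.5 and 9.8.
ifs = [2.1] * 30 + [4.5] * 15 + [9.8] * 5
print(average_if(ifs), impact_energy(ifs),
      energy_index(300, 50), papers_per_scientist(50, 40))
```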
The University Grants Commission (UGC) uses the cumulative IF as the major criterion for granting Mid-Career Awards and Basic Science Research (BSR) Fellowships under its Faculty Research Promotion Scheme. The cumulative IF of the papers published by the applicant should be ≥ 30 for the Mid-Career Award and ≥ 50 for the BSR Fellowship (31). From 2017, the UGC also started demanding that institutions seeking grants provide the cumulative IF and h-index for papers published in the preceding five years at both the individual and institutional levels.
The National Assessment and Accreditation Council’s (NAAC) online form that institutions use to provide data (32) asks for, among other things:
As per the UGC (Minimum Qualifications for Appointment of Teachers and other Academic Staff in Universities and Colleges and Measures for the Maintenance of Standards in Higher Education) Regulations 2013 (2nd Amendment), an aspiring teacher in a university or college will be evaluated on the basis of her Academic Performance Indicator (API) score, which is based on her contribution to teaching, research publications, bringing in research projects, administration, etc. As far as research is concerned, one gets 15 points for every paper published in any refereed journal and 10 for every paper published in a “non-refereed (reputed) journal”. In July 2016, the UGC increased the score for publication in refereed journals to 25 through an amendment (33). The API score for papers in refereed journals would be augmented as follows: (i) indexed journals – by 5 points; (ii) papers with IF between 1 and 2 – by 10 points; (iii) papers with IF between 2 and 5 – by 15 points; (iv) papers with IF between 5 and 10 – by 25 points (34). The UGC does not define what it means by “indexed journals”; we presume it means journals indexed in databases such as Web of Science, Scopus and PubMed. Does this mean that papers published in journals like the Economic and Political Weekly, Indian Journal of Medical Ethics, and Leonardo, which do not have an IF, are worth nothing?
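Read literally, the amended rules make a paper’s API score a step function of the journal’s IF. The sketch below is one possible reading, with hypothetical handling of the band boundaries and of journals falling outside the listed bands; it is the step at an IF of 2 that matters for the discussion that follows.

```python
def api_points_refereed(journal_if=None, indexed=False):
    """Per-paper API points for a refereed-journal publication, as
    described in the text: a base of 25 points plus an IF-dependent bonus.
    Boundary handling here is an assumption; the regulation is silent on it."""
    base = 25
    if journal_if is not None:
        if 5 <= journal_if <= 10:
            return base + 25
        if 2 <= journal_if < 5:
            return base + 15
        if 1 <= journal_if < 2:
            return base + 10
    if indexed:
        return base + 5  # indexed journal without (or below) an IF band
    return base

# The same paper in a journal whose IF drifts from 1.999 to 2.001
# jumps from 35 to 40 points.
print(api_points_refereed(1.999), api_points_refereed(2.001))  # 35 40
```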
The calculation of the API on the basis of IF is inherently defective for the simple reason that the IF range varies from field to field. The IFs of mathematics, agriculture and social science journals are usually low and those of biomedical journals high (35). Also, the IF of a journal varies from year to year (Fig 1). The IF of 55% of journals increased in 2013, and that of 45% decreased (36). The IF of 49 journals changed by 3.0 or more between 2013 and 2014 (37). In some cases, the change is drastic. The IF of Acta Crystallographica Section A, for example, is around 1.5–2.5 most of the time, but it rose to 49.737 in 2009 and 54.066 in 2010 as seen from JCR – more than 20 times its usual value – because a 2008 paper reviewing the development of the computer system SHELX (38) was cited many thousands of times within a short span. As one would expect, in 2011 the IF of this journal dropped back to its usual level (1.728).
Let us see how using the journal IF affects the fortunes of a faculty member. Take the hypothetical case of a journal whose IF is around 2.000, say 1.999 or 2.001. No single paper or author is responsible for these numbers. If a couple of papers receive a few more citations than the average, the IF will be 2.001 or more and the candidate will get a higher rating; if a couple of papers receive fewer citations than the average, the IF will fall below 2.000 and the same paper reporting the same work will fetch the candidate a lower rating. Some journals, especially those published in developing and emerging countries, are indexed by JCR in some years but dropped later (39); eg the Asian-Australasian Journal of Animal Sciences and Cereal Research Communications were delisted in 2008, when they showed an unusually large increase in self-citations and a consequently inflated IF, but were reinstated in 2010. Some papers published in high-IF journals do not get cited very frequently, even though frequent citation is presumably what an agency wants to reward the author for. On the other hand, some papers published in low-IF journals are cited often within a short span of time. We found from InCites that 56 papers published during 2014–16 in 49 journals in the IF range 0.972–2.523 (JCR 2016) were cited at least 100 times as on November 2, 2017. Thus, depending on the journal IF for faculty evaluation may not be wise.
The UGC has also set rules for the allocation of API points to individual authors of multi-authored papers: “The first and principal / corresponding author / supervisor / mentor would share equally 70% of the total points and the remaining 30% would be shared equally by all other authors” (34). This clause effectively reduces the credit that should be given to the research student. Allocating differential credit to authors on the basis of their position in the byline can lead to problems. The authors of a paper would compete to be the first or corresponding author, destroying the spirit of collaboration. Besides, some papers carry a footnote saying that all authors have contributed equally; in some papers the names of the authors are listed in alphabetical order; while in others the order of names is by rotation. In addition, there are instances of tossing a coin to decide on the order of authors who contribute equally. Some papers are by a few authors, while others may have contributions from a few thousand authors (eg in high energy physics, astronomy and astrophysics). To cite an example of the latter, the article “Combined Measurement of the Higgs Boson Mass in pp Collisions at root s=7 and 8 TeV with the ATLAS and CMS Experiments” in Physical Review Letters had 5126 authors. Currently, some mathematicians from around the world have joined hands under the name Polymath to solve problems. Polymath is a crowdsourced project initiated by Tim Gowers of the University of Cambridge in 2009 and has so far published three papers, while nine more are in the pipeline.
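As an arithmetic illustration of the 70/30 rule quoted above (a sketch only; the regulation’s exact treatment of papers with several corresponding authors or supervisors may differ), consider a hypothetical five-author paper:

```python
def api_share(total_points, num_lead_authors, num_other_authors):
    """Split a paper's API points per the UGC rule quoted above: the
    first/principal/corresponding author(s) and supervisor/mentor share
    70% equally; all remaining authors share 30% equally."""
    lead_share = 0.7 * total_points / num_lead_authors
    other_share = (0.3 * total_points / num_other_authors
                   if num_other_authors else 0.0)
    return lead_share, other_share

# Hypothetical paper worth 40 points with a first author and a
# corresponding author (the supervisor) as the two "lead" authors:
# each lead gets 14 points, while each of the three remaining authors,
# including the research student who did the bench work if she is not
# listed first, gets only 4 points.
print(api_share(40, 2, 3))  # (14.0, 4.0)
```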
There are many so-called “refereed” and “reputed” journals in India that are substandard and predatory, so much so that India is considered to be the world’s capital for predatory journals. The publishers of these journals seduce researchers with offers of membership in editorial boards and at times add the names of accomplished researchers to their editorial boards without their consent. They also host dubious conferences and collect large sums from authors of papers. To keep such journals out of the evaluation process, the UGC decided to appoint a committee of experts to prepare a master list of journals. This Committee released a list of 38,653 journals (see UGC notification No.F.1-2/2016 (PS) Amendments dated 10 January 2017). According to Curry, this move is tantamount to an abdication of responsibility by the UGC in evaluating the work of Indian researchers (40). Ramani and Prasad have found 111 predatory journals in the UGC list (41). Pushkar suspects that many people might already have become teachers and deans in colleges and universities, and even vice chancellors, on the strength of substandard papers published in such dubious journals (42). Even researchers in well-known institutions have published in such journals (43).
Following the UGC, the All India Council for Technical Education has also started using API scores to evaluate aspiring teachers in universities and colleges (44).
The MHRD has recently introduced a “credit point” system for the promotion of faculty in the National Institutes of Technology (NIT) (45). Points can be acquired in any of 22 ways, including being a dean or the head of a department, being the first/corresponding author of a research paper published in a journal indexed in Scopus or Web of Science, or performing non-academic functions such as being a hostel warden or vigilance officer (45). As per the new regulations, no credit will accrue for publishing articles by paying article-processing charges (APC). According to the additional secretary for technical education (now secretary, higher education) at the MHRD (46), “non-consideration of publications in ‘paid journals’ for career advancement is a standard practice in IITs and other premium institutions, not only NITs.” This policy is commendable.
As per the latest version of the “Minimum qualifications of teachers in medical institutions” (47), candidates for the post of professor must have a minimum of four accepted/published research papers in an indexed/national journal as first or second author, of which at least two should have been published while they were associate professors; candidates for the post of associate professor must have a minimum of two accepted/published research papers in an indexed/national journal as first or second author. Unfortunately, the Medical Council of India (MCI) has left one to guess what it means by “indexed journals”, giving legitimacy to many predatory journals indexed in Index Copernicus, which many consider to be of doubtful veracity. A group of medical journal editors had advised against the inclusion of Index Copernicus as a standard indexing service (48), but the MCI did not heed the advice. The net result is the mushrooming of predatory journals claiming to be recognised by the MCI, to be indexed in Index Copernicus, and to have an IF far above those of standard journals in the same field. Many faculty members in medical colleges across the country, who find peer review an insurmountable barrier, find it easy to publish their papers in these journals (49), often using taxpayers’ money to pay APCs, and thus meet the requirement for promotion, never mind if ethics is jettisoned along the way.
Like the UGC, the ICMR is also assigning credits for publications in “indexed journals”, and these credits depend on the IF of the journals. In addition, an author gets credits for the number of times her publications are cited. The credits an author gets for a paper depend on her position in the byline as well (50).
The National Academy of Agricultural Sciences (NAAS) has been following an unacceptable practice in the selection of fellows. Like many other agencies, it calculates the cumulative IF, but as many of the journals in which agricultural researchers publish are not indexed in the Web of Science and hence not assigned an IF, NAAS assigns them IFs on its own. What is more, it has arbitrarily capped the IF of journals indexed in the Web of Science at 20, even if JCR has assigned a much higher value (51). The absurdity of this step can be seen by comparing the NAAS-assigned cap of 20 with the actual 2016 IFs of Nature (>40), Science (>37) and Cell (>30) assigned by JCR. Similarly, the Annual Review of Plant Biology had an IF of 18.712 in 2007, which rose to 28.415 in 2010; yet the NAAS rating of this journal recorded a decrease of four points between the two years. This highlights the need for transparency in the evaluation process. The Faculty of Agriculture, Banaras Hindu University, uses only the much-flawed NAAS journal ratings for the selection of faculty (52).
If the rating of journals by the NAAS is arbitrary, the criteria adopted by the Indian Council of Agricultural Research (ICAR) for the recruitment and promotion of researchers and teachers are even more so. If a journal has not been assigned a rating by the NAAS, it is rated arbitrarily by a screening committee empowered by the ICAR (53).
Clearly such policies not only help breed poor scholarship, but also encourage predatory and substandard journals. The scenario is becoming so bad that India could even apply for the “Bad Metrics” award! (See: https://responsiblemetrics.org/bad-metrics/).
In recent years, academic social networks such as ResearchGate have become popular among researchers around the world, and many researchers flaunt their ResearchGate score as some journals flaunt their IF on the cover page. While ResearchGate undoubtedly helps one follow the work of peers and share ideas (54), the ResearchGate score, which appears to be based on the number of downloads and views, number of questions asked and answered, and number of researchers one follows and one is followed by, is considered a bad metric (55).
Research is a multifaceted enterprise undertaken in different kinds of institutions by different types of researchers. In addition, there is a large variation in publishing and citing practices in different fields. Given such diversity, it is unrealistic to expect to reduce the evaluation of research to simple measures such as the number of papers, journal impact factors and author h-indices. Unfortunately, we have allowed such measures to influence our decisions.
Consider Peter Higgs, who published just 27 papers in a career spanning 57 years (See: http://www.ph.ed.ac.uk/higgs/peter-higgs), with the interval between two papers often being five years or more. Were he to be judged by the UGC’s standards during one of the several 5-year periods in which he did not publish a paper, he would have been rated a poor performer! Yet the world honoured him with a Nobel Prize. Ed Lewis, the 1995 physiology/medicine Nobel laureate, was another rare and irregular publisher with a very low h-index (56). Laying undue emphasis on the number of publications and bibliometric measures might lead scientists to write several smaller papers rather than one (or a few) substantive paper(s) (23). As Bruce Alberts says, the automated evaluation of a researcher’s quality will lead to “a strong disincentive to pursue risky and potentially groundbreaking work, because it takes years to create a new approach in a new experimental context, during which no publications should be expected” (57: p 13).
In addition, any evaluation process based on metrics is liable to gaming. When the number of papers published is given weightage, the tendency is to publish as many papers as possible without regard for their quality. This is what happened with the Research Excellence Framework (REF) exercise in the UK. The numbers of papers published before and after the REF deadline of 2007 differed by more than 35%, and the papers published during the year preceding the deadline received 12% fewer citations (58).
Many articles have been written on the misuse of the IF. Its persistent misuse in research assessment led scientists and editors to formulate the Declaration on Research Assessment (DORA) (59), which recognises the need to eliminate the use of journal-based metrics, such as the journal IF, in funding, appointment and promotion decisions and in the assessment of research, and recommends the use of article-level metrics instead.
As early as 1983, Garfield said that citation analysis was not everything and that it “should be treated as only one more, though distinctive, indicator of the candidate’s influence” (60). In his view, it helps to increase objectivity and the depth of analysis. He had drawn attention to the flaws in peer review, quoting the experiences and views of many (60). Sociologist Merton had pointed out that faculty evaluation letters could be tricky since there was no methodical way of assessing and comparing the estimates provided by different evaluators as their personal scales of judgement could vary widely (60).
The selection and promotion of faculty worldwide is more or less based on metrics and peer review, with their relative importance depending on the costs involved and the academic traditions. Lord Nicholas Stern said in his report to the UK’s Minister of Universities and Science: “the UK and New Zealand rely close-to-uniquely on peer review, whilst Belgium, Denmark, Finland and Norway use bibliometrics for the assessment of research quality… Internationally, there is a trend towards the use of bibliometrics and simple indicators” (61).
According to Zare (62), references are far more important than metrics when evaluating researchers for academic positions. A faculty member in the Stanford University Department of Chemistry is not judged simply by the tenured faculty of the department, but by the views of 10–15 experts outside the department, both national and international, on “whether the research of the candidate has changed the community’s view of the nature of chemistry in a positive way” (62). In contrast, Zare feels that in India too much emphasis is placed on things such as the number of publications, the h-index and the order of names in the byline in assessing the value of an individual researcher (63). There are exceptions though. The National Centre for Biological Sciences, Bengaluru, gives great importance to peer review and the quality of research publications in the selection of faculty and in granting them tenure (64). The Indian Institute of Science also gives considerable weightage to peer opinion in the selection of new faculty and in granting tenure. As pointed out by the Research Excellence Framework (REF) review (61), systems that rely entirely on metrics are generally less expensive and less compliance-heavy than systems that use peer review. In India, with some 800 universities and 37,000 colleges, the cost of peer review for nationwide performance assessments would be prohibitive. However, the way forward would be to introduce the tenure system with the participation of external referees in all universities (as practised at the Indian Institute of Science and Stanford University).
Research has to be evaluated for rigour, originality and significance, and that cannot be done in a routine manner. Evaluation could become more meaningful with a shift in values from scientific productivity to scientific originality and creativity (65), but the funding and education systems seem to discourage originality and curiosity (65).
Research councils and universities need to undertake a radical reform in research evaluation (66). When hiring new faculty members, institutions ought to look not only at the publication record (or other metrics) but also at whether the candidate has really contributed something original and important in her field (65). When research proposals are evaluated, originality and creativity should be considered rather than feasibility, and greater emphasis should be laid on previous achievements than on the proposed work (63). “The best predictor of the quality of science that a given scientist will produce in the near future is the quality of the scientific work accomplished during the preceding few years. It is rarely that a scientist who continually does excellent science suddenly produces uninteresting work, and conversely, someone producing dull science who suddenly moves into exciting research” (65).
There needs to be greater transparency and accountability. Even in the West, there is a perception that academia today suffers from centralised top-down management, increasingly bureaucratic procedures, teaching according to a prescribed formula, and research driven by assessment and performance targets (67). The NIRF exercise currently being promoted might lead to research driven by assessment and performance targets in the same way that the REF exercise in Britain did. One may go through such exercises so long as one does not take them very seriously and uses the results as a general starting point for in-depth discussions based on details, not as the bottom line on which decisions are based [personal communication from E D Jemmis].
Unfortunately, in India the process of accreditation of institutions has become corrupt over the years and academic autonomy has eroded (68). Indeed, even the appointments of vice chancellors and faculty are mired in corruption (69, 70, 71), and the choice of vice chancellors and directors of IITs and IISERs is “not left to academics themselves but directed by political calculations” (72). “If you can do that (demonetize), I don’t see any difficulty in (taking action) in higher education and research. The most important thing is to immediately do something about the regulatory bodies in higher education, the UGC, AICTE and NAAC,” says Balaram (68). According to him (68), a complete revamp and depoliticisation of the three crucial bodies is a must, as there needs to be some level of professionalism in education. He opines that, unlike the NDA1 and UPA1 governments, the current government appears to be “somewhat disinterested in the area of higher education and research”.
The blame does not lie with the tools, but with their users in academia and the agencies that govern and oversee academic institutions and research. They are neither well-informed about how to use the tools, nor willing to listen to those who are. Given these circumstances, the answer to the question in the title cannot be anything but “No.”
*Note
The views expressed here are those of the authors and not of the institutions to which they belong.
We are grateful to Prof. Satyajit Mayor of the National Centre for Biological Sciences, Prof. Dipankar Chatterjee, Department of Biophysics, Indian Institute of Science, Prof. N V Joshi, Centre for Ecological Sciences, Indian Institute of Science, Prof. T A Abinandanan, Department of Materials Engineering, Indian Institute of Science, Prof. E D Jemmis, Department of Inorganic and Physical Chemistry, Indian Institute of Science, Prof. Vijaya Baskar of Madras Institute of Development Studies, Prof. Sundar Sarukkai, National Institute of Advanced Studies, and Mr. Vellesh Narayanan of i-Loads, Chennai, for their useful inputs. We are indebted to the two referees for their insightful comments.