Citations, impact factors and shady publication practices: how should the lasting clinical and social value of research really be measured?
Readers of this journal will be aware of the increasing importance of publication outputs, both for individual researchers and clinicians and for the academic or clinical institutions with which they are affiliated. There is increasing concern, however, that many of the bibliometric analyses used to measure publication outputs, or the ‘research excellence’ defined by such means, are being applied in ways for which they were never intended (Ketefian & Comerford-Freda 2009). I am certainly receiving more requests from academic heads of department to verify the impact factor of this journal so that an author (or authors) can proceed to full academic tenure, or receive some other reward, primarily on the basis of their publications in this or other peer-reviewed journals.
There is a danger, however, that unquestioning subservience to the citation or impact factor ‘game’ within the research community is changing the nature of academic scholarship, and that an overemphasis on such measurements may lead to new forms of selection, publication or citation bias within the peer-reviewed journals which represent the scientific discourse of any given community of clinicians, researchers, academics and scholars. The growing hegemony of publication outputs as the means to determine the true worth of an individual, a department or an entire institution is both surprising and alarming, since it clearly encourages a number of well-known abuses within the publication process. These include the temptation to engage in unwarranted guest authorship, excessive self-citation, the salami-slicing of data which are better published together, and the entry of multiple redundant publications into a peer review process which is already creaking at the seams as the number of scientific journals and manuscript submissions grows exponentially.
As a journal editor, I have certainly encountered attempts to engage in some of these practices, and I know from conversations with other editors that this is not an uncommon experience; but how many such attempts go unrecognised and are therefore successful within the peer review process? My own instinct, based admittedly upon anecdotal experience as the editor of just one journal, suggests that these practices may be more prevalent in those countries where such measures are used to excess in calculating the worth of the work of individual researchers, departments or institutions, and I am not convinced that individuals are entirely to blame for this phenomenon. There may, of course, be poor understanding or a lack of awareness about the unacceptability of such practices, although specific guidance has been published by a number of authorities and caveats have been raised in the past by the editors of other journals (Norman & Griffiths 2008). The fault, however, is both systemic and individual, and it is surely time to take a more radical stance on the subject, since no individual working in clinical research or academic environments is immune from the pressure to meet such targets, and no scientific journal (regardless of its impact factor) is immune to the logical consequences of this (Choi et al. 2009), whether in terms of the growing number of manuscripts submitted for peer review or the somewhat less than admirable practices designed to guarantee their publication or citation (Greenberg 2009).
Greenberg (2009) suggests that such developments may be responsible for changing the face of academic scholarship, since they encourage the distortion, amplification, diversion and transmutation of research findings: for instance, the reporting of secondary data or analyses from literature reviews as if they were the results of primary research, or the uncritical and often incorrect citation of findings which do not bear scrutiny, or the test of time, in relation to their supposed significance. This does little to serve the scientific community, or those we care for, in the long term, particularly since we know that citation ratings are not infallible measures of the social or clinical worth of any scientific study and offer no guarantee that previous findings cited in a paper have actually been read or reported correctly (Stordal 2009). Moreover, the impact factor of the journal in which a manuscript is published would appear to have less effect upon the citation chances of an individual paper than most authors apparently believe. Leimu and Koricheva (2005), for instance, argue that even in the rapidly expanding field of ecology, the so-called ‘journal effect’ hypothesis on a paper's citation chances is unfounded, given the considerable variation in the citation rates of papers published in all peer-reviewed journals and, in particular, in those with some of the highest impact factors.
The question remains, then: why are institutions, managers and individual researchers so concerned to publish in journals with the highest impact factors when the clinical impact of their work may be greater if published elsewhere, and why are the education or science ministries of national governments so obsessed with inter-institutional or international citation ratings within their higher education and health service delivery sectors? Citation rates and impact factors do not necessarily equate with the uptake of research findings by clinical practitioners or, indeed, with significant advances in the field in which the research takes place (Aksnes & Rip 2009; Ketefian & Comerford-Freda 2009), although a small number of individual studies and many larger research programmes do, of course, achieve both. This is of particular importance to the oncology community, since cancer research is frequently one of the largest (if not the largest) areas of clinical research within any academic or clinical institution, certainly in the Netherlands and increasingly in other European countries as well. It is perhaps time, then, to ask ourselves whether these are the only reliable and valid measurements of a research project or programme's true worth.
In another of their recent editorials, Norman and Griffiths (2009) asked the authors of scientific papers to consider the question ‘to whom are we writing?’ and challenged the assumption that the primary answer is, or should necessarily be, ‘to other scientists’, as Beckstead (2009) argues in the same issue of their journal. A more fundamental question which needs to be addressed, however, and one which cannot be answered by a series of simplistic or reductionist bibliometric analyses, is surely ‘why are we writing?’ If the answer to that question is ‘to improve the experience of cancer patient care’, then one obvious response is ‘how do we know that this is actually being achieved?’ The answer to such a question is rather more difficult to ascertain, and it would seem that relatively few have seriously sought to measure the clinical or social impact, as opposed to the citation impact, of research findings published within any scientific field of inquiry. If we did so, we might well find that very different criteria are needed to measure the true societal impact of such work, and the focus of our research endeavours might, as a consequence, be very different.
It would be unwise to assume on this basis that information on article downloads provides a better indicator of a manuscript's clinical or social value than citation rates alone, but it is somewhat reassuring to find that no fewer than 7 of the 15 most downloaded articles from this journal in 2009 report the findings of studies designed to evaluate new or novel care interventions for cancer patients, given that the paucity of such research has frequently been commented upon (Rahm Hallberg 2006). These include the use of art therapy to improve quality of life in women undergoing treatment for breast cancer (Svensk et al. 2009), self-management strategies for older patients with cancer pain (Chatwin et al. 2009) and the provision of a domiciliary occupational therapy service for patients in one community healthcare trust (Kealey & McIntyre 2005). Five of the remainder look specifically at patient or carer experiences of service delivery or treatment, including the patient experience of chemotherapy-related nausea (Molassiotis et al. 2008), quality of life among patients with primary, metastatic and recurrent cancer (Siddiqi et al. 2009), and cancer patients' views on cancer rehabilitation and quality of life (Korstjens et al. 2008).
It is unlikely that many of these studies will have attracted large sums of research funding, and some may never find their way into the reference lists of others' published work. However, the fact that someone, somewhere is downloading such papers in relatively large numbers is extremely encouraging, since it suggests that the need to better understand the complexities of the cancer patient experience, and to improve the quality of life, survivorship and rehabilitation of such patients by relatively simple, non-technological means, is of real concern to those who read this journal. In contrast to the view that peer-reviewed journals exist primarily to serve the needs of those engaged in the conduct of scientific research (Beckstead 2009), I am rather more concerned that this journal addresses the very real needs of cancer patients and those who care for them, and I suspect, on the basis of the papers downloaded from the journal, that much of its readership feels the same way. In the absence of evidence to the contrary, I am therefore relatively content that the journal is making a real and lasting impact not only on the development of knowledge within the field of cancer care, but also on the lives of the many millions of people affected by cancer each year; and for me at least, that is all the satisfaction I need. Perhaps I am too naïve or utopian in believing that others might feel the same way?