Date of Publication: October 01, 2013
DOI: https://doi.org/10.20529/IJME.2013.075

DISCUSSIONS

Screening for cervical cancer revisited: understanding implementation research

Ruth Macklin

In the editorial “Ethics of ‘standard care’ in randomised controlled trials of screening for cervical cancer” (1), Sandhya Srinivasan argues persuasively that a series of placebo-controlled trials on screening for cervical cancer in India were unethical. The purported aim of the trials was to study the method that uses visual inspection of the cervix following staining with acetic acid (VIA), to determine the efficacy of the method in a low-resource setting. Srinivasan notes: “The researchers in these trials have argued that only a ‘no care’ control arm can give definitive results and this information is essential to guide policies and programmes….VIA has been researched at least since the early 1990s. VIA is an affordable screening test, and there is evidence suggesting that it works about as well as the Pap smear” (1:p149). The author also identifies the design of the research as cluster randomised trials: “The trials actively denied care, by comparing – as intervention and control groups – entire clusters of urban wards or rural primary health centres, rather than individuals, ensuring that women in the control groups would not somehow gain access to the interventions” (1:p148).

Several issues need to be sorted out to clarify what is at stake here. First, one must determine exactly what is wrong with the researchers’ defence of the placebo-controlled design of the study. Second, one must identify just what type of study is needed in low-resource settings such as India. Finally, there is a need to assess the ethical acceptability of cluster randomised trials.

The researchers’ defence

It is simply not true that “only a ‘no care’ control arm can give definitive results.” Although the randomised controlled trial is the “gold standard” in clinical research methodology, this does not mean that the control arm must be a placebo. In settings in which the standard diagnostic method is a proven intervention and researchers want to test a new method, or even a less expensive one, it would be unethical to withhold the proven diagnostic method from the participants. The research design would then be a non-inferiority trial, which tests the experimental procedure against the proven intervention to see whether the former is as good (or almost as good) as the latter. That is a perfectly acceptable research design, although it would involve more research subjects and take longer than a placebo-controlled trial. The idea that it is ethically acceptable to design a study in resource-poor settings in which the participants do not have access to a proven diagnostic method outside the trial is flawed. If researchers in India wanted to study VIA to determine whether it is as good (or almost as good) as the Pap smear, they could do so in a tertiary care setting which has the equipment and trained personnel to allow for the routine use of the cytology-based screening method. Using the existing baseline data on the incidence of cervical cancer in India, the efficacy of the experimental method (VIA) could then be ascertained.

This brings us to another flaw in the researchers’ defence. The efficacy of VIA was already well established. According to a World Health Organisation (WHO) consultation report in 2002, “The test performance of VIA suggests that it has similar sensitivity to that of cervical cytology in detecting CIN, but has lower specificity. Further research is required to improve its specificity without compromising sensitivity” (2). The WHO report also pointed out the need to train personnel in the use of the method and to develop standard procedures for quality control. Also needed at the time was research on a simple scoring system for objectively reporting the results of VIA. However, the important point is that the efficacy of VIA as a screening method had already been established when these trials were conducted in India.
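
To make the contrast concrete, the sketch below (in Python, using invented counts; nothing here is data from the trials under discussion) shows the comparison a non-inferiority design performs: the new test is judged against the proven comparator with a pre-specified margin, not against “no care.”

```python
# A minimal sketch of a non-inferiority comparison for a screening test,
# with hypothetical numbers. A real trial would use formal hypothesis tests
# and confidence intervals, not a point comparison.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of true disease cases the test detects."""
    return true_pos / (true_pos + false_neg)

def non_inferior(new_sens: float, ref_sens: float, margin: float) -> bool:
    """New test is acceptable if its sensitivity falls within `margin`
    of the proven reference test's sensitivity."""
    return new_sens >= ref_sens - margin

# Hypothetical results against a gold-standard diagnosis (e.g. biopsy).
via_sens = sensitivity(true_pos=77, false_neg=23)   # 0.77
pap_sens = sensitivity(true_pos=80, false_neg=20)   # 0.80

# With a pre-specified margin of 0.05, VIA would be judged non-inferior here.
print(non_inferior(via_sens, pap_sens, margin=0.05))  # True
```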

What type of research is needed?

This brings us to the second issue: just what type of study is needed in low-resource settings, such as those in many parts of India? What was needed was not an efficacy study of VIA, but a study of its implementation in a new setting. This type of study, known as implementation research, is carried out frequently in low-resource settings in which the training of personnel in the use of new equipment or techniques must be studied. The WHO defines this type of research as follows: “…that area of research devoted to understanding the bottlenecks around introduction and scaling up implementation of a proven public health intervention and finding practical solutions to overcome such barriers or constraints” (3). Although some authors agree with the part of the definition stipulating that the intervention should already have been proven to be efficacious (4), the authors of a series of papers on cluster randomised trials reject it. They write: “The fact that an educational or quality improvement intervention is being evaluated in a CRT suggests that its effectiveness is unproven. Indeed, if it was known at the start of the trial that the study intervention is effective, the CRT would be unethical” (5:p11). So, a great deal hinges on whether VIA should be considered a “proven intervention.” As Srinivasan argues: “By comparing the impact of the interventions with that of no treatment, they also violated the principle of equipoise on which such studies should be based, even though there is sufficient evidence that some screening is better than none” (1:p148).

Although randomised controlled trials remain the gold standard in clinical research methodology, other approaches in implementation research need not run into the ethical problem of placebo controls. One option is to use historical controls. This method is generally considered inferior in trials that seek to prove the efficacy of a new intervention, because the historical controls may not fully match the subjects receiving the intervention. However, implementation research does not study the efficacy of a new intervention; rather, it studies the ability to employ the proven intervention properly by training personnel in the use of unfamiliar techniques or equipment. A study design that compares cancer rates following the introduction of VIA in urban wards or rural primary health centres with the past rates of cancer among women who used the same health facilities before VIA was introduced could provide results demonstrating that the implementation of the new technique was successful.

Another method – one favoured by WHO as an alternative to implementation research – is demonstration projects. WHO conducted a demonstration project on VIA in six African countries: Malawi, Madagascar, Nigeria, Uganda, the United Republic of Tanzania, and Zambia. The report on the project, which ran from 2005 to 2009, says that all women were counselled and offered screening using VIA, and patients with a positive screening test were treated using cryotherapy (6). The report concludes: “This demonstration project has shown that the ‘screen and treat’ approach can be introduced into existing reproductive health services in low-resource countries. Screening for precancerous lesions using VIA and treatment with cryotherapy is acceptable and feasible at low-level health facilities in six African countries” (6). WHO initiated this demonstration project in 2005, while two of the placebo-controlled studies on VIA described by Srinivasan continued until 2006 and 2007. Whereas WHO considered VIA to be a proven intervention, ready for a demonstration project in six African countries, the researchers in India apparently believed it was necessary to demonstrate the efficacy of the technique against placebo.
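
As an illustration of the historical-control approach described above, the following sketch (again in Python, with hypothetical counts) compares detection rates at the same facilities before and after VIA is introduced; a real analysis would also need to account for secular trends and changes in the screened population.

```python
# A minimal sketch of a before/after comparison against historical controls,
# using invented counts: lesions detected per women screened at the same
# health centres, before and after personnel were trained in VIA.

from math import sqrt

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """z statistic for the difference between two proportions (pooled)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(x1=120, n1=10_000,   # after VIA implementation
                     x2=60,  n2=10_000)   # historical baseline
print(round(z, 2))  # ~4.49: detection rose markedly after VIA was introduced
```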

Cluster randomised trials

Srinivasan does not discuss this research methodology at length, but does describe it, as quoted earlier. While the description is not exactly a condemnation of cluster randomised trials, it implies that they are ethically suspect because they “actively denied care” and prevented women from learning about and gaining access to VIA screening. There is nothing inherently suspect about cluster randomised trials. It is true that one motivation for using this methodology is to prevent contamination between intervention and control groups, a problem which could occur when individuals rather than healthcare facilities are randomised. This methodology is especially useful in implementation research in developing countries, for example to train an entire unit of physicians or nurses in a technique that is new to them but has been proven effective elsewhere. It would be difficult, if not impossible, to obtain accurate results if individuals were randomised rather than clusters. What makes the VIA cluster randomised trials in India unethical is not the randomisation method, but the fact that the control group did not receive screening for cervical cancer by VIA or any other method.

Srinivasan rightly concludes with the observation that “there are many other issues that deserve discussion in these and other trials looking at public health interventions in resource-poor settings” (1:p149). One critical issue is the need to distinguish clinical trials designed to study the safety and efficacy of new pharmaceutical products from those that study the implementation of interventions already proven to be efficacious in other settings. Implementation research is a useful pathway for introducing and scaling up beneficial proven public health interventions in resource-poor settings. However, it is a mistake to contend that placebo controls, which may be appropriate in phase III efficacy studies of new drugs or techniques, are appropriate, or even ethical, in efforts to study the implementation of proven techniques in resource-poor settings.
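
For readers unfamiliar with the methodology discussed above, the sketch below (illustrative values only) shows the two features that distinguish a cluster randomised design: whole facilities, not individuals, are randomised, and the correlation of outcomes within a cluster inflates the required sample size by the “design effect.”

```python
# A minimal sketch of cluster randomisation, with assumed cluster sizes and
# intracluster correlation (ICC). Randomising whole health centres avoids
# contamination between arms, at a known statistical cost.

import random

def assign_clusters(clusters: list[str], seed: int = 42) -> dict[str, str]:
    """Randomise whole clusters (e.g. primary health centres), not individuals."""
    rng = random.Random(seed)
    shuffled = rng.sample(clusters, k=len(clusters))
    half = len(shuffled) // 2
    return {c: ("intervention" if i < half else "control")
            for i, c in enumerate(shuffled)}

def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation relative to randomising individuals."""
    return 1 + (cluster_size - 1) * icc

print(assign_clusters([f"PHC-{i}" for i in range(1, 9)]))
# With 500 women per centre and a modest ICC of 0.01, each participant
# counts for far less than one independent observation:
print(design_effect(cluster_size=500, icc=0.01))  # ≈ 5.99
```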

References

  1. Srinivasan S. Ethics of ‘standard care’ in randomised controlled trials of screening for cervical cancer. Indian J Med Ethics. 2013 Jul–Sep;10(3):147–9.
  2. World Health Organization. Cervical cancer screening in developing countries: report of a WHO consultation [Internet]. Geneva: WHO; 2002 [cited 2013 Sep 3]. Available from: http://whqlibdoc.who.int/publications/2002/9241545720.pdf.
  3. World Health Organization. Implementation research in immunization. What is implementation research? [Internet]. Geneva: WHO; 2013 [cited 2013 Sep 3]. Available from: http://www.who.int/vaccine_research/implementation/en/.
  4. Remme JHF, Adam T, Becerra-Posada F, D’Arcangues C, Devlin M, Gardner C, Ghaffar A, Hombach J, Kengeya JFK, Mbewu A, Mbizvo MT, Mirza Z, Pang T, Ridley RG, Ziker R, Terry RF. Defining research to improve health systems [Internet]. PLoS Med. 2010 Nov 16 [cited 2013 Sep 3];7(11):e1001000. Available from: http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.1001000.
  5. McRae AD, Weijer C, Binik A, Grimshaw JM, Boruch R, Brehaut JC, Donner A, Eccles MP, Saginur R, White A, Taljaard M. When is informed consent required in cluster randomised trials in health research? [Internet]. Trials. 2011 Sep 9 [cited 2013 Sep 3];12:202. doi:10.1186/1745-6215-12-202. Available from: http://www.trialsjournal.com/content/12/1/202.
  6. World Health Organization. Prevention of cervical cancer through screening using visual inspection with acetic acid (VIA) and treatment with cryotherapy [Internet]. Geneva: WHO; 2012 [cited 2013 Sep 3]. Available from: http://apps.who.int/iris/bitstream/10665/75250/1/9789241503860_eng.pdf.
About the Author

Ruth Macklin, Professor of Bioethics, Department of Epidemiology and Population Health, Albert Einstein College of Medicine, 1300 Morris Park Avenue, Bronx, NY 10461