Wednesday, October 04, 2006

Is the pursuit of evidence unethical?

Probably every undergraduate who endured a lecture about Karl Popper's philosophy of science got the point that science is not about attaining absolutely certain knowledge. There are no certainties. There are tentative theories which may be better than others if they have greater predictive power, can explain more, and survive current attempts to refute them. But they may also turn out to be wrong, and be replaced or absorbed by different (tentative) theories. So the humble march of science goes on.

In medicine and public health, this conception of science has important consequences. We want medical and public health interventions to be based on 'good science.' But since certainty about their effectiveness is unattainable, we are left with the tricky issue of deciding when the scientific evidence has reached a point that justifies stopping doing health research and starting to put our knowledge into health policy and practice.

In the current online issue of the British Medical Journal, Malcolm Potts et al. have apparently had enough. They give vivid examples of a case where health authorities did not wait for randomized controlled trials before taking action (the use of oral rehydration therapy for childhood diarrhoea); a case where we are still waiting for the results of two randomized controlled trials, despite decades of accumulated evidence, including an RCT (the use of male circumcision to help protect against HIV infection); and a case where a low-cost drug with an excellent safety profile and long shelf life is not being used in developing countries for fear of low effectiveness (the use of misoprostol for postpartum haemorrhage).

According to Potts et al., observational studies and clinical experience sometimes offer a good enough evidence base to justify health policy. Let us not, they argue, make the perfect the enemy of the good; let us start making people healthier on the basis of what we already know. This sounds like a fine 'let's-roll-up-our-sleeves-and-get-on-with-it' attitude, though it leaves open the issue of who decides what counts as good enough evidence, and the final paragraph may not be to everyone's liking:

Good science, we suggest, is taking the research to the problem rather than conducting the research in the tallest ivory tower the investigator can find. Randomised controlled trials are needed and, when appropriate, should be part of the empirical evidence necessary for decision making. The question is how much evidence is needed to move from research to practice, when the matter is life saving interventions in poor settings. The yardstick for decision making should take into account the risks and benefits in the local conditions, not those of an ideal situation.

What does the reference to 'poor settings' mean? Does it mean that there are double standards for what science counts as good enough to support health policies -- one for the developed world, another for the developing world? The scientific bar is being set lower in order to provide immediate help to people suffering disproportionately in resource-poor countries. The assumption is that this will do more good than harm. But that, frankly, we can't be sure of. RCTs have shown in the past that some 'good enough' health policies and interventions were not much good at all.

Potts et al. are asking researchers and policy makers to take greater moral risks in regard to patients in the developing world -- but it is the latter who will pay if things turn out for the worse.
