Wednesday, February 03, 2016
Watching the retractions
Recently, the tables were turned: a bioethics article was retracted. In fact, it was an article that this blog mentioned back in 2015. Chattopadhyay et al.'s "Imperialism in bioethics: how policies of profit negate engagement of developing world bioethicists and undermine global bioethics" looked at online journal access, and concluded that a number of bioethics journals were inaccessible to middle- and low-income researchers via prominent open-access initiatives (WHO's HINARI, PubMed Central). These broad claims were factually incorrect. You could call this the predictable consequence of the 'empirical turn' in bioethics: if you emulate empirical methods, and generate empirical data to support ethical arguments, you are open to retraction when the facts aren't right. So be it.
The discussion and comments on the case at Retraction Watch are worth taking a look at. For my part, I sympathize with the general claim that those in developing countries face serious challenges entering the bioethical 'conversation of mankind.' The roots of the problem likely run deeper than open access: you may not have good English, or access to computers, or computers with reliable internet; there may be no hint of 'bioethics' in your educational institutions; a burning interest in bioethics may make you an economic train wreck; and so on. To make these sorts of claims stick, more empirical research is needed. As it should be.
For my part, I have decided not to retract my blog piece on the original article, but instead direct the readers (all five of them) to this post and hence back to Retraction Watch. Lesson learned: you can't assume that peer reviewers have thoroughly checked the methodology of an article, and when the conclusions of an article cohere strongly with your own experiences, look out.
Thursday, December 17, 2015
Triage with terrorists
The prior guidance on triage in such situations seemed to be influenced by rabbinic principles to the tune of 'charity begins at home'. On that approach, you treat your own injured people first, and only those who are 'other than your own' or 'opposed to your own' afterwards. The new guidance removed that reference, rendering it more cosmopolitan: ethnic/national/perpetrator/victim distinctions become irrelevant, and suffering humans in such situations are to be treated by physicians purely according to medical criteria. Opponents of the change find it outrageous that a terrorist could potentially be treated ahead of one of his/her less severely injured victims.
Of course, the larger background is the longstanding Israeli-Palestinian conflict, including who gets called a terrorist when civilians are put in harm's way or killed to further political aims, and who does not. But even leaving that to one side, the old position on triage was already controversial. The 'charity begins at home' approach turns the physician into an instrument of (certain currents within) the Israeli state, where doctors are instructed to perform political triage with medical resources. This approach may not even be wise politically, given that dead people are harder to gain information from, and that it implies that IDF members should receive the same (non-)treatment from Palestinian physicians in casualty situations. It would also seem to imply that Israeli physicians should treat even the most minor physical injury of 'one's own' above the injuries of the one(s) who caused the harm, no matter how severe. That implication would undoubtedly appeal to angry posters in comments sections, some of whom say that injured terrorists should simply die, and doctors on the scene should not prevent, or perhaps should even hasten, their death. One can understand the rage evoked by the killing of innocent civilians, but what kind of doctor does that?
In any case, the IMA is responsible for clarifying its current position and its ethical rationale. It will also need to state how medical professionals will be protected on the scene if they are to follow any new cosmopolitan guidelines, considering how violently some are opposed to them.
Friday, December 11, 2015
Research ethics during medical disruption
A new publication in the Journal of Medical Ethics by House et al. is therefore very welcome, because it covers some neglected ground. In the rare case that bioethicists discuss ethical challenges within politically unstable contexts, they tend to concentrate on the reliable delivery of health care. Instead, this article focuses on the conduct of health research when social life gets gnarly, and more specifically when medical services are disrupted, based on the authors' experiences in Kenya. The authors make a useful three-way distinction between the ethics of not starting research, stopping it once it has started, and keeping on going in the face of communal strife.
The authors argue that the ethics of not starting research, and continuing it once it has started, are different. If the political upheaval is so disruptive that ethical standards of research cannot be upheld, research should wait. But an ongoing study may involve serious commitments and expectations, a relationship of trust between researchers and communities, and research participants may benefit from research-related interventions. Stopping an ongoing study requires deliberation with the local community and a careful collaborative weighing of options and trade-offs.
One shortcoming of the discussion is its strong focus on clinical, biomedical research, where data collection is closely bound up with the provision of health care. Not all research one can imagine during a political crisis is like that. Anthropologists and political scientists -- who unlike physician-researchers do not have a role-related duty to care for patients -- may in fact jump at the chance to study what goes on during periods of political turmoil, and it is not clear that the biomedical framework of House et al. captures the kinds of challenges they might face, or that their recommendations are applicable to them.
Connecting the recent Ebola crisis to this article reveals a certain tension. According to House et al., would research during the highly disruptive Ebola crisis be permissible or not? The answer seems to be: yes and no. At some points, House et al. rule such research out as unethical: "While research has the potential to benefit the health of populations, the risks overall are too high to start research during medical care disruption. The prudent course is to wait until after resolution of these episodes when ethical standards can be met, the safety of patients and research subjects assured, and the likelihood of completing a study is maximized." However, the authors later seem to build in a loophole: "... if the aims of the study are of particular importance during times of medical care disruption such as studies that address how to optimise healthcare during times of disruption, it may shift the balance of decision-making in favour of starting or continuing research." That would, under a charitable interpretation, rule in favor of research-during-Ebola-like-outbreak.
We seem to be still in two minds: do we categorically state that conditions during political upheaval simply make responsible conduct of research impossible, or do we permit research that might be useful and could not be conducted other than in those non-ideal conditions? The House et al. article may not answer this question, but it has helpfully opened lines of inquiry into ethical questions that arise all too often in research in developing countries.
Sunday, August 02, 2015
Pinker tells bioethics what its new moral imperative is, or not
My first reaction was: how is this new bioethics skill taught? Should there be classes that teach it in a stepwise manner, i.e. where you first learn not to butt in, then how to just step a bit aside, followed by somewhat getting out of the way, and culminating in totally screwing off? What would the syllabus look like? Wouldn't avoiding bioethics class altogether be a sign of success?
But seriously, how does Pinker get to this conclusion? Answer: a number of shaky assumptions. The first assumption is that health outcomes are primarily driven by biotechnological advances, rather than (say) changes in the social determinants of health that are not biomedically driven. That first and controversial assumption is needed in combination with a second one about bioethics, i.e. that thwarting important research is the primary goal of bioethics as it is currently practiced. That view of bioethics comes in the form of a massive, bloated straw man:
A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as 'dignity', 'sacredness', or 'social justice'. Nor should it thwart research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future. These include perverse analogies with nuclear weapons and Nazi atrocities, science-fiction dystopias like 'Brave New World' and 'Gattaca' and freak-show scenarios like armies of cloned Hitlers, people selling their eyeballs on eBay, or warehouses of zombies to supply people with spare organs.
Well, yes, bioethics should not be insane. But maybe people just have less wacky short- and long-term concerns about gene editing. Pinker brushes this aside too, saying that slowing down science even a little bit causes devastating harm (see first assumption), and since we can't reliably predict the long-term implications of science anyway, why hold us back by discussing them? So old bioethics of constraint and caution to the side! Let biotechnological research be free of impediment, so we (in the better off countries, mostly) can feast on its benefits! But whoa, wait a minute. He also writes:
Of course, individuals must be protected from identifiable harm, but we already have ample safeguards for the safety and informed consent of patients and research subjects.
Where do those protections come from? Old bioethics, the kind that does not step out of the way. And although those protections are (more or less) in place, it is not insane and irresponsible to discuss research on human subjects involving gene editing in order to get some grip on what the 'identifiable harms' might be, what informed consent should involve, and what safeguards would be appropriate. And it is not just silly bioethicists who worry about these sorts of things: the call for a moratorium was made by one of the scientists who invented CRISPR-Cas9 in the first place.
On closer inspection, what is Pinker saying? Not a lot. Science is awesome, when it leads to good things; irrationality is irrational. So as far as this opinion piece goes, it might have been better to get out of the way.
Saturday, August 01, 2015
Imperialism and access to bioethics journals
This is the central complaint of Chattopadhyay, Myser and De Vries in a recent article in the Journal of Bioethical Inquiry, fetchingly entitled Imperialism in Bioethics: How Policies of Profit Negate Engagement of Developing World Bioethicists and Undermine Global Bioethics. The authors describe how the policies of many publishers of bioethics journals make it extremely difficult for aspiring bioethicists in developing countries to engage with the existing (and past) literature. While there are initiatives to improve global access to existing bioethics journals (like HINARI), and there are some open access journals related to bioethics (like BMC Medical Ethics), and you could always write to authors and ask them for copies, these forms of access are inferior to the kind on offer in certain academic institutions in America and Europe. The great powers feast, the others get the crumbs.
The situation of inequality of access to bioethics literature is fairly well-known. What makes Imperialism in Bioethics especially interesting are the ethical implications it tries to draw. For example, the authors state that poor access to bioethics resources makes training initiatives aimed at 'capacity building' in developing countries (like the Fogarty and Erasmus programs) illusory. How can capacity be developed if there is no ongoing, sustainable access to bioethics as a tradition of thought? Another implication is that, if there continues to be limited global access to bioethics resources, then bioethics will continue to reflect largely 'Western' assumptions, values, preoccupations and mindset. What will continue to be excluded are alternative forms of health and care for health, and alternative ways of conceiving and dealing with the conflicts related to them. For the authors, it is not just sad that this situation turns bioethics into a Western echo chamber, despite its global pretensions. They call it an intellectual, cultural and moral genocide of non-Western traditions, " ... varieties of sociocultural experience, theorizing, and moral visions of life and medicine that have evolved over eons."
I am not sure that all the implications stick at full strength. Access to bioethics literature certainly matters. But there are substantive obstacles to local bioethics practice in developing countries even if information access problems were to be overcome. One obvious obstacle is that aspiring bioethicists often have nowhere to work in those countries, or at least, nowhere to work as bioethicists. Local institutions often do not value bioethics enough to fund it, probably because they are too busy tackling all the other fallout of inequality. Convincing struggling educational institutions that some (or any) of the medical curriculum should be devoted to bioethics can be hair-raising.
As usual, global inequalities lead to uncomfortable ironies. It is painfully ironic that Developing World Bioethics -- owned by Wiley-Blackwell -- is not accessible through PubMed Central or HINARI. It is somewhat ironic for the authors to complain about lack of access to a tradition of bioethics they otherwise describe as parochial, decontextualised and hence to some extent useless to the rest of the world. It is really ironic that the article is -- as the authors acknowledge -- published by Springer, whose policies are precisely those that they criticize. And to top it off, Springer seems to have made an exception to their policy for this particular article: everyone (with internet) can read it without paying the usual $39.95.
NOTE: the article discussed in this blog piece has been retracted. For more info, see my blog piece of February 3, 2016.
Thursday, July 23, 2015
Postmodernity and global polio eradication
Like California. According to the California Department of Public Health, over 60% of children in the state have not received the full suite of vaccinations. This is partly a case of vaccines being victims of their own success: Americans have little experience of what it is like to be prey to infectious agents precisely because vaccines have worked so well for so many of them. It is a stance you have the luxury of taking from a position of relative privilege. But it is partly due also to a culture of gossip, suspicion and kneejerk mistrust of medical authority, and hence also to a position of ignorance. If vaccines are the product of Enlightenment faith in reason and science to improve society, rejection of vaccines -- when not itself based on sound reasoning and evidence -- is regression into a pre-scientific state where life was nasty, brutish and short. Privilege and ignorance are a toxic combination, and some people have to (re-)learn the hard way.