Dr Free-Ride has a good entry up about scientists and ethical behaviour. I have nothing to add to her basic point, which is that when ethics is seen as something imposed from outside, it is largely ignored; this idea will be entirely familiar to any researcher who has ever sat through the obligatory (!) ethics class or seminar or whatever their department requires.
Where I think Janet’s discussion is missing something is in how to deal with this issue (and to be fair, she was mostly pointing out the problem, not trying to solve it):
To get “buy-in” from the scientists, they need to see how ethics are intimately connected to the job they’re trying to get done. In other words, scientists need to understand how ethical conduct is essential to the project of doing science.
So OK, how exactly does that work? In a fairly straightforward sense, ethical conduct is demonstrably NOT essential to science or scientific progress. Science is being done now, often quite successfully (in terms of personal career advancement and, more importantly, in terms of real additions to the knowledge base), by unethical means. There is nothing about vivisection that makes it an inherently ineffective means of gathering information; many experiments that do not make it past IACUC would yield useful data. Further, if I successfully steal your ideas and publish them, I will have been doing science from the point of view of anyone (or anything, like the knowledge base itself) that doesn’t know or doesn’t care that I stole the ideas.
The trivial category here is unethical conduct like that of the Korean stem-cell team; this was dumb as well as wrong, because it produced bad data and was bound to be found out. The important category is unethical conduct that produces clean (useful, reproducible) data: what makes such conduct unethical, what aspect of its unethical nature makes it antithetical to doing science, and what is the mechanism of that opposition?
Within this category, we can distinguish between conduct that, if you get caught, will hamstring you within the scientific community (thieving) and conduct that, if you get caught, will cause the wider community to stop supporting you (vivisection). The key phrase here is “if you get caught”; that is, ethical judgement is community judgement. An individual cannot do much science without the scientific community; infrastructure needs alone make that clear. Neither, for even more obvious reasons, can the highly-specialized scientific community do anything without the support of the wider community. Unless you posit something like karma or divine retribution, I don’t think you can find an unethical behaviour that both produces clean data AND is in and of itself “anti-scientific”, which is what you would need in order to prove that ethical conduct is intrinsically essential to scientific progress. Unless, that is, you take into account the reliance of scientific research on community support.
In other words: what is ethical conduct? Whatever the community decides is ethical conduct. Why is ethical conduct essential to the project of doing science? Because community support is essential to that project.
I have, of course, sidestepped the larger question of HOW the community — the scientific community, or society at large — decides what constitutes ethical conduct. It’s not true that vivisection is wrong only because if you get caught doing it your grant will be cut off (without anaesthesia, of course). Scientists are not just scientists, they are members of society at the same time. This is an enormous question, but a quick look at the scientific community will allow me to sketch my own view: why is it unethical for me to steal ideas? Because if everyone stole ideas, collaboration and other networks of trust would collapse. It’s far more efficient to act in good faith and initially to assume the same of others. The same holds true for the wider community: whatever benefit I derive from someone else’s disadvantage will eventually come back and bite me in the ass. On any but the short-term, immediate-future view, “do unto others as you would have them do unto you” is not a Divine Command but a sensible way to maximize one’s own preferences.
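To put a number on that last claim, here is a toy model: treat collaboration as an iterated prisoner’s dilemma, with idea-sharing as cooperation and idea-stealing as defection. The payoffs, strategies, and round count below are illustrative assumptions of mine, nothing more, but they show why good faith wins out once interactions repeat.

```python
# A minimal iterated prisoner's dilemma, with idea-sharing as "cooperate"
# and idea-stealing as "defect". Payoff numbers are illustrative assumptions.

COOPERATE, DEFECT = "C", "D"

# (my move, their move) -> my payoff
PAYOFF = {
    (COOPERATE, COOPERATE): 3,  # both share ideas, both benefit
    (COOPERATE, DEFECT): 0,     # I share, you steal: I lose
    (DEFECT, COOPERATE): 5,     # I steal your ideas: short-term win
    (DEFECT, DEFECT): 1,        # nobody trusts anybody: little gets done
}

def always_steal(my_history, their_history):
    return DEFECT

def good_faith(my_history, their_history):
    # Act in good faith and initially assume the same of others, but
    # respond in kind: cooperate first, then mirror their last move.
    return COOPERATE if not their_history else their_history[-1]

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    # Mutual good faith earns 300 each over 100 rounds; mutual theft
    # earns only 100 each. Theft pays exactly once, then trust collapses.
    print(play(good_faith, good_faith))      # (300, 300)
    print(play(always_steal, always_steal))  # (100, 100)
    print(play(good_faith, always_steal))    # (99, 104)
```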
I think there are different things that get lumped under “ethics”, and I agree that some of them (like the “ethical” ways to treat animals) may have more to do with satisfying the concerns of the community (whether the scientific community or the larger community from whom they get funding) than with producing reliable knowledge. However, I think the dimensions of being ethical that boil down to being honest have quite a lot to do with the project of producing reliable knowledge. (I think, actually, even more than honesty is required, but since that’s part of an argument I’m making in a manuscript I’m currently revising to resubmit, I’m not going to elaborate on that just yet.)
The connection between “the dimensions of being ethical that boil down to being honest” and “the project of producing reliable knowledge” seems to me self-evident and implicit in the word “reliable”. Those aspects of “unethical” which impinge on the reliability of data are what I called “trivial cases”: if your behaviour compromises the utility and reproducibility of your data, then whether we call that behaviour “unethical” or “stupid”, it is clearly antithetical to science.
What I thought more problematic, and more germane to your point about ethics classes/seminars/whatever, are the cases where the data are clean but the methodology is unacceptable by standards that an amoral scientific community (a putative one!) might not adopt (e.g. the three R’s of animal use). How do you convince that community that socially determined ethics are a necessary part of the project of science?
To put it another way: I hope that there are not many working researchers to whom the connection between the reliability of knowledge and the honesty employed in its production is at all obscure! I am, however, certain that there are plenty to whom the necessity of answering to our public paymasters is nothing but an inconvenience. My reply to these types, or to my putative amoral scientific community, is along the lines of something else you said: more than honesty is required, or quis custodiet ipsos custodes? (“Who watches the watchmen?”) Objectivity is more than honesty, and cannot be self-guaranteed.
When I was ethics officer in our school, I explained the key requirements for ethics approval as two things:
i. checking that your results aren’t invalidated by unethical conduct, even inadvertently (the data themselves come out false);
ii. checking that your work isn’t invalidated by unethical conduct (the data may be true, but the work itself is wrong).
An example of the first would be offering a test subject an incentive for participating, and the subject feeling that the incentive obliges them to help you get the results you want. This is really an issue of experimental design, as the work shouldn’t be set up in a way that lets the subject influence the outcome. That’s not always possible, though… Other examples are faking results and using misleading analysis.
An example of the second would be offering a test subject an incentive that clouds their judgement as to whether the benefit of the incentive actually outweighs the risk of participation. Exploitative research on people who need the money, for example. Other examples would include idea theft and vivisection.
The second case can produce true results but is still wrong, while the first can produce false results. It’s easy to see why it is essential to science to eliminate the first cases – false positives are a big enemy 🙂
To me, the question of why the second case is to be avoided comes down to why we should be doing research at all, i.e. for the greater good. And like all real ethics questions, it ultimately means choosing between two types of bad or two types of good, and never comparing good and bad:
– don’t get anyone to participate in an experiment that may have helped many people (no experiment)
vs
– hurt someone through an experiment beyond what they could be compensated for (experiment)
or
+ don’t cause suffering directly (no experiment)
vs
+ get the results from experiment participation to help many people (experiment)
but never
+ help thousands afflicted by a disease through getting results (experiment)
vs
– hurt a small group of people along the way (experiment)
as that isn’t actually comparing different choices to be made. Nor should good be weighed against bad even when the choices are genuinely different:
+ don’t cause suffering directly (no experiment)
vs
– hurt someone through an experiment beyond what they could be compensated for (experiment)
or
– don’t get anyone to participate in an experiment that may have helped many people (no experiment)
vs
+ get the results from experiment participation to help many people (experiment)
which, when put that way, are clearly silly choices to put before anyone.
I think half the problem of ethics is people not knowing how to phrase the question and make the comparisons.
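To make that concrete, here is a throwaway sketch of the phrasing rule above; the tags, tuples, and wording are just my own restatement of the examples, not anything rigorous. A comparison is well-posed only when it pairs the same valence across different choices:

```python
# Sketch of the framing rule: compare good with good and bad with bad,
# and only across different choices. Outcome wording condensed from the
# examples above; the data structure is my own illustration.
from itertools import combinations

# (choice, valence, description)
outcomes = [
    ("no experiment", "good", "don't cause suffering directly"),
    ("no experiment", "bad", "deny people the help the results might have given"),
    ("experiment", "good", "results help many people"),
    ("experiment", "bad", "hurt someone beyond what they could be compensated for"),
]

def well_posed(a, b):
    """Well-posed iff the outcomes belong to different choices and share
    the same valence: good against good, bad against bad."""
    return a[0] != b[0] and a[1] == b[1]

for a, b in combinations(outcomes, 2):
    label = "compare" if well_posed(a, b) else "silly"
    print(f"[{label:7}] {a[2]} ({a[0]}) vs {b[2]} ({b[0]})")
```

Of the six possible pairings, only two come out as fair comparisons; the other four are exactly the silly framings listed above, including the “never” case of weighing one choice’s good against that same choice’s bad.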
R
Rereading my post, I noticed that I wrote something about the greater good. By that I don’t mean the good of the many outweighing the good of the few; those are different things that just can’t be quantified against each other.
What I meant was that we should be thinking of our research in relation to how it contributes to the world, not to ourselves. Another view of the ethical questions above considers not the impact on an individual person or animal vs the impact on a statistical section of society, but how the choices contribute to, or harm, the researcher’s personal career. And thinking about what questions that raises, I retract my statement that we shouldn’t think about it. We should, but we should think about it the same way: good needs to be compared to good, and bad to bad.
R