Animal law, animal rights, and biomedical research: Can’t we all just get along?

30 05 2011

According to the Animal Legal Defense Fund, 125 schools in the U.S. and Canada offered “Animal Law” classes as of Spring 2010. This may partly reflect growing public zeal for animal rights; it may also be that the already saturated field of law has left law students searching for a new niche.

Certainly, we love our pets; some surveys suggest that 62% of U.S. households have a beloved pet. According to the American Pet Products Manufacturers Association’s (APPMA) annual pet ownership survey, pet spending has more than doubled, from $17 billion in 1994 to over $50 billion in 2010. Last summer I saw a man running in the park, dressed in all yellow with matching yellow-rimmed sunglasses. Proudly running at his owner’s feet was a small dog donning the same outfit, complete with yellow-rimmed doggles. I knew they were called doggles because I was introduced to them several years ago through a T.V. show where a dog was featured riding around in a convertible with his tongue sticking out and the doggles strapped around his head. The goggles portion sat crooked atop his little snout, successfully covering one eye while the other was partially covered, but mostly smooshed. The host, with a huge grin, excitedly asked the owner, “Does he like them?” The owner said, “I’m not sure, but I think he likes that his eyes are protected when we’re riding in the convertible.” I tried to put a “Happy New Year” hat on my dog once. He convinced me that animal fashion is its own special form of animal cruelty.

There are more obvious forms of animal cruelty. The story of Michael Vick and his Bad Newz Kennels brought national public attention to animal cruelty and animal rights. There is also a movement to file lawsuits on behalf of animals, such as those affected by BP’s oil spill. In this article, Adam P. Karp, an attorney in Bellingham, Washington, says, “The law should recognize animals as legal persons with the same access to justice.” More and more examples of animal cruelty have also come to the public eye on waves of the organic food movement. Documentaries like Food, Inc. have highlighted the darker side of mass-produced meats, and numerous organizations now advocate on behalf of farm animals. When we think of animal cruelty in a case like Michael Vick’s, we understand that we must intervene in order to maintain the moral fabric of society by preventing acts that debase our humanity. It’s a win-win, right? But this might be a more difficult point to argue, since people weave different moral threads into their moral codes. In the case of animals used for food, vegetarian or meat-atarian, it arguably becomes a health concern. Our physical bodies are at steak, er, stake. We protect the animals, we decrease E. coli contamination, ergo we don’t have E. coli in our spinach and meat.

What about animals used for research, including research on incurable diseases? I went to a talk entitled “Public Opinion and the Use of Animals in Research” by Paul McKellips of the Foundation for Biomedical Research (FBR). McKellips warned a room full of scientists about waning public support for animal research and called for scientists to talk to their communities about it.

What appalled me about the audience was that very few scientists from the primate facility showed up to this talk, and of those who did, most left before it was over, right after they had their fill of free pizza. I recently led a discussion in my lab about speaking to the community and general audiences about our work, and while some expressed interest or anger, most seemed to feel a degree of trepidation or plain disinterest. The most vehement argument I heard in defense of animal research was that animal rights activists were “all crazy” and that no one would listen to them anyway. This argument is both wrong and weak. Many activists are more well-spoken than your average scientist and, for that matter, much better (financially) supported in their campaigns against research than those who advocate for animal-based research. It’s critical when making any argument to know what and whom you’re arguing against.

What are the common arguments against animal research, and how do animal research advocates answer them? There is a pretty good list of arguments and responses here at Understanding Animal Research (UK). I prefer it to the Foundation for Biomedical Research (or Research Saves) site because the UK site relies more on providing information and less on manipulation and pulling heartstrings. See, for example, “Jen’s video” on the FBR site, not to mention the billboard previews I saw during Paul McKellips’ talk: one featured a body in a morgue with a toe tag reading “I didn’t want to benefit from meds created with animal research,” and another an “Advance Animal Directive,” which basically makes those for animal research sound kinda like jerks.

When listening to arguments against animal research, I find the following to be the most compelling. I have also included responses from the Understanding Animal Research website.  I thought they did a pretty good job answering some of these concerns.

1. Animals aren’t people and haven’t generated any new cures for people.

“All mammals are descended from common ancestors, so humans are biologically very similar to other mammals. All mammals, including humans, have the same organs – heart, lungs, kidneys, liver etc – that work in the same way, controlled via the bloodstream and nervous system.

Of course there are minor differences, but these are far outweighed by the remarkable similarities. The differences can also give important clues about diseases and how they might be treated – for instance, if we knew why the mouse with muscular dystrophy suffers less muscle wasting than human patients, this might lead to a treatment for this debilitating and fatal disorder.

Vitamins work in the same way in animals as they do in people – research on guinea pigs led to the discovery of how vitamin C works. Hormones found in animals also work in a similar way in people. The following animal hormones have all been used successfully in human patients: insulin from pigs or cows; thyrotropin from cows; calcitonin from salmon; adrenocorticotrophic hormone from farm animals; oxytocin and vasopressin from pigs.”

2. I’m not against all animal research, just research that cannot benefit animals, such as research directed at diseases that animals don’t get, like Huntington’s disease.

“In fact many veterinary medicines are the same as those used for human patients: examples include antibiotics, pain killers and tranquillisers. Many of the veterinary medicines that are used to treat animals are the same as, or very similar to, those used to treat human patients. Most human diseases exist in at least one other species. Many different animals naturally get illnesses such as cancer, heart failure, asthma, rabies and malaria and they can be treated in much the same way as human patients. There is evidence that dinosaurs suffered from arthritis. Chimpanzees can get polio and the human vaccine has been used to protect them in the wild.”

In addition, I’d like to add that although Huntington’s may not occur naturally in animals, it is thought to involve a failure in the normal machinery of protein degradation, a process shared by the animals used in research. Findings from current studies may prove to benefit both non-human animals and humans in the future. Further, basic research has great value in its own right: data from these studies often lead directly to applied clinical research and provide its foundational building blocks.

Here is a nice list (it’s not exhaustive) of diseases and the role animal research has played in advancing treatment.

3. Laboratory animals suffer

“Most animal research involves mild procedures such as taking a blood sample, giving a single injection, or having a change of diet. If more invasive procedures are necessary, then anaesthetics and pain relief will be given whenever appropriate.

It is in researchers’ interests to make sure animals suffer as little as possible; stressed animals are less likely to produce reliable results. All animal research must pass an ethical evaluation which weighs up its pros and cons and decides whether it is justified. The research then has to be approved by Home Office Inspectors, who are all doctors or vets and who ensure that high welfare standards are applied.

Any animal suffering undue pain or distress that cannot be alleviated must be put down immediately and painlessly: this is the law.”

As a researcher, I know that there are many strict regulations, both within the university and from government organizations, to monitor animal welfare in research. Prior to 1966, this was not always the case. The Animal Welfare Act and the regulations monitoring the welfare of animals generally guard against the physical and psychological distress of non-human animals.

When we’re talking about rodents, the general public doesn’t feel much sympathy. In fact, anyone can easily purchase extermination services or a host of rat-killing devices, from poison and slow death to electric shock (and I wonder whether animal law might be useful in regulating these devices). On the other hand, people almost universally have trouble with monkey research because of monkeys’ greater similarity to humans.

I have two problems with the above positions:

1. How do we really understand the degree of suffering non-human animals have?

2. Why would rats be any less entitled to rights than monkeys?

1. As a neuroscientist, I can tell you this: we have not even come close to understanding the biological processes that make up individual thoughts. In other words, no one knows how a thought is made. That includes thoughts on suffering, sadness, loneliness, etc. Sure, we may understand how to manipulate and alleviate pathological depression to some extent, but we cannot say with any certainty that our research animals are happy or satisfied, or even whether our pets are happy, satisfied, or suffering. Rather than discouraging me from animal research, however, this convinces me that there is an intense need for more research in this area, not less. For example, a recent study developed a way to identify pain in mice by examining and scoring their “grimaces.” While this may seem odd at first glance, such studies “… will not only be an important tool in helping scientists ensure that laboratory animals don’t suffer unnecessarily, but could lead to new and better pain-relief drugs for humans,” particularly in situations where verbal communication is not possible, such as with infants or patients whose speech is impaired.

2. While it may be easy for some people to say that humans have dominion over all other living creatures, we should question this assumption and the roots of our moral assertions. (After all, it wasn’t too long ago that women and black people weren’t considered persons under the law.) Even the most educated people fail to recognize that a host of almost intuitive (Western) principles, the notion of dominion among them, were adopted from Judeo-Christian practices, regardless of any proclaimed Christian faith or orientation; the celebration of martyrdom is similarly rooted in the Judeo-Christian tradition.

Whether we want to question it or not, researchers are entering a time where we are left with little choice, increasingly finding ourselves on the “wrong” side of the animal research equation. FBR’s Paul McKellips personally told me that researchers are “on a sinking ship,” despite his efforts and his polling results showing that more than 50% of the U.S. public supports animal research. And it’s not hard to see: the animal rights movement is almost in vogue, with “all natural,” “organic,” and vegetarian/vegan products becoming popularized. After all, organic products are advertised as “worth the cost.” It would almost seem foolish not to jump on this moralistic money train and take up a career in defending animals. And frankly, the immediate price of taking up this moral high ground may not be obvious. What does the average person feel they give up by being moralistic about animal research, when most rarely consider where their favorite medications originated, and even more rarely come into contact with a biomedical researcher?

In this case, biomedical researchers and the general public alike must take up the responsibility of engaging in this conversation. In this blog, I have repeatedly advocated for scientists to cultivate skills in public communication, but I also advocate for public audiences to be critical thinkers. Everyone must take part in the conversation of where and how our tax dollars are spent on biomedical research. Biomedical research is different in that the monetary return, or even the medical advances that may come from it, is not immediately apparent. Basic research perhaps suffers the most scrutiny in this light. Biomedical researchers should be able to explain to anyone how each piece of new information gathered about basic biology can be applied to numerous health- and disease-related states. Scientific research utilizes a systematic process, but more importantly, scientific discovery is a creative process. It is through this creativity that we have been able to discover numerous new medications and new applications of old drugs, often by revisiting data collected decades ago. This is the beauty of peer-reviewed scientific data: it is not simply a consumable. Scientific research is an investment with a legacy of providing valuable information for generations to come. And as it stands, the only way to achieve critical biomedical discoveries that benefit public health is through research in living preparations, non-human animals or humans, where biomedical researchers can examine the toxicity and efficacy profiles of new treatments at the systems level.

When considering how we may best coexist with animals, we must consider the context and be careful to question our assumptions and their roots (religious bias, childhood upbringing, ingrained social prejudices). An important distinction must be made between animal rights, whether legal or moral, and animal welfare. Laws may help establish guidelines to maintain animal welfare, and even moral rights appropriate to the context (e.g., animals as pets, as research subjects, or in the wild). Biomedical research in animals must continue to be monitored and conducted with compassion and exquisite care for the research subjects. This is important not only for the well-being of the non-human animals themselves, but for all of us, non-human animals and humans alike, who benefit from these treatments every day. An informed public and a well-spoken biomedical research community must lead animal law in directions that align with the goals of a flourishing society.






American Journal of Bioethics Neuroscience Publication

30 04 2011

Paul Boshears of the Europäische Universität für Interdisziplinäre Studien and I published an Open Peer Commentary in the American Journal of Bioethics in April 2011. The article addresses important issues and warnings with regard to over-interpreting neuroimaging data.

Here is a draft of the article “Ethical Use of Neuroscience,” but the final publication can be found here.

*************************************************************************************************

Levy’s essay (2011) claims that some intuitions leading to one’s moral judgments can be unreliable, and he proposes the use of a more reliable, third-party, empirical measure. It is commendable that Levy attempts to work beyond traditional bounds; however, the author’s use of fMRI data is questionable in supporting an argument about intentionality. As neuroscientists, we rely upon evidence-based thinking and conclusions to create generalizable knowledge, and while fMRI data can be informative in broad correlational accounts of behavior, to rely upon these data as reliable measures of intuition is arguably just as speculative as the first-person account. It is deeply concerning that society may attempt to apply these data in the manner Levy describes. Indeed, alarming misappropriation of fMRI and EEG data for commercial purposes and as evidence in criminal cases, thereby establishing legal precedents, has already begun.

Levy brings into question the appropriate context for which to use neuroscience as a tool–specifically in illuminating moral decision making. We share with Levy an enthusiasm for neuroscience, and it is enticing to think that in learning how the brain operates we will thereby better understand how the mind also operates. Problematic is Levy’s belief that fMRI studies demonstrate how brain regions, “function to bring the agent to think a particular action is forbidden, permissible, or obligatory.” This is something that fMRI simply cannot do as it is a technique developed to represent mathematical constructs, not detail physical mechanistic processes. We believe the essay depicts scenarios beyond the limitations of what is truly testable by neuroscience, and this could facilitate unintended unethical applications of neuroscience.

As imaging data has become so compelling and headline-grabbing, we focus on addressing these data. Our concern is that one more professional-sounding voice will influence society with scientifically-unfounded claims of what current technology can do leading to unethical exploitation of neuroscience findings. This is especially important given evidence that simply referencing neuroimaging data can bias the public’s evaluation of papers (McCabe and Castel, 2008, Weisberg et al., 2008). It is, therefore, necessary to outline the limitations of fMRI brain imaging and EEG technologies. What is brain imaging actually? What can it tell us? What are the limitations of how these data can be interpreted?

Functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) are noninvasive techniques that indirectly measure brain activity (for an extensive review see Shibasaki 2008). Magnetic resonance imaging uses electromagnetic fields and radio waves to reconstruct images of the brain. Functional MRI relies on detecting changes in blood flow by tracing oxygenated and deoxygenated blood. Changes in blood flow are calculated by statistical software and then colorized in a constructed brain image based upon mathematical modeling. When imaging the brain of a person making a moral decision, for instance, one might identify gross changes in blood flow in some areas versus others. This may be called activation (more oxygenated blood brought into the brain area of interest) or deactivation (less oxygenated blood). The spatial resolution of fMRI allows fairly accurate reconstruction of activated structures. However, the actual neural activity generating these changes, and the origins of the blood flow changes themselves, are not identified and could arise from areas centimeters away from the activated or deactivated region (Arthurs and Boniface, 2002).
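To make concrete how statistical this notion of “activation” is, here is a deliberately toy sketch. This is my own illustration, not any published analysis pipeline: a simulated “voxel” is declared active if its blood-flow-like time series correlates with the task design above an arbitrary threshold. Real fMRI analyses use general linear models, hemodynamic response functions, motion correction, and multiple-comparison control, none of which is shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task design: alternating 20 s rest/task blocks, one sample
# every 2 s (a typical fMRI repetition time), 200 time points total.
task = np.tile(np.repeat([0.0, 1.0], 10), 10)
n_voxels = 1000

# Simulated voxel time series: noise everywhere, plus a task-locked
# signal added to the first 50 "voxels" only.
data = rng.normal(size=(n_voxels, task.size))
data[:50] += 2.0 * task

# "Activation map": correlate each voxel with the task regressor and
# threshold. Statistics like this are what get colorized onto a brain.
r = np.array([np.corrcoef(v, task)[0, 1] for v in data])
active = r > 0.5

print(active[:50].sum(), active[50:].sum())
```

The point of the sketch is that the colored blobs in an fMRI figure are thresholded statistics over a model, several steps removed from neurons firing; change the threshold or the regressor and the “activated” regions change with them.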

EEG utilizes a series of sensors affixed to the scalp, and these scalp recordings are used to describe electrical fields emanating from the cortex. EEG cannot detect the activity of deep brain structures, unlike fMRI, which can detect changes in both cortical and deep brain structures. Functional MRI can detect changes within seconds; EEG can detect changes within fractions of a second, giving better temporal resolution of actual neuronal firing rates. Electroencephalography has relatively poor spatial resolution, but it can be combined with higher-resolution techniques, such as fMRI, to give more informative data. Neither technique has the spatial resolution to detect the activity of individual neurons or of specific neuron types; rather, these techniques detect networks and groups of neurons on the order of thousands to millions. Knowing which types of neurons are activated would give us more mechanistic information. Based on our anatomical and functional knowledge of specific types of neurons, we can predict where these neurons project in the brain and to what extent, as well as what kinds of neurotransmitters they release. Knowing the chemical phenotype of a neuron provides important distinctions: an activation of excitatory neurons (which release excitatory transmitters) would not have the same effect as an activation of inhibitory neurons. In addition, recent data (Koehler et al., 2009; Attwell et al., 2010) suggest that fMRI detects blood flow regulation by glia, not neurons (the brain cells classically known to mediate synaptic transmission), calling into question how fMRI data might alternatively be explained and what fMRI actually tells us about brain function. Importantly, it is unclear whether the changes neuroimaging data depict are causative factors or simply after-effects.
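The temporal-resolution difference can be illustrated with a toy sampling example (illustrative numbers only; actual EEG sampling rates and scanner repetition times vary): a brief 80 ms neural event is captured many times over at an EEG-like 250 Hz sampling rate, but sampling once every 2 s, as an fMRI-like acquisition does, simply never lands inside it.

```python
import numpy as np

# A brief 80 ms "event" beginning at t = 4.1 s within a 10 s recording.
def signal(t):
    return np.where((t >= 4.1) & (t < 4.18), 1.0, 0.0)

# EEG-like sampling: 250 samples per second (one every 4 ms).
t_eeg = np.arange(0, 10, 1 / 250)
eeg = signal(t_eeg)

# fMRI-like sampling: one volume every 2 s (TR = 2 s),
# i.e. samples at t = 0, 2, 4, 6, 8 s.
t_fmri = np.arange(0, 10, 2.0)
fmri = signal(t_fmri)

# The fast series contains roughly 20 samples of the event;
# the slow series misses it entirely.
print(eeg.sum(), fmri.sum())
```

Anything faster than the sampling grid is invisible, which is why sub-second neuronal dynamics are simply out of reach for a slow, blood-flow-based measure, regardless of how good its spatial resolution is.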

Overall, we do not doubt the statistical rigor and analyses of researchers, and our simplified description of these techniques is not meant to devalue or undermine the contributions of neuroimaging data. However, we must remind ourselves that the brain is composed of much more than blood vessels and electrical fields, having more complexity than can be described with neuroimaging techniques alone. We must caution against over-interpretation of these exciting data and call for the responsible incorporation of these studies into interdisciplinary pursuits that aim to describe the human mind.

When considering how any scientific data might translate into something as complex as moral behavior, we can do little more than show correlation. While some brain areas may show some degree of specialization, such as in “Reward Pathways,” it is apparent that these brain areas work in concert to serve multiple functions and cannot accurately be assigned exclusive rights to consequentialist-based or emotion-based moral decision making. When interpreting areas of brain activation, we must also consider the variety of functions that each brain region can have. If we could (1) identify individual cells in the brain as the smallest unit of moral processing, (2) show that they are active exclusively during consequentialist and not emotion-based moral decision making, and (3) describe all of the requisite circuitry, then these data might have the applications the author describes. However, this is not the case. The studies cited in Levy’s paper are purely correlational accounts of behavior and in no way directly describe the biological construction of specific thoughts, intuitions, or morally (ir)relevant processes. Research has not demonstrated that these brain regions are the sites where moral intuitions are generated, nor that the essence of morality or intuition is stored somewhere in these neurons. We can draw some broad conclusions from neuroimaging data about which brain regions might be involved in mental states or thought processes, but we cannot draw conclusions about moral constitution.

Levy’s invocation of a future constructed “neural signature of intention” from fMRI or EEG data ignores what the brain is designed to do best and what artificial intelligence engineers have the most difficult time re-creating: the brain’s plastic, ever-changing, and adaptive nature. Experimental models of learning in brain cells have repeatedly shown that experience can strengthen or weaken connections between cells and between cells connecting different areas of the brain, making it unrealistic, and potentially ethically dangerous, to imagine a fixed moral signature. Researchers should take care to avoid interpretations that conspire with assumptions that the mind is the brain, thereby implying the brain itself is the moral agent. One should also question the accuracy of stating that a brain region or set of cells is the seat of moral agency.

While Levy is concerned with expanding the ethicist’s toolkit through neuroscience findings, we wonder about the ethical implications of using neuroscience in a manner that seems to ascribe moral agency to the brain alone. Levy describes scenarios where neuroimaging could be used to discern intent. What is to say that these data would not be used to predict mal-intent, as the Department of Homeland Security’s (DHS) and Transportation Security Administration’s (TSA) Future Attribute Screening Technology (FAST) aims to do? Indeed, neuroimaging technologies have already been (mis)appropriated in the courtroom (Brown and Murphy, 2010) and, in some cases, for questionable commercial activity (Farah, 2009), such as by those in the business of lie detection (Greely and Illes, 2007).

We applaud Levy’s creativity and concern for expanding and improving interdisciplinary ethical discourse. However, we suggest that caution be exercised to avoid using neuroscience beyond its limitations. We also advise that scientists must be the ethical stewards of their work. While neuroscience continues to deliver exciting findings, the considerable beauty and complexity of the brain has yet to be fully understood.

References

Arthurs, O. J. and Boniface, S., 2002. How well do we understand the neural origins of the fMRI BOLD signal? Trends in Neurosciences. 25: 27-31.

Attwell, D., Buchan, A. M., Charpak, S., Lauritzen, M., Macvicar, B. A. and Newman, E. A., 2010. Glial and neuronal control of brain blood flow. Nature. 468: 232-243.

Brown, T. and Murphy, E., 2010. Through a scanner darkly: Functional neuroimaging as evidence of a criminal defendant’s past mental states. Stanford Law Review. 62: 1119-1208.

Farah, M. J., 2009. A picture is worth a thousand dollars. Journal of Cognitive Neuroscience. 21: 623-624.

Greely, H. T. and Illes, J., 2007. Neuroscience-based lie detection: The urgent need for regulation. American Journal of Law & Medicine. 33: 377-431.

Koehler, R. C., Roman, R. J. and Harder, D. R., 2009. Astrocytes and the regulation of cerebral blood flow. Trends in Neurosciences. 32: 160-169.

Levy, N. 2011. Neuroethics: A new way of doing ethics. AJOB Neuroscience.

McCabe, D. P. and Castel, A. D., 2008. Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition. 107: 343-352.

Shibasaki, H. 2008. Human brain mapping: Hemodynamic response and electrophysiology. Clinical Neurophysiology. 119(4): 731-43.

Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E. and Gray, J. R., 2008. The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience. 20: 470-477.





Are scientists losing moral authority?

13 10 2010

Unlike lawyers and politicians, scientists tend to enjoy a bit of moral authority and credibility in the public eye. People assume that scientists work for facts: findings that are reproducible, ruthlessly scrutinized and interpreted, and only then published with the highest of ethical standards. And naturally, all the while being driven only by their love of truth and of advancing knowledge about the world we live in, a greater-good type of thing.

This is why Climategate hit a particularly vulnerable public off guard: “Whaaa? I expect this from a slippery politician, but scientists talking about eliminating the competition!” When I first heard about Climategate, I was actually a bit annoyed at the uproar. I thought these were a bunch of people overreacting. Sources of global warming are real; just look around you, if you can see through the smog. I live in Atlanta, AKA “Car City.” My downstairs neighbors, a couple, own four cars and rent the condo beneath me. They always have the TV blaring. I also live next to a huge park with a big biking/running trail. My previous neighbor owned a treadmill.

The public doesn’t understand how science works, I thought. And they probably still don’t understand that the issue isn’t whether global warming happens, but whether we the people caused it. But I’ve become less naive about how science, and the personalities within science, “work” (see previous post). Scientists are people too, with all the same insecurities, poorly executed ideas, and dastardly plans for their competition. Now, this doesn’t describe all of us. I know many people who are interested in the truth, collaboration, real clinical outcomes, and the overall reduction of suffering. I do wonder sometimes whether the mechanics of being a scientist have created a bit of an obstacle course on the way to those goals. On the other hand, I also wonder whether our funding mechanisms may have created a culture where we actually succeed, not via toxic competition necessarily, but via the healthy appropriation of funds to truly innovative science. I’d like to share some of the challenges of being a scientist with non-scientists, so you won’t be so caught off guard next time.

1. New graduate students and frustrated postdocs like to say that the big egos are a problem, but really, everyone is big-time afraid. Really afraid: of not being smart enough, not coming up with ideas fast enough, not getting funding for next year, not keeping up with the most recent findings and technologies, not publishing in time, and getting too old to keep up with all these fears. This pretty much never goes away. Most of these fears are taught in graduate school, and they typically stay with you throughout your career (if you plan on climbing to the top and having your own lab, which you’re also more than encouraged to do). Choosing a career outside of academia is gaining more acceptance, but it is generally frowned upon by older mentors and even by young aspiring scientists.

2. Graduate students and postdocs are the workhorses of the university. Science graduate students generally have their tuition waived and their salaries covered by grants, mostly from the government (the National Institutes of Health). Your U.S. tax dollars pay for us. Postdocs are *supposed* to spend a relatively short time at the university, although the “permanent postdoc” position is becoming more and more common. These grants all work on strict timelines, generally between 1 and 5 years. For example, once you’ve been a postdoc for 5 years (in the U.S.), you’re no longer eligible for independent funding (although things may change in the future). This means your goose is cooked. The idea being: if you didn’t make it by now, you’re probably not going to make it later. This sentiment is pushed onto students, postdocs, and faculty throughout their academic careers. Publishing high-profile data helps you keep getting that funding, and people get desperate for these publications. Without the funding, you’re dead in the water. Graduate students feel a similar pressure. While many are guaranteed funding from the university even if their adviser loses a grant, others aren’t. In addition, many graduate students at larger, prestigious schools (especially in the U.S.) are expected to have a couple of good publications by the time of graduation. My advisor said one per year. If your project isn’t working, you continue getting paid your little stipend, but you watch all your friends graduate as you enter your 7th year as a graduate student. Not to mention how underrepresented women become the longer you stay in the academic arena.

But maybe the more important question is: *should* scientists have had moral authority to begin with? Let’s explore some common perceptions of scientists.

1. “Scientists know the ‘facts.’ We should accept and memorize these facts.” Scientific inquiry may involve utilizing a set of given facts. However, the process of scientific inquiry involves looking for a conceptual framework that can be used to explore the world and to draw connections about it. Scientists don’t really find “new” things about the world; they discover new ways of conceptualizing it. Moreover, scientists do not really have answers; they generate more questions. A good scientist’s career is less about finding a fixed endpoint answer than about discovering what they don’t know and how our current view of the world is incomplete. Being a scientist is very humbling in this way. I never trust a scientist who always has all the “answers,” nor one who is uncomfortable saying, “I don’t know.”

2. “Scientists produce findings that can be repeated by everyone (at least all other scientists).”–See Case Study no. 1. Generally, each lab has its own criteria for repeatability.  When publishing data, all methods are expected to be described in painstaking detail. Before most articles are published, they go through a review process by peers who maintain their anonymity.  When scientists submit papers to academic journals, they can request a set of reviewers (ones who may look favorably upon the work) and even name ones to exclude (people the authors know may be competitors–see Case Study no. 4).  The editors of the journal can choose to respect your requests or choose additional reviewers you didn’t mention. If you’re famous, people tend to scrutinize your methods less. They assume that you know what you’re doing.  Also, if your “friend” reviews your paper, maybe they “trust” you and your methods.  This can be bad. What people often forget is that busy mentors usually are not directly monitoring the new graduate student or postdoc who is learning the lab’s established technique on his/her own.  In addition, common lab methodologies, even within the same lab, can change and evolve over time, often for the better, without the lab head realizing it. But maybe worst of all, people often don’t take their peer-reviewing obligations seriously. This is an unpaid activity that one does dutifully out of an ethical obligation to the scientific community; often it’s even considered an honor. While the lab heads are busy making sure grants are coming in and publications are going out, some of these things fall by the wayside. And then you have erroneous data published out there for posterity.

3. “The priority of scientists is to advance knowledge.” Well, it may have started that way when the scientist was a bright new shiny graduate student.  But over time, publication pressures become the main topic of conversation.  How will we publish this? These data won’t get published in a very good journal.  The concern becomes less about creating a legacy of good scientists and more about a legacy of good publications and survival.  As most scientists know, 99% of the experiments performed don’t work or yield inconclusive data.  And even within that 1% of successes, only a very small proportion of findings will translate readily to public health or public concerns in the scientist’s lifetime, if at all.   Scientists often believe in what they do, but they become distracted from the bigger picture.

Again, many scientists stay true to the moral authority society gives them, but these are the problems they face in the scientific community.  Scientific advances genuinely have had and will continue to have benefits for public health and for advancing knowledge, but the public needs to be more critical in its analysis of the deluge of scientific information hitting it from every possible medium.  A good start is getting a little inside view of the actual “scientific process” and what scientists can realistically, humanly do. Also, it will be important for scientists to have regular “morality checks” and reminders.  This could include regularly required ethics courses not only for graduate students and postdoctoral fellows, but also for new and old faculty.





Troubles for young scientists in academia

12 10 2010

Here are a few case studies of what’s happening in top research institutes around the world (names and obvious identifiers have been changed):

Case Study 1: I don’t want to be the first to disagree with Mr. Famous.

Dr. Somebody published an exciting finding: Activity ‘B’ was detected in the brains of Parkinson’s patients.  Dr. Famous published a finding that Activity ‘B’ was also detected in parkinsonian monkeys.   A large number of Parkinson’s disease researchers made a mad dash to replicate said findings in all of their research models, yet at best researchers have come up with weakly similar results or nothing close. Dr. Justasfamous’s postdoctoral fellow also cannot replicate the finding in monkeys.  The postdoc speaks to many researchers at SuperBig conference and finds that everyone is struggling to replicate Dr. Famous’s infamous Activity B. The postdoc tells her advisor at lab meeting that she just doesn’t see Activity B in her research and mentions that she has personally spoken to several researchers having the same struggle.  The lab members discuss that there are several weaknesses in Dr. Somebody’s research and several desperate statistical analyses going on to attempt to polish data to show Activity B.  Another postdoc suggests that Dr. Justasfamous write a review to discuss these problems in the field, thinking it would be helpful for all the struggling researchers and even provide a bit of relief.  Dr. Justasfamous says that Dr. Famous is too much of an authority and it would be pointless to write such a review.  Meanwhile, the postdoc is asked to abandon her data.  No one will be willing to publish such contradictory data anyway.

Case Study 2: I need to publish no matter what it takes!

Dr. Newsome is a new postdoc in the lab of Dr. Noncon.  Dr. Newsome is expected to work with Dr. Leaving to help him finish up his project as he moves on to his new position. Dr. Leaving begrudgingly agrees to help Dr. Newsome at first.  Dr. Leaving says that he used to be a computer programmer and that he knows how to create the results he needs without even collecting real data.  Dr. Leaving also says that he knows how to run experiments in ways that skew his data in the direction he needs. Dr. Newsome assumes Dr. Leaving is joking and continues learning the ropes of all the new techniques.  Dr. Leaving becomes very distant and begins to work only in the middle of the night, when he knows Dr. Newsome will not be at work, or at inconsistent times, so that Dr. Newsome must learn the techniques from other colleagues. Dr. Noncon eventually tells Dr. Newsome that Dr. Leaving is a bit difficult and that Dr. Leaving suspects Dr. Newsome is trying to steal first authorship from him. Dr. Leaving came from a very small school, and English is not his first language. He worked very hard to find a good postdoc position in the U.S. and now his second position. Dr. Newsome is surprised and has no intention of taking the lead as primary author on these data, but Dr. Leaving is not convinced.  After Dr. Leaving has left, Dr. Newsome realizes that Dr. Leaving has taken all the webdrive data, notebooks, and the external hard drive with him. Dr. Leaving sends his analyzed data back to Dr. Noncon, who shows the data to Dr. Newsome.  Dr. Leaving’s data are surprisingly clean, much more straightforward than typically expected with the techniques used in the lab.  When reading over a draft of the manuscript, Dr. Newsome notices many methodologies listed that weren’t actually used.  When Dr. Newsome mentions these, thinking they are typos, Dr. Leaving aggressively denies the errors.  Dr. Leaving had left only one document on the lab’s server, and it contained information that stated otherwise; in fact, Dr. Leaving was the author of this document. Dr. Newsome begins to suspect that the data might actually be fabricated, but has no proof.  Also, Dr. Newsome knows that Dr. Noncon needs this publication for a grant renewal.

Case Study 3: Mine, mine, mine!

Adam had been one of Dr. Bully’s favorite students, so much so that Dr. Bully asked Adam to come with him to Big University to help him start his new lab (and enjoy the fruits of his new promotion).  When Adam began work with Dr. Bully, Dr. Bully told Adam to “pick” a research project.  Dr. Bully did drug addiction research, and Adam decided that he wanted to study Chemical B and its role in drug addiction. Adam became very passionate about this project and was excited to continue the work at Big U. When Adam arrived at Big U, Dr. Bully’s personality changed.  In addition, Dr. Bully became very busy and kept emphasizing that they were at Big U now and needed to work hard to fit in.  In fact, Dr. Bully tried to get Adam to switch projects.  Adam was already taking new classes and helping to set up the new lab; he felt he had lost a lot of time and was eager to publish his findings. Dr. Bully asked Adam to give a department talk about his work, where Dr. Bully attacked Adam’s project in front of a large group.  Dr. Fair took a special interest in Adam’s project and wanted to collaborate.  Adam eventually took his project to Dr. Fair’s lab.  Dr. Bully did not seem disappointed to lose the project, telling Adam, “You’ll never get a PhD with this project.” Dr. Bully did stay on Adam’s dissertation committee.  Adam’s project brought strong results. Dr. Fair and Adam wrote and submitted the paper to a top journal and included the data as a chapter in Adam’s dissertation. While Dr. Bully was reviewing a draft of Adam’s dissertation, he called Dr. Fair.  Dr. Bully exclaimed, “At least some of these ideas must have originally been mine!” and demanded that he be listed as co-author. Dr. Fair and Adam felt this was not true and were surprised, given that Dr. Bully had seen these data during Adam’s last committee meeting and said nothing. Dr. Fair added Dr. Bully to the list of authors, not because he believed Dr. Bully truly contributed, but to “keep the peace.” Dr. Fair felt his position as a new Assistant Professor was less secure than Dr. Bully’s tenured position.  After Adam graduated, Dr. Bully began pursuing Adam’s project using the “future directions” from Adam’s dissertation.  He did not offer to include Dr. Fair on future manuscripts.

Case Study 4: When Peer Review fails (taken from Emory ethics class).

Dr. Rolf and Dr. Janice work on similar projects but in two different model systems and have decided to co-submit their papers. Around the time that the two papers are published, a third paper on the same project is published in a different journal. A few months later, Dr. Rolf and Dr. Janice receive an anonymous email stating that the author on the third paper had been one of their reviewers and held up their papers in order to finish his.

This is not necessarily happening in every lab, but most scientists would not be shocked by these stories.  In my next blog, I’ll address the issue of scientists and their weakening hold on society as a moral authority.





Women in neuroscience-why do they leave academia?

18 09 2010

My entering class of 2002 at Emory University consisted almost entirely of women, with the exception of maybe 2-3 men in a group of 15 or so people.  This was a complete fluke–almost everyone who received an offer from Emory chose Emory as their top pick that year, to the chagrin of many fine graduate neuroscience programs. In retaliation, other schools moved their deadlines up the following year. I felt lucky to have such a large, diverse class, like I had a better sampling of the population of future neuroscientists.

Coming from a class full of intelligent, driven women, I wonder why most major university departments continue to be filled with men.  Maybe there just hasn’t been enough time, you say.  But I think the problem runs deeper still.  According to the findings presented at the National Summit on Gender and the Postdoctorate, fewer women from the get-go are considering becoming a Principal Investigator (P.I.) and running laboratories.  I wouldn’t at all say that the men in my program were obviously more capable than the women of running a lab. In fact, I’d probably say the strongest scientists were women–no surprise when most of the class is women; the odds are in their favor. According to this same data set, equal numbers of men and women of equivalent age are in the postdoctoral workforce, spending equivalent periods of time in postdoctoral positions.

The differences emerge when you look at the demographics of married men vs. married women.  Married women with children are underrepresented in the workforce. According to a general census in 2004, 75% of women and 60% of men between 30-34 have children. Most of these women are not postdocs.  Do female postdocs have the resources they need to be postdoc moms?  What about childcare? According to this study, about 40% of male postdocs have a female spouse who does not work, whereas 8% of female postdocs have a spouse who stays at home. Further, 42% of male postdocs have spouses who shoulder the childcare, whereas only 16% of female postdocs have a spouse who can provide free childcare.  Why is childcare such a big deal?  Have you seen an average postdoc salary?  Well, I’ll give you a clue–working at Starbucks or Dollar General as a manager would give you equivalent if not more pay.

But all right, is this something that’s important to women?  Do female scientists want to have babies? 60% of female postdocs felt this was important. Yet 30% of women polled said they would be more likely than their spouses to make concessions for their career, while 30% of male postdocs expected that their spouses would be the ones making the concessions.  A PhD is, after all, not an easy road. Once you get it, you want to make use of it.

These data show that most women want to have a baby, and with the average postdoc aged 30-34, the clock is ticking. And as progressive as you’d think an educated woman with a PhD might be, her spouse doesn’t always tend to be equally progressive–meaning many men feel they should be working and not in the home.  However, these statistics may be changing.

Emory University is touted as one of the “Best Places to Work for Postdocs.”  But at the time women are graduating and making that commitment to go the P.I. route, what kind of options do they have if they consider having children? Here’s what Emory’s Office of Postdoctoral Education states (this goes for almost any academic institution, though):

“…postdocs must use both paid vacation leave and disability leave before sequentially taking unpaid leave up to 12 weeks” and “By the Family Medical Leave Act (FMLA), Emory University postdoctoral fellows have a safe position for twelve (12) weeks of leave for family reasons.”

Their HR website states for faculty that they understand having a child is a “natural process,” blah, blah.  For subhuman postdocs, however, they also allow you to apply for “disability.” Semantics or not, this is really abhorrent, rampant sexism at its finest. But that’s just what this kind of insurance is called, you say? Yes, but it also implies that having a child is not at all natural, but rather a “disability” that women bring with them into the workplace. What does the popular mind think of when hearing the term disability?  Broken bones, a thrown-out back, something that’s not supposed to happen to your body. At least the FMLA ensures that you won’t get fired during this time. At least not for up to 3 months. Wouldn’t it behoove society to have its brightest procreating and raising more bright minds?

But here’s some food for thought :

U.S. Only Industrialized Nation With No Paid Leave For New Parents

That’s right, according to the International Labor Organization, you could get better maternity leave in Somalia or in the Congos as a postdoc or otherwise.

The second big issue reported in these data is lack of confidence, which can be intimately related to the pressures of trying to raise a family.  Women report feeling inadequate in a number of characteristics deemed necessary to succeed, including “competitive drive” and “aggressiveness.”  Academic success is directly related to publications and to successfully getting funding.  As these are all extremely time sensitive, the “competitive drive” must translate into prioritizing the timing of grant acquisition and publications over the timing of starting a family.

This is not to say that every female scientist wants to have a baby, but it should be considered just as “natural” to want one.  The dwindling number of female colleagues as one progresses through the ranks from graduate student to faculty does not inspire confidence in women floating in a sea of male-dominated institutions. However, to be fair, while insecurities and “imposter syndrome” are a problem for women, they are an oddly prevalent sentiment among both male and female high-achieving academics.

We have a long way to go, but these issues are beginning to be addressed with forums such as the Summit on Gender and the Postdoctorate hosted by the National Postdoctoral Association, and with the mentorship encouraged by local chapters of organizations like the Association for Women in Science and Women in Neuroscience. Perhaps the most important immediate, daily line of action is to keep these topics in regular rotation in workplace conversations in academia. Throughout their training, women (and men) in science should continue to have open, ongoing discussions about these concerns with both female and male role models. Most of all, these conversations should feel normal and natural, rather than like something that alienates women from their peers.





Neuroethics Education

20 08 2010

Why is neuroethics not a required class for graduate students in PhD programs? Most current PhDs in neuroscience probably couldn’t tell you precisely what neuroethics is.  We can consider neuroethics a two-sided coin.  One side of the coin is the ethical implications of neuroscience research findings and technologies for society. The other side is the study of the ethics of the human brain, such as morality, ideas of truth, etc.  Typically these are explored with imaging studies where parts of the brain “light up” when engaged in a thinking task (or even when engaging in “thinking about not thinking,” believe it or not). For the sake of this entry, let’s consider the former: the ethical implications of neuroscience research.

In a recent conversation, a colleague suggested that to understand ethics, one needs a background in readings such as Kant, and that neuroscience PhDs just aren’t familiar, or willing to become familiar, with that work.  I have noted that “philosophy” has generally been a derogatory word in my department, used to give a name to things they could not explain and/or understand–“It’s too philosophical.” If you’re a scientist, you’ve no doubt heard this statement in passing, or maybe, unfortunately, said it yourself.  But this is exactly the opposite of what philosophers do.  A good philosopher very critically and systematically tries to explain and understand things and the connections between them, and constantly questions adopted world views.   It’s actually not too different from the goals of good science in that regard. My PhD mentor used to say that good science “changes the way we think.” We’re all trying to describe the world, but perhaps in seemingly different languages. In a world where it’s possible to use brain imaging to study sacred values and to utilize new technologies that attempt to “wake” those in a minimally conscious state, it’s high time we start learning to be bilingual.

But do we need to send our neuroscience graduate students to philosophy classes? Maybe sending aspiring neuroscientists to the philosophy department is not the answer. And maybe students don’t need to know the names of all the current philosophers, but they do need to know the concepts and how these concepts are relevant to their research.  Currently, ethics courses are required for all pre-doctoral and post-doctoral fellows on grants from the National Institutes of Health (NIH).  While there are sometimes opportunities to take longer courses, programs typically offer a 1-2 day seminar-style “Responsible Research Conduct” class, which students begrudgingly attend. This is likely spurred by the mentor’s less-than-enthusiastic attitude about the student’s absence from the lab. However, most new graduate students enter science more than eager to make a meaningful contribution to society. Having graduate students engage with the ethical and broader societal implications of their work should be a necessary supplement to neuroscience graduate training, or even part of the dissertation defense (something perhaps some current faculty would not be equipped to discuss).  For some students with more “basic” science projects, this may seem like an impossible task. But it’s important to remember that this is a critical mental exercise that will be necessary when applying for grants from agencies such as the NIH, which will require you to describe how your work is relevant to public health.

But neuroethics has been around a long time, you say.  True, but not as a discipline. While addressing general bioethics concerns has been a congressional mandate since the 1970s, neuroscience as a discipline has made vast strides and refinements. In fact, with new neurosurgical precision, individual brain nuclei can be activated with electrode implants the diameter of two human hairs, and we’re now moving into a technological era where individual cells can be genetically color-coded and individually activated with lasers.  The scale and intricacy of neuroscience questions make it a societal imperative to ask neuroscientists to push beyond comfortable boundaries and to dip a toe into the deep end of philosophical inquiry.  According to some, the neuroethics discipline is only 8 years old. The first meeting of the four-year-old Neuroethics Society was held in November 2008. One of the leading bioethics journals, The American Journal of Bioethics, has now added a regularly issued neuroscience journal to its family, bringing the grand total of neuroethics journals to two. Only two major universities in the U.S. have neuroethics programs, with even fewer opportunities for funding.  Fellowship programs are limited to “Bioethics” fellowships or Health and Medical Ethics fellowships without a specialty in neuroscience research. Indeed, glancing over the “former fellows” of one major bioethics fellowship from the National Institutes of Health (NIH) shows that the majority of fellows were philosophy or public health PhDs, JDs, or MDs. Where are the neuroscientists in these conversations?

Today’s reality is that neuroethical examination needs to be more of a priority for neuroscientists.  As neuroscience and its accompanying emerging technologies enter the realm of popular media with increasing frequency, neuroscientists need to decide whom they want leading and influencing conversations about how to use this research. Neuroscience students should be having these conversations on a regular basis early in their careers, perhaps through regular interdepartmental journal clubs. The NIH has even issued a call for the development of “personalized” healthcare, meaning that instead of a one-size-fits-all therapy, future medical treatments will take into account a host of intersecting elements such as environmental, socioeconomic, and behavioral factors. That means we will need to address medical care with a multi-faceted approach drawing on specialists from a diverse set of disciplines. Fortunately, interdisciplinary programs have become more visible in larger universities. For example, at Emory University, research fellowships were previously available for interdisciplinary studies that intersect with neuroscience, as well as broader teaching fellowships in which postdoctoral fellows and graduate students from a variety of departments teach a topic under a unified theme, each drawing on their respective expertise.

Enthusiasm would also increase were more funding to go in this direction. And funding must go in this direction in order to add neuroethicists to the academic roster. But is this a chicken-or-egg question? How will funds get appropriated in this direction if neuroscientists aren’t interested in proving that they need to do this work?  This is where society needs to make the call.  Engaging the public’s interest in rigorous neuroethical inquiry is an important piece of this puzzle.  An educated public benefits everyone. And the public needs to be informed about recent scientific findings rather than having them delivered with flashing graphics and dramatic music.  They need to decide if they want to take part in how neuroscience advances benefit them. Future posts will aim to continue exploring these ideas and how to make neuroscience findings, and their role in society, more accessible to specialists and non-specialists alike.