American Journal of Bioethics Neuroscience Publication

30 04 2011

Paul Boshears of the Europäische Universität für Interdisziplinäre Studien and I published an Open Peer Commentary in the American Journal of Bioethics this April 2011. The article addresses important issues and warnings regarding over-interpreting neuroimaging data.

Here is a draft of the article “Ethical Use of Neuroscience,” but the final publication can be found here.


Levy’s essay (2011) claims that some intuitions leading to one’s moral judgments can be unreliable, and he proposes the use of a more reliable, third-party, empirical measure. It is commendable that Levy attempts to work beyond traditional bounds; however, the author’s use of fMRI data to support an argument about intentionality is questionable. As neuroscientists, we rely upon evidence-based thinking and conclusions to create generalizable knowledge, and while fMRI data can be informative in broad correlational accounts of behavior, relying upon these data as reliable measures of intuition is arguably just as speculative as the first-person account. It is deeply concerning that society may attempt to apply these data in the manner Levy describes. Indeed, alarming misappropriation of fMRI and EEG data for commercial purposes and as evidence in criminal cases–thereby establishing legal precedents–has already begun.

Levy brings into question the appropriate context in which to use neuroscience as a tool–specifically in illuminating moral decision making. We share with Levy an enthusiasm for neuroscience, and it is enticing to think that in learning how the brain operates we will thereby better understand how the mind operates. Problematic, however, is Levy’s belief that fMRI studies demonstrate how brain regions “function to bring the agent to think a particular action is forbidden, permissible, or obligatory.” This is something fMRI simply cannot do, as it is a technique developed to represent mathematical constructs, not to detail physical mechanistic processes. We believe the essay depicts scenarios beyond the limitations of what is truly testable by neuroscience, and this could facilitate unintended unethical applications of neuroscience.

Because imaging data have become so compelling and headline-grabbing, we focus on addressing these data. Our concern is that one more professional-sounding voice will influence society with scientifically unfounded claims about what current technology can do, leading to unethical exploitation of neuroscience findings. This is especially important given evidence that simply referencing neuroimaging data can bias the public’s evaluation of papers (McCabe and Castel, 2008; Weisberg et al., 2008). It is, therefore, necessary to outline the limitations of fMRI and EEG technologies. What is brain imaging, actually? What can it tell us? What are the limitations of how these data can be interpreted?

Functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) are noninvasive techniques that indirectly measure brain activity (for an extensive review see Shibasaki, 2008). Magnetic resonance imaging uses electromagnetic fields and radio waves to reconstruct images of the brain. Functional MRI detects changes in blood flow by tracing oxygenated and deoxygenated blood. Changes in blood flow are calculated by statistical software and then colorized in a constructed brain image based upon mathematical modeling. When imaging the brain of a person making a moral decision, for instance, one might identify gross changes in blood flow in some areas versus others. This may be called activation (more oxygenated blood brought into the brain area of interest) or deactivation (less oxygenated blood). The spatial resolution of fMRI allows fairly accurate reconstruction of activated structures. However, the actual neural activity generating these changes is not identified, and the blood flow changes could originate from areas centimeters away from the activated or deactivated region (Arthurs and Boniface, 2002).
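To make “calculated by statistical software and then colorized” concrete, here is a deliberately toy Python sketch of the underlying idea: each voxel’s signal over time is compared against a model of the task, and only voxels passing a statistical threshold get “lit up.” Real analysis packages add hemodynamic response modeling, spatial smoothing, and multiple-comparison correction; every number below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task design: 1 during "moral decision" blocks, 0 during rest.
task = np.tile([0] * 10 + [1] * 10, 5).astype(float)  # 100 time points

# Toy BOLD time series for 3 voxels: one weakly tracks the task, two are pure noise.
voxels = np.stack([
    0.5 * task + rng.normal(0, 0.2, task.size),  # voxel that follows the task
    rng.normal(0, 0.2, task.size),               # noise-only voxel
    rng.normal(0, 0.2, task.size),               # noise-only voxel
])

# Correlate each voxel with the task regressor, then threshold.
# Voxels passing the threshold are the ones that would be "colorized".
r = np.array([np.corrcoef(v, task)[0, 1] for v in voxels])
active = r > 0.5
```

Note that “activation” here is nothing more than a correlation passing a statistical threshold: the sketch identifies which voxels track the task, but says nothing about what the underlying neurons are actually doing.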

EEG utilizes a series of sensors affixed to the scalp, and these scalp recordings are used to describe electrical fields emanating from the cortex. EEG cannot detect activity of deep brain structures, unlike fMRI, which can detect changes in both cortical and deep brain structures. Functional MRI can detect changes within seconds; EEG can detect changes within fractions of a second, giving better temporal resolution of actual neuronal firing rates. Electroencephalographs have relatively poor spatial resolution, but can be combined with higher-resolution techniques, such as fMRI, to give more informative data. Neither technique has the spatial resolution to detect the activity of individual or specific types of neurons. Rather, these techniques detect networks and groups of neurons on the order of thousands to millions. Knowing which types of neurons are activated would give us more mechanistic information. Based on our anatomical and functional knowledge of specific types of neurons, we can predict where these neurons project in the brain and to what extent, as well as what kind of neurotransmitters they release. Knowing the chemical phenotype of a neuron gives us important distinctions. For example, an activation of excitatory neurons (which release excitatory transmitters) would not have the same effect as an activation of inhibitory neurons. In addition, recent data (Koehler et al., 2009; Attwell et al., 2010) suggest that fMRI detects blood flow regulation by glia, not neurons (the brain cells classically known to mediate synaptic transmission), calling into question how fMRI data might alternatively be explained and what fMRI actually tells us about brain function. Importantly, it is unclear whether the changes neuroimaging data depict are indicative of causative factors or simply after-effects.

Overall, we do not doubt the statistical rigor and analyses of researchers, and our simplified description of these techniques is not meant to devalue or undermine the contributions of neuroimaging data. However, we must remind ourselves that the brain is composed of much more than blood vessels and electrical fields, having more complexity than can be described with neuroimaging techniques alone. We must caution against over-interpretation of these exciting data and call for the responsible incorporation of these studies into interdisciplinary pursuits that aim to describe the human mind.

When considering how any scientific data might translate into something as complex as moral behavior, we can do little more than show correlation. While some brain areas may show some degree of specialization, such as in “reward pathways,” it is apparent that brain areas work in concert to serve multiple functions and cannot accurately be assigned exclusive roles in consequentialist-based or emotion-based moral decision making. When interpreting areas of brain activation, we must also consider the variety of functions that each brain region can have. If we could (1) identify individual cells in the brain as the smallest unit of moral processing that were (2) active exclusively during consequentialist and not emotion-based moral decision making, and (3) describe all of the requisite circuitry–then these data might have applications as the author describes. However, this is not the case. The studies cited in Levy’s paper are purely correlational with behavior and in no way directly describe the biological construction of specific thoughts, intuitions, or morally (ir)relevant processes. Research has not demonstrated that these brain regions are the sites of generating moral intuitions, nor identified that the essence of morality or intuition is stored somewhere in these neurons. We can make some broad conclusions from neuroimaging data about what brain regions might be involved in mental states or thought processes, but we cannot draw conclusions about moral constitution.

Levy’s invocation of a future constructed “neural signature of intention” from fMRI or EEG data ignores what the brain is designed to do best and what artificial intelligence engineers have the most difficult time re-creating: the brain’s plastic, ever-changing, and adaptive nature. Experimental models of learning in brain cells have repeatedly shown that experience can strengthen or weaken connections between cells, both locally and across different areas of the brain, making it unrealistic, and potentially ethically dangerous, to imagine a fixed moral signature. Researchers should take care to avoid interpretations that conspire with assumptions that the mind is the brain, thereby implying the brain itself is the moral agent. One should also question the accuracy of stating that a brain region or set of cells is the seat of moral agency.

While Levy is concerned with expanding the ethicist’s toolkit through neuroscience findings, we wonder about the ethical implications of using neuroscience in a manner that seems to ascribe moral agency to the brain alone. Levy describes scenarios where neuroimaging could be used to discriminate intent. What is to say these data would not be used to predict mal-intent, as the Department of Homeland Security’s (DHS) and Transportation Security Administration’s (TSA) Future Attribute Screening Technology (FAST) aims to do? Indeed, neuroimaging technologies have already been (mis)appropriated in the courtroom (Brown and Murphy, 2010) and in some cases for questionable commercial activity (Farah, 2009), such as the business of lie detection (Greely and Illes, 2007).

We applaud Levy’s creativity and concern for expanding and improving interdisciplinary ethical discourse. However, we suggest that caution be exercised to avoid using neuroscience beyond its limitations. We also advise that scientists must be the ethical stewards of their work. While neuroscience continues to deliver exciting findings, the considerable beauty and complexity of the brain has yet to be fully understood.


Arthurs, O. J. and Boniface, S., 2002. How well do we understand the neural origins of the fMRI BOLD signal? Trends in Neurosciences. 25: 27-31.

Attwell, D., Buchan, A. M., Charpak, S., Lauritzen, M., Macvicar, B. A. and Newman, E. A., 2010. Glial and neuronal control of brain blood flow. Nature. 468: 232-243.

Brown, T. and Murphy, E., 2010. Through a scanner darkly: Functional neuroimaging as evidence of a criminal defendant’s past mental states. Stanford Law Review. 62: 1119-1208.

Farah, M. J., 2009. A picture is worth a thousand dollars. Journal of Cognitive Neuroscience. 21: 623-624.

Greely, H. T. and Illes, J., 2007. Neuroscience-based lie detection: The urgent need for regulation. American Journal of Law & Medicine. 33: 377-431.

Koehler, R. C., Roman, R. J. and Harder, D. R., 2009. Astrocytes and the regulation of cerebral blood flow. Trends in Neurosciences. 32: 160-169.

Levy, N., 2011. Neuroethics: A new way of doing ethics. AJOB Neuroscience.

McCabe, D. P. and Castel, A. D., 2008. Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition. 107: 343-352.

Shibasaki, H., 2008. Human brain mapping: Hemodynamic response and electrophysiology. Clinical Neurophysiology. 119(4): 731-43.

Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E. and Gray, J. R., 2008. The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience. 20: 470-477.


A Pale Blue Dot

30 12 2010

To ring in the New Year, here is an excerpt from Carl Sagan’s book where he reflects on the photograph of Earth, “The Pale Blue Dot,” taken from the spacecraft Voyager 1 in 1990.

From this distant vantage point, the Earth might not seem of particular interest. But for us, it’s different. Look again at that dot. That’s here, that’s home, that’s us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every “superstar,” every “supreme leader,” every saint and sinner in the history of our species lived there – on a mote of dust suspended in a sunbeam.

The Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot. Think of the endless cruelties visited by the inhabitants of one corner of this pixel on the scarcely distinguishable inhabitants of some other corner, how frequent their misunderstandings, how eager they are to kill one another, how fervent their hatreds.

Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves.

The Earth is the only world known so far to harbor life. There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the moment the Earth is where we make our stand.

It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot, the only home we’ve ever known.

Extra, Extra: Eternal Sunshine of the Spotless Mind, in real life!

28 12 2010

I was recently sent this article by an esteemed veterans officer who is deeply concerned about the welfare of our nation’s veterans plagued by PTSD.

Traumatic Memory Erasure on Horizon
November 23, 2010, Baltimore Sun

The article triggered a lot of comments from readers: questions about mind control, jokes about forgetting ex-spouses, and government conspiracy and poisoning theories. The article was also splattered all over the media, with titles like “Fear Deleted!” and “Memories Erased!”

I have to blame the authors, in part, for this uproar. In an attempt to make the data sexier (scientists have to market too), they chose the title “Calcium-Permeable AMPA Receptor Dynamics Mediate Fear Memory Erasure” for their manuscript.

While it’s clear that the researchers’ intent is therapeutic, the news article ends with a cautionary comment: “…trying to eliminate all the memories could significantly alter a person’s personality and history. So could forgetting a whole person after a painful loss or breakup, as depicted in the 2004 movie ‘Eternal Sunshine of the Spotless Mind.’”

There’s no question where the writer leaves the reader: wondering whether scientists have the power to erase our minds and memories, and ultimately who we are.

I too believe that these pharmacotherapies for PTSD need to be monitored in their progress and application, but not for the same reasons. Ultimately, I don’t believe that these technologies have the ability to erase memories, and quite frankly the researchers don’t describe this at all. The current studies and data are not really about erasing “memories.” They are about weakening the connection between powerful, debilitating emotions (like extreme fear) and an event. You would still remember the gruesome details of bombings, etc., but you wouldn’t feel such debilitating fear when you remembered them, and you’d also be less likely to fixate on that fear and generalize it to, say, a motorcycle backfiring.

Current PTSD research aims to disrupt strong, unwanted emotional associations with memories–in particular, fearful memories. Fear is a healthy thing. It is evolutionarily favorable to have a sense of fear and to know when to generalize those fears (some snakes are poisonous; I should use caution when I see a snake). A healthy brain is also adaptable and flexible (I should use caution when I see a snake, but in this case, I’ve just seen a stick in the leaves). However, people who suffer from PTSD have their fears hijacked, so that fear is no longer tied only to a specific event but has become generalized (all stick-like objects are just as bad as an actual snake; all sticks induce fear like snakes do). Research is working to dissociate the memory of an event from a debilitating emotional response.

Memory formation and learning work in two ways: (1) making associations and (2) breaking associations. Breaking and then re-making associations is what allows us to learn and adapt to our environments. Neuroscientists like to call this “plasticity” in the brain. A healthy brain is “plastic,” not stiff and unchanging. A healthy brain is constantly building and rebuilding, kinda like the Fraggle-Doozer relationship. Neuroscientists find this fascinating and believe that understanding plasticity, and how to manipulate it, could have therapeutic benefit for a wide range of psychiatric and even movement disorders.

Drugs in development, such as the ones being studied by Dr. Huganir, the researcher featured in the above article, are typically tested in a similar model. The animal model is typically a rodent exposed to stress. The rodent is placed in a box with two compartments. When the rodent is in compartment A, it receives a mild shock; when it is in compartment B, it doesn’t. The rodent naturally learns to avoid compartment A (even when the shock is no longer administered) because bad things tend to happen there. This sounds simple, but a lot of behind-the-scenes action is happening in the brain in order for this behavior to manifest. Researchers try to interrupt this learning process with various drugs, and when a drug succeeds, they suggest it may be helpful in disrupting unpleasant stimulus+emotion associations.
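For readers who like to see the moving parts, the acquire-then-unlearn logic of this kind of experiment can be sketched with the classic Rescorla-Wagner learning rule. To be clear, this is a textbook toy model, not the molecular mechanism Dr. Huganir’s group studies, and the “drug” here is simply assumed to speed up extinction learning:

```python
def rescorla_wagner(v, rate, outcome):
    """One learning trial: nudge association strength v toward the outcome."""
    return v + rate * (outcome - v)

def run_phase(v, rate, outcome, trials):
    for _ in range(trials):
        v = rescorla_wagner(v, rate, outcome)
    return v

# Acquisition: compartment A is repeatedly paired with a shock (outcome = 1),
# so the fear association v climbs toward 1.
v_acq = run_phase(0.0, rate=0.3, outcome=1.0, trials=20)

# Extinction: compartment A with no shock (outcome = 0), so v decays.
v_ext_plain = run_phase(v_acq, rate=0.3, outcome=0.0, trials=10)

# Hypothetical drug: modeled as a faster extinction learning rate.
v_ext_drug = run_phase(v_acq, rate=0.6, outcome=0.0, trials=10)
```

In this toy, both animals eventually stop fearing compartment A; the hypothetical drug just gets there in fewer trials, mirroring the idea of disrupting an unpleasant stimulus+emotion association rather than deleting the memory itself.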

You may argue that this certainly isn’t PTSD. And no neuroscientist worth her salt would say such a thing. However, many of these drugs have gone from the rodent-model phase to the human PTSD therapeutic phase in just this manner. Researchers have shown that some of the drugs that disrupted the unpleasant stimulus+emotion association in rodents also helped PTSD patients, usually in combination with behavioral therapy. In some cases, PTSD patients given the drug show greater progress (fewer therapy sessions needed, lower levels of anxiety, etc.) than their behavioral-therapy-alone counterparts.

The patients absolutely do not have a hole in their memory. They haven’t forgotten, for instance, that they fought in a war, or that people died, or that they are married and are from Idaho. I guess, to some, it may seem disappointing that neuroscience can’t always live up to the expectations of sci-fi movies. In this regard, I don’t worry about erasing people’s minds or memories, nor am I disappointed in the findings.

I can see some benefit: a rape victim takes a pill as part of initial care in the ER to prevent the onset of PTSD (which, by the way, does not necessarily kick off immediately after the traumatic event–memories take a long time to form). But I worry about messing with our emotional responses to things in general. My biggest concern is that these drugs weaken some memories but can strengthen others. For example, one drug being tested in veteran PTSD patients has been shown to improve behavioral therapy sessions and weaken fearful memories. The same drug has also been shown to increase positive associations with drugs of abuse (it promoted cue-induced cocaine relapse in rodents) and can increase memory abilities when given at higher doses (in Alzheimer’s patients). Given that many veterans with PTSD also suffer from drug addiction, discussions about these drugs (for ‘erasing memories’) should include how patients might be even more compelled to take drugs of abuse or have more difficulty in their drug treatment programs. Or consider a seemingly less threatening ‘problem’: would PTSD therapies qualify as cognitive enhancement if used outside the recommended dosing regimens? These questions may not seem as magical as wiping one’s mind clean, but they have equally powerful ethical implications for society. And I’d like to invite readers to explore these (more immediate) concerns before worrying themselves about their spotless minds.

The Exposome: Finally, a way to measure nature vs. nurture.

15 12 2010

Today I attended The Sixth Annual Symposium on Predictive Health, Human Health: Molecules to Mankind, at the Emory Conference Center. The tagline was ambitious and meant to inspire: “THE END of DISEASE, the BEGINNING of a NEW KIND of HEALTH CARE.” I was only able to attend Session V, “Ethical Manipulation of the Human Exposome.”

The Exposo-wha??? Let’s back up.  Remember the genome? Remember when we sequenced the human genome 7 years ago, and people were really excited because this meant now we would not only understand what it meant to be human, but also how to predict and prevent every disease from which humans suffer?  Goodbye aging, goodbye sickness. Hello, ever-lasting health and answers to the previously unanswerable questions about humanity. Why didn’t that happen?

Well, it goes back to nature vs. nurture. You are the cumulative result of your genes and your environment. Genes might predict your susceptibility to developing diseases, but they rarely cause a disease on their own. Given that environments are so complex and so varied from person to person, it’s staggeringly difficult to fully understand how all these variables will interact with your genes. Enter the exposome. The exposome is a new body of generalizable data that explicitly addresses the intersection of your genes and your environment: a map of all your environmental exposures.

One example of the exposome is the metabolome, a map of the stuff your body has metabolized. A metabolite represents something that has passed through your body’s cellular processes and can be measured by taking a blood, urine, or plasma sample. By collecting your metabolite profile, researchers can get a map of clues to your environmental exposures, and then possibly predict what diseases you may develop or what may have caused a disease you already have. These data can be combined with your genetic data to better understand how your genes made you (in)capable of metabolizing agents in your environment (whether emotional stress or plant pesticides). As you can imagine, your body responds to a number of agents at any given moment and can also be influenced by the current circumstances of your exposure (e.g., are you already sick, are you young, are you old, are you a healthy eater, etc.). So clearly isolating one culprit in disease causality isn’t as easy as it seems, even with the human genome sequenced. In addition, some things are metabolized and quickly broken down, leaving barely a trace. Some things leave a longer-lasting trace, and others leave a temporary trace that you might only see at night or early in the morning. Finding the right window to detect metabolites can also present a challenge.

Despite these challenges, we shouldn’t underestimate the power of combining data from the Human Genome Project and, now, the Human Metabolome Database: it could have amazing consequences for health care and the way we live.

At today’s symposium, some researchers stated that they were a bit puzzled about why they were asked to discuss the ethical implications of their work, saying “I’m not an ethicist,” or stated that they felt their job as *public health* researchers was to put a wall between their research and how their data might affect legislation. They weren’t the first scientists with a laundry list of excuses for not getting involved with ethics. While I was a bit disappointed with these responses, I was glad there was enough interest to devote one of the sessions to ethical discourse. Ethics sessions like these are necessary to ensure that public health researchers are not blindsided by how their findings might actually hurt, not help, the public if they don’t understand how to maximize the benefits of their work. While some interesting points were brought up during the session, I still wanted to know their thoughts, as public health researchers, on how this might actually change or lead to “a NEW KIND of HEALTH CARE,” as their flier promised.

The Department of Health and Human Services (which is in charge of helping to determine your health and healthcare) has a mission to generate not only preventative but personalized medicine. Metabolomics could fit very nicely with these goals. Metabolomics could tell you how to prevent certain diseases caused by unintentional exposure to toxins such as pesticides in the environment. It could also tell you how to prevent diseases by avoiding behaviors that would tip your genetically vulnerable self into a state of disease. It could revolutionize the way we live, turning us into healthier, longer-living, happier humans.

But what else could it do? What are other ways the exposome could impact the way I live?

First, we need to better understand exactly how strong the predictive power of metabolomics is for humans. Don’t these studies tell us more about association than actual causation? Many follow-up basic research studies will need to be done to confirm causality. And what if my metabolic profile as an adult tells a sad story–my unfortunate environmental exposure profile has destined me to get a terrible, incurable disease–what will I do with that information? Should I just take the cyanide pill and warn my children not to make the same mistakes? Would the average citizen know how to interpret their metabolome results, or would hospitals now need a staff of genetic and metabolomic counselors? Will my health insurer need to be informed of my pre-existing metabolome condition? Should my healthcare provider know this information? After all, wouldn’t it help my doctors give me better treatments and more personalized medicine? Would I be required to tell my life insurance agent, my employer, or my employer’s lawyer? Extreme care will be needed to ensure that exposome data are secure and in the right hands.

How will this change the way we view “disease” and accountability? Environmental toxins like lead or pesticides are not the only bad things you’re exposed to in your environment. Certainly everyone wants big business, Pharma, the military, and industry to be held accountable for the exposures the public unknowingly gets. But what about known, voluntary exposure to toxins? The passive suicide cocktail of bad eating habits, smoking, and not controlling your stress or exercising? This will all show up in your metabolome. Remember when drug abuse and depression were thought of as moral failures? Sure, some people still think this, but the popular mind has grown to understand that these conditions actually have a physiological substrate just like any other bona fide disease. Now consider Parkinson’s disease or Alzheimer’s disease. These are diseases people don’t generally assume you have due to a moral deficit. Parkinson’s disease is linked to unknowing, involuntary environmental exposure to pesticides. What if it were linked to a series of voluntary choices? Would we then say things like, “You gave yourself Parkinson’s disease”? How these data could and should be used will need to be clearly explained to the public.

In fact, one could argue that all the activities in your history, from your emotions to the foods you’ve ingested, will be identified in your metabolome–it could maybe even replace a fingerprint. Who should have access to, or own, this information? Would certain exposome patterns be used to predict bad behavior? If growing up in a low socio-economic area resulted in poor nutritional patterns that predicted subsequent criminal behavior, should preventative measures be taken? The session at today’s symposium was about *manipulation* of the human exposome–should we manipulate a person’s exposome to try to change or pre-empt his/her undesired behavior? Can this even be done? These are the types of basic research experiments that need to be done in parallel with the human studies: not just asking what the associated changes in the exposome are, but whether we can change them, and what changing them would do for people and society. This information will also be required for making new health care policy. It is critical that the researchers doing this work be able to translate these data for public audiences. Researchers need to think more deeply about the ethical consequences of their work. You don’t need to be an ethicist to do this; you just need to think critically and genuinely care.

Are scientists losing moral authority?

13 10 2010

Unlike lawyers and politicians, scientists tend to enjoy a bit of moral authority and credibility in the public eye. People assume that scientists work for facts: findings that are repeatedly replicated, ruthlessly scrutinized and interpreted, and only then published, with the highest of ethical standards. And naturally, all the while being driven only by their love of truth and of advancing knowledge about the world we live in–a greater-good type of thing.

This is why Climategate caught a particularly vulnerable public off guard: “Whaaa? I expect this from a slippery politician, but scientists talking about eliminating the competition?!” When I first heard about Climategate, I was actually a bit annoyed at the uproar. I thought this was a bunch of people overreacting. Sources of global warming are real–just look around you, if you can see through the smog. I live in Atlanta, AKA “Car City.” My downstairs neighbors, a couple, own 4 cars and rent the condo beneath me. They always have the TV blaring. I also live next to a huge park with a big biking/running trail. My previous neighbor owned a treadmill.

The public doesn’t understand how science works, I thought. And they probably still don’t understand that the issue isn’t whether global warming happens or not; it’s whether we the people caused it. But I’ve become less naive about how science, and the personalities within science, “work” (see previous post). Scientists are people too, with all the same insecurities, poorly executed ideas, and dastardly plans for their competition. Now, this doesn’t describe all of us. I know many people who are interested in the truth, collaboration, real clinical outcomes, and the overall reduction of suffering. I do wonder sometimes if the mechanics of being a scientist have created a bit of an obstacle course on the way to those goals. But on the other hand, I often wonder how our funding mechanisms may have created a culture where we actually succeed–not via toxic competition necessarily, but via healthy appropriation of funds to truly innovative science. I’d like to share some of the challenges of being a scientist with non-scientists, so you won’t be so caught off guard next time.

1. New graduate students and frustrated postdocs like to say that big egos are the problem, but really everyone is big-time afraid. Really afraid–of not being smart enough, not coming up with ideas fast enough, not getting funding for next year, not keeping up with the most recent findings and technologies, not publishing in time, and of getting too old to keep up with all these fears. This pretty much never goes away. Most of these fears are learned in graduate school and then typically stay with you throughout your career (if you plan on climbing to the top to have your own lab–which you’re also more than encouraged to do). Also, choosing a career outside of academia is gaining acceptance, but is generally frowned upon by older mentors and even by young aspiring scientists.

2. Graduate students and postdocs are the workhorses of the university. Science graduate students generally have their tuition waived and salaries covered by grants, mostly from the government (National Institutes of Health). Your U.S. tax dollars pay for us. Postdocs are *supposed* to spend a relatively short time at the university, although the “permanent postdoc” position is becoming more and more common. These grants all run on strict timelines, generally between 1 and 5 years. For example, once you’ve been a postdoc for 5 years (in the U.S.), you’re no longer eligible for independent funding (although things may change in the future). This means your goose is cooked. The idea being: if you didn’t make it by now, you’re probably not going to make it later. This sentiment is pushed onto students, postdocs, and faculty throughout their academic careers. Publishing high-profile data helps you keep getting that funding, and people get desperate for these publications. Without the funding, you’re dead in the water. Graduate students feel a similar pressure. While many are guaranteed funding from the university even if their adviser loses his/her grant, others aren’t. In addition, many graduate students at larger, prestigious schools (especially in the U.S.) are expected to have a couple of good publications by the time of graduation. My advisor said one per year. If your project isn’t working, you continue getting paid your little stipend, but you watch all your friends graduate as you enter your 7th year as a graduate student. Not to mention how underrepresented women become the longer you stay in the academic arena.

But maybe the more important question is: *should* scientists have had moral authority to begin with? Let’s explore some common perceptions of scientists.

1. “Scientists know the ‘facts.’ We should accept and memorize these facts.” Scientific inquiry may involve utilizing a set of given facts. However, the process of scientific inquiry involves looking for a conceptual framework that can be used to explore the world and to draw connections about it. Scientists don’t really find “new” things about the world; they discover new ways of conceptualizing it. Moreover, scientists do not really have answers; they generate more questions. A good scientist’s career is less about finding a fixed endpoint answer and more about discovering what they don’t know and how our current view of the world is incomplete. Being a scientist is very humbling in this way. I never trust a scientist who always has all the “answers,” or one who is uncomfortable saying, “I don’t know.”

2. “Scientists produce findings that can be repeated by everyone (at least by all other scientists).” See Case Study no. 1: Generally, each lab has criteria for repeatability within that lab. When publishing data, all methods are expected to be described in painstaking detail. Before most articles are published, they go through a review process by peers who remain anonymous. When scientists submit papers to academic journals, they can request a set of reviewers (ones who may look favorably upon their work) and even name reviewers to exclude (people the authors know may be competitors; see Case Study no. 4). The editors of the journal can choose to respect these requests or pick additional reviewers the authors didn’t mention. If you’re famous, people tend to scrutinize your methods less; they assume that you know what you’re doing. Also, if your “friend” reviews your paper, maybe they “trust” you and your methods. This can be bad. What people often forget is that busy mentors usually are not directly monitoring the new graduate student or postdoc who is learning the lab’s established technique on his/her own. In addition, common lab methodologies, even within the same lab, can change and evolve over time, often for the better, without the lab head realizing it. But maybe worst of all, people often don’t take their peer-reviewing obligations seriously. Reviewing is unpaid work that one does dutifully out of an ethical obligation to the scientific community; often it’s even considered an honor. While the lab heads are busy making sure grants are coming in and publications are going out, some of these things fall by the wayside. And then you have erroneous data published out there for posterity.

3. “The priority of scientists is to advance knowledge.” Well, it may have started that way when the scientist was a bright, shiny new graduate student. But over time, publication pressures become the main topic of conversation. How will we publish this? These data won’t get published in a very good journal. The concern becomes less about creating a legacy of good scientists and more about a legacy of good publications and survival. As most scientists know, 99% of the experiments performed don’t work or yield inconclusive data. And even within that 1% of successes, only a very small proportion of findings will translate readily to public health or public concerns in the scientist’s lifetime, if at all. Scientists often believe in what they do, but they become distracted from the bigger picture.

Again, many scientists stay true to the moral authority society gives them, but these are the problems they face in the scientific community. Scientific advances genuinely have benefited, and will continue to benefit, public health and the advancement of knowledge, but the public needs to be more critical in analyzing the deluge of scientific information hitting them from every possible medium. A good start is getting a little inside view of the actual “scientific process” and what scientists can realistically, humanly do. Also, it will be important for scientists to have regular “morality checks” and reminders. These could include required ethics courses not only for graduate students and postdoctoral fellows, but also for new and old faculty.