American Journal of Bioethics Neuroscience Publication

30 04 2011

Paul Boshears of the Europäische Universität für Interdisziplinäre Studien and I published an Open Peer Commentary in the American Journal of Bioethics in April 2011. The article addresses important issues and warnings regarding the over-interpretation of neuroimaging data.

Here is a draft of the article “Ethical Use of Neuroscience,” but the final publication can be found here.

*************************************************************************************************

Levy’s essay (2011) claims that some intuitions leading to one’s moral judgments can be unreliable, and he proposes the use of a more reliable, third-party, empirical measure. It is commendable that Levy attempts to work beyond traditional bounds; however, the author’s use of fMRI data is questionable in supporting an argument about intentionality. As neuroscientists, we rely upon evidence-based thinking and conclusions to create generalizable knowledge, and while fMRI data can be informative in broad correlational accounts of behavior, to rely upon these data as reliable measures of intuition is arguably just as speculative as the first-person account. It is deeply concerning that society may attempt to apply these data in the manner Levy describes. Indeed, alarming misappropriation of fMRI and EEG data for commercial purposes and as evidence in criminal cases, thereby establishing legal precedents, has already begun.

Levy brings into question the appropriate context in which to use neuroscience as a tool, specifically in illuminating moral decision making. We share with Levy an enthusiasm for neuroscience, and it is enticing to think that in learning how the brain operates we will thereby better understand how the mind operates. Problematic is Levy’s belief that fMRI studies demonstrate how brain regions “function to bring the agent to think a particular action is forbidden, permissible, or obligatory.” This is something that fMRI simply cannot do, as it is a technique developed to represent mathematical constructs, not to detail physical mechanistic processes. We believe the essay depicts scenarios beyond the limitations of what is truly testable by neuroscience, and this could facilitate unintended unethical applications of neuroscience.

Because imaging data have become so compelling and headline-grabbing, we focus on addressing these data. Our concern is that one more professional-sounding voice will influence society with scientifically unfounded claims about what current technology can do, leading to unethical exploitation of neuroscience findings. This is especially important given evidence that simply referencing neuroimaging data can bias the public’s evaluation of papers (McCabe and Castel, 2008; Weisberg et al., 2008). It is, therefore, necessary to outline the limitations of fMRI brain imaging and EEG technologies. What is brain imaging actually? What can it tell us? What are the limitations of how these data can be interpreted?

Functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) are noninvasive techniques that indirectly measure brain activity (for an extensive review see Shibasaki 2008). Magnetic resonance imaging uses electromagnetic fields and radio waves to reconstruct images of the brain. Functional MRI relies on detecting changes in blood flow by tracing oxygenated and deoxygenated blood. Changes in blood flow are calculated by statistical software and then colorized in a constructed brain image based upon mathematical modeling. When imaging the brain of a person making a moral decision, for instance, one might identify gross changes in blood flow in some areas versus others. This may be called activation (more oxygenated blood brought into the brain area of interest) or deactivation (less oxygenated blood). The spatial resolution of fMRI allows fairly accurate reconstruction of activated structures. However, the actual neural activity generating these changes and the origins of the blood flow changes are not identified and could arise from areas centimeters away from the activated or deactivated region (Arthurs and Boniface, 2002).
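To make concrete what “calculated by statistical software … based upon mathematical modeling” looks like in practice, here is a minimal, purely illustrative sketch of how an activation map is built: a statistic is computed per voxel against a task design and then thresholded. All of the data below are synthetic, and the single correlation step stands in for the much richer pipeline (general linear models with hemodynamic modeling, motion correction, multiple-comparison correction) used in real studies.

```python
import numpy as np

# Illustrative only: an fMRI "activation map" is a per-voxel statistic that is
# thresholded and colorized; it is a mathematical construct built on the
# blood-oxygenation signal, not a direct picture of neural activity.
rng = np.random.default_rng(0)

n_timepoints = 120
task = np.tile([0] * 10 + [1] * 10, 6).astype(float)   # hypothetical on/off task design

# A tiny 4x4 "slice" of voxel time series: mostly noise, one voxel tracking the task.
voxels = rng.normal(0, 1, size=(4, 4, n_timepoints))
voxels[2, 1] += 0.8 * task                              # simulated "active" voxel

# Correlate each voxel's time series with the task regressor, then threshold.
task_centered = task - task.mean()
vox_centered = voxels - voxels.mean(axis=-1, keepdims=True)
r = (vox_centered @ task_centered) / (
    np.linalg.norm(vox_centered, axis=-1) * np.linalg.norm(task_centered))

activation_map = np.where(np.abs(r) > 0.3, r, 0)        # this is what gets colorized
print(np.round(activation_map, 2))
```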

EEG utilizes a series of sensors affixed to the scalp, and these scalp recordings are used to describe electrical fields emanating from the cortex. EEG cannot detect activity of deep brain structures, unlike fMRI, which can detect changes in cortical and deep brain structures. Functional MRI can detect changes within seconds; EEG can detect changes within fractions of a second, giving better temporal resolution of actual neuronal firing rates. Electroencephalography has relatively poor spatial resolution, but can be combined with higher-resolution techniques, such as fMRI, to give more informative data. Neither technique has the spatial resolution to detect the activity of individual or specific types of neurons. Rather, these techniques detect networks and groups of neurons on the order of thousands to millions. Knowing which types of neurons are activated can give us more mechanistic information. Based on our anatomical and functional knowledge of specific types of neurons, we can predict where these neurons project in the brain and to what extent, as well as what kinds of neurotransmitters they release. Knowing the chemical phenotype of a neuron gives us important distinctions. For example, an activation of excitatory neurons (which release excitatory transmitters) would not have the same effect as an activation of inhibitory neurons. In addition, recent data (Koehler et al., 2009; Attwell et al., 2010) suggest that fMRI detects blood flow regulation by glia rather than by neurons (the brain cells classically known to mediate synaptic transmission), bringing into question how fMRI data might alternatively be explained and what fMRI actually tells us about brain function. Importantly, it is unclear whether the changes neuroimaging data depict are indicative of causative factors or simply after-effects.

Overall, we do not doubt the statistical rigor and analyses of researchers, and our simplified description of these techniques is not meant to devalue or undermine the contributions of neuroimaging data. However, we must remind ourselves that the brain is composed of much more than blood vessels and electrical fields, having more complexity than can be described with neuroimaging techniques alone. We must caution against over-interpretation of these exciting data and call for the responsible incorporation of these studies into interdisciplinary pursuits that aim to describe the human mind.

When considering how any scientific data might translate into something as complex as moral behavior, we can do little more than show correlation. While some brain areas may show some degree of specialization, such as in “Reward Pathways,” it is apparent that these brain areas work in concert to serve multiple functions and cannot accurately be assigned exclusive responsibility for consequentialist-based or emotion-based moral decision making. When interpreting areas of brain activation, we must also consider the variety of functions that each brain region can have. If we could (1) identify individual cells in the brain as the smallest unit of moral processing that were (2) active exclusively during consequentialist and not emotion-based moral decision making, and (3) describe all of the requisite circuitry, then these data might have applications as the author describes. However, this is not the case. The studies cited in Levy’s paper are purely correlational with behavior and in no way directly describe the biological construction of specific thoughts, intuitions, or morally (ir)relevant processes. Research has not demonstrated that these brain regions are the sites where moral intuitions are generated, nor identified that the essence of morality or intuition is stored somewhere in these neurons. We can make some broad conclusions about what brain regions might be involved in mental states or thought processes from neuroimaging data, but we cannot draw conclusions about moral constitution.

Levy’s invocation of a future constructed “neural signature of intention” from fMRI or EEG data ignores what the brain is designed to do best and what artificial intelligence engineers have the most difficult time re-creating: the brain’s plastic, ever-changing, and adaptive nature. Experimental models of learning in brain cells have repeatedly shown that experience can strengthen or weaken connections between cells and between cells connecting different areas of the brain, making it unrealistic, and potentially ethically dangerous, to imagine a fixed moral signature. Researchers should take care to avoid interpretations that conspire with assumptions that the mind is the brain, thereby implying the brain itself is the moral agent. One should also question the accuracy of stating that a brain region or set of cells is the seat of moral agency.

While Levy is concerned with expanding the ethicist’s toolkit through the use of neuroscience findings, we wonder about the ethical implications of using neuroscience in a manner that seems to ascribe moral agency to the brain alone. Levy describes scenarios where neuroimaging could be used to discern intent. What is to stop these data from being used to predict mal-intent, as the Department of Homeland Security’s (DHS) and Transportation Security Administration’s (TSA) Future Attribute Screening Technology (FAST) aims to do? Indeed, neuroimaging technologies have already been (mis)appropriated in the courtroom (Brown and Murphy, 2010) and in some cases for questionable commercial activity (Farah, 2009), such as in the business of lie detection (Greely and Illes, 2007).

We applaud Levy’s creativity and concern for expanding and improving interdisciplinary ethical discourse. However, we suggest that caution be exercised to avoid using neuroscience beyond its limitations. We also advise that scientists must be the ethical stewards of their work. While neuroscience continues to deliver exciting findings, the considerable beauty and complexity of the brain have yet to be fully understood.

References

Arthurs, O. J. and Boniface, S., 2002. How well do we understand the neural origins of the fMRI BOLD signal? Trends in Neurosciences. 25: 27-31.

Attwell, D., Buchan, A. M., Charpak, S., Lauritzen, M., Macvicar, B. A. and Newman, E. A., 2010. Glial and neuronal control of brain blood flow. Nature. 468: 232-243.

Brown, T. and Murphy, E., 2010. Through a scanner darkly: Functional neuroimaging as evidence of a criminal defendant’s past mental states. Stanford Law Review. 62: 1119-1208.

Farah, M. J., 2009. A picture is worth a thousand dollars. Journal of Cognitive Neuroscience. 21: 623-624.

Greely, H. T. and Illes, J., 2007. Neuroscience-based lie detection: The urgent need for regulation. American Journal of Law & Medicine. 33: 377-431.

Koehler, R. C., Roman, R. J. and Harder, D. R., 2009. Astrocytes and the regulation of cerebral blood flow. Trends in Neurosciences. 32: 160-169.

Levy, N., 2011. Neuroethics: A new way of doing ethics. AJOB Neuroscience.

McCabe, D. P. and Castel, A. D., 2008. Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition. 107: 343-352.

Shibasaki, H., 2008. Human brain mapping: Hemodynamic response and electrophysiology. Clinical Neurophysiology. 119(4): 731-743.

Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E. and Gray, J. R., 2008. The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience. 20: 470-477.





The guilt was written all over her face

7 02 2011

A poet and cognitive neuroscience enthusiast sent me this link:

“The Most Curious Thing,” by Errol Morris (May 19, 2008), explaining “how a photograph aided and abetted a terrible miscarriage of justice…about Sabrina Harman, one of the notorious ‘seven bad apples’ convicted of abuse in the notorious Abu Ghraib scandal”.

The focus of the article is her smile, the photographed record of her smile, which could have been the same smile you would’ve captured in an awkward family photo or at the Eiffel Tower, but eerily misplaced in the context of a prison full of rotting and damaged bodies, covered with ice in an attempt to diminish the foul odors of decay. It was shocking for audiences to view the photos, but more shocking that a person in that situation would give a ‘thumbs up’ and grin.

But if we look more closely at the photos, is the smile really the same smile captured in an awkward family photo or in front of the Eiffel Tower? The writer looked for a smile expert and found Paul Ekman, Professor Emeritus of Psychology at the University of California, San Francisco, an expert on facial expressions. He is so well versed in reading facial expressions that he created F.A.C.E. Training, a tool marketed to businessmen and government officials alike to help develop skills in “reading” another’s emotions, and even lie detection, just by critical review of someone’s face. Looking at his profile picture, I had to wonder how much time he had put into capturing the right facial expression for his audience: to encourage trust and a willingness to buy his product (you can see pictures of his own smiles, used as examples for his 2003 book, in the article).

Why would being able to read her facial expression be important to discuss? If he detects remorse or a lack of enjoyment, would that absolve her of her actions, or maybe it would help people have a restored faith in humanity: “See, at least she felt bad about it.” Ekman’s assessment of one photograph is something like this: “…she is showing a social smile or a smile for the camera. The signs of an actual enjoyment smile are just not there. There’s no sign of any negative emotion. She’s doing what people always do when they pose for a camera. They put on a big, broad smile, but they’re not actually genuinely enjoying themselves. We would see movement in the eye cover fold…”

Hmm…sounds pretty scientific. But let’s get back to exploring F.A.C.E. FACE stands for Facial expression Awareness Compassion Expression. What is it, what is the evidence behind it, and how is it being used? This is what the F.A.C.E. Training page says about the Advanced METT (Micro Expression Training Tool):

“This training is meant for those whose work requires them to evaluate truthfulness and detect deception – such as police and security personnel, and those in sales, education, and medical professions. If you should achieve the minimum target score of 80% or higher on the post test, a certificate of completion will be emailed to you.”

How is it currently being used? I found this article:

“Airport security: Intent to deceive?” (Nature, May 26, 2010)

According to this article, up to 1,000 TSA (Transportation Security Administration) screeners have been trained with Paul Ekman’s techniques. In addition, “There are about 3,000 of these officers working at some 161 airports across the United States, all part of a four-year-old program called Screening Passengers by Observation Technique (SPOT), which is designed to identify people who could pose a threat to airline passengers”, primarily terrorists. SPOT is a technique based on Ekman’s work, and interest in it is growing within the U.S. Department of Homeland Security and various intelligence agencies.

Does it work? Ekman’s work has sparked a Fox TV series, Lie to Me, so there are probably a lot of people (general audiences and specialists alike) out there who are fascinated by (and maybe even like to fantasize about) the idea of it working.

But is micro expression identification actually a reliable tool for detecting lies or intent to do bad things? According to this article, it seems that his colleagues remain skeptical. Primarily, his scientific colleagues have problems replicating and corroborating his results. Other psychologists find that “many peer-reviewed studies seem to show that people are not better than chance when it comes to picking up signs of deception.” A 2007 report composed by a panel of credibility-assessment experts says that, “Simply put, people (including professional lie-catchers with extensive experience of assessing veracity) would achieve similar hit rates if they flipped a coin.”

In addition, his studies lack proper controls, and his more recent work lacks peer review. But Ekman claims the lack of peer review is intentional: “Ekman maintains that this publishing strategy is deliberate–that he no longer publishes all of the details of his work in the peer-reviewed literature because, he says, those papers are closely followed by scientists in countries such as Syria, Iran and China, which the United States views as a potential threat.” But peer review is an important checks-and-balances system for scientists, as experts, to evaluate one another’s work. “As a scientist, I want to see peer-reviewed journal articles, so I can look at procedures and data and know what the training procedures involve, and what the results do show,” says Bella DePaulo, a social psychologist at the University of California, Santa Barbara.

Ekman claims that examining micro expressions can give you up to 70% accuracy in determining deception, and up to 100% if you also use the remaining body cues. According to TSA statistics from 2006-2009, “behavior-detection officers referred more than 232,000 people for secondary screening…But 1,710 were arrested,” which the TSA cites as evidence for the program’s effectiveness. And among this less-than-1% of referred people who were arrested, the arrests were for criminal activity unrelated to terrorism. Although, I did find this article link from the TSA’s blog stating that at least one individual carrying explosives was caught by behavior-detection officers.
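To put those figures in perspective, here is a quick back-of-the-envelope calculation. The first two numbers are the ones quoted above; everything in the second half is a made-up, order-of-magnitude illustration of the base-rate problem that any screen for a very rare event runs into, not a description of SPOT itself.

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
referred = 232_000          # passengers sent to secondary screening, 2006-2009
arrested = 1_710            # arrests (for offenses unrelated to terrorism)

arrest_rate = arrested / referred
print(f"Arrests per referral: {arrest_rate:.2%}")   # roughly 0.7%, i.e. fewer than 1 in 100

# Hypothetical illustration of the base-rate problem with any rare-event screen:
# even a very accurate detector flags mostly innocent people when the thing it
# is looking for is extremely rare. All numbers below are invented.
travelers = 700_000_000     # assumed annual passengers, order of magnitude only
true_threats = 10           # assumed number of actual attackers among them
sensitivity = 0.99          # assumed chance a real threat is flagged
false_positive_rate = 0.01  # assumed chance an innocent traveler is flagged

flagged_threats = true_threats * sensitivity
flagged_innocent = (travelers - true_threats) * false_positive_rate
print(f"Share of flags that are real threats: "
      f"{flagged_threats / (flagged_threats + flagged_innocent):.6%}")
```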

Actually, the TSA’s blog expresses a lot of enthusiasm about Ekman and his techniques for identifying guilty travelers. They claim that after he passed along his skills to US Customs, the “hit rate” for finding drugs during passenger searches rose from 4.2 percent in 1998 to 22.5 percent. Some examples of suspicious criteria are illustrated here.

Now enter FAST (Future Attribute Screening Technology), a project funded at $10 million per year. With FAST, travelers would walk through a portal while a myriad of sensors remotely monitor their vital signs for ‘malintent’. The Department of Homeland Security (DHS) has a host of Human Factors Behavioral Sciences Projects; FAST is just one of them. According to the DHS website blurb on FAST, “FAST is grounded in research on human behavior and psychophysiology.”

Supporters of FAST technology feel that the future of security screening should be more about the people and less about their things. What does the research say about being able to reliably detect ‘mal-intent’ based on a combination of skilled observation and vital signs? In an interview for CNN.com, Carnegie Mellon’s Stephen Fienberg, a university professor in the statistics and machine learning departments, said, “I haven’t seen any research that shows that those measures from the autonomic nervous system … measuring blood pressure, measuring breathing, measuring heat on the face, are at all related to intent.” Indeed, it sounds a bit like what would otherwise be part of any routine doctor’s visit, except done by a machine that has been programmed to calculate the statistical significance of my responses.

Polygraph lie detection, which measures four parameters (heart rate, blood pressure, respiration, and sweating), has been around since the 1920s. In a 2001 National Academy of Sciences panel discussion about lie detection, Dr. Richard Davidson, a neuroscientist and Director of the Laboratory for Affective Neuroscience at the University of Wisconsin-Madison, claimed that lie detection is more likely to detect “fear of detection” than actual lies. As a neuroimager, he says we need to go to the brain: “And if there’s one emotion that we have really learned a lot about in the last decade, it’s fear.” But lie detection in the brain will have to be another blog post on its own.

FAST is supposed to be more sophisticated in its goals than polygraph detection. It’s not just trying to detect guilt; FAST aims to detect intent. (And, for part of the testing, we might even see the Wii Balance Board thrown into the mix.)

This has sparked a lot of fearful titles like “Homeland Security Detects Terrorist Threats by Reading Your Mind.” But this isn’t mind reading. If it were, all you would need is $37 anyway.

Mind reading implies the ability to map your thoughts with one-to-one accuracy. This is simply taking into account your vital signs and your body and facial language to make an educated guess (using statistics) about whether you *plan* to do something bad. And this educated guess is only maybe supported by some experts.
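For a sense of what an “educated guess (using statistics)” amounts to, here is a deliberately toy sketch: a classifier trained on physiological features spits out a probability, nothing more. Every feature, number, and label below is invented; this is not how FAST is actually built.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Purely hypothetical: a statistical classifier over vital-sign features.
# The output is a probability of "malintent", not a readout of thoughts.
rng = np.random.default_rng(1)

# Columns: heart rate, respiration rate, skin temperature (invented training data).
X_train = rng.normal(loc=[75, 16, 34.0], scale=[10, 3, 0.5], size=(200, 3))
y_train = rng.integers(0, 2, size=200)      # invented "malintent" labels (pure noise,
                                            # so the model can learn nothing real)

model = LogisticRegression().fit(X_train, y_train)

traveler = np.array([[92, 22, 34.8]])       # one nervous (or just hurried) traveler
print(model.predict_proba(traveler)[0, 1])  # a probability, i.e. an educated guess
```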

The TSA says that the data will be recorded and then dumped for unsuspicious passengers, so what’s the harm? Proponents could say, “Well, what harm does it do for me to be a little uncomfortable to potentially save lives?” Skeptics might say, “Well, this is an invasion of my privacy!” But actually, maybe it’s something else. What if it’s not really relevant to saving lives? Think about what happens when you accuse someone of anything. What kinds of resonating psychological effects will we see? Will cultural differences be accounted for?

While many exciting technologies are being developed to give new insights into human behavior, most data point toward correlations and mathematical constructs (as in neuroimaging and EEG). The findings from these technologies do not predict or detect a 1:1 relationship with the human mind.

While the brain is a powerful decision-making machine, one thing might be important to consider: bodies have brains; people have minds. Stay tuned for another post on morality and brain imaging, as I have recently co-authored a commentary in the American Journal of Bioethics Neuroscience on just this topic.





Rudeness As a Neurotoxin

7 01 2011

Thanks, Mike, for sending me this article by Douglas Fields entitled “Rudeness Is a Neurotoxin.” Fields states that the aim of his article is “not to be preachy,” but rather to offer an honest assessment from a neurodevelopmental scientist’s perspective.

Here he claims that studies have accumulated to support the notion that American society and its all-too-familiar rudeness have become not only socially toxic but can also give you brain damage. He cites inspiration from his recent trip to Japan and from 1950s American television.

It’s an interesting idea, but his conclusions aren’t really supported by his argument. If it were true, then you’d think that children in Japan, for example, weren’t exposed to the same social stressors, which isn’t true. I wholeheartedly believe that Japanese culture is an intensely sophisticated one and one to be admired for many reasons, but not because of his superficial impression of its “politeness.” In fact, this naïve description really doesn’t do Japanese culture justice. Although I myself had only a brief exposure working in Okinawa as an adult for one year (I also lived there until I was 10 years old), it’s obvious that what contributes to a major difference in their society is the importance placed on relationships, not a lack of exposure to mean parents, peers, or media (have you seen manga?). Americans chronically seem to feel their lack of “community”; this is in part what the organic movement, among many others, has counted on for its marketing. And as for the “Leave It to Beaver” days, let’s not forget other rampant social stressors of that time (racism, sexism, homophobia), which arguably were worse then than now.

Second, the data he cites are exciting and confirmatory of beliefs that people already want to hold (e.g., “People are jerks, don’t you agree?”). The most compelling for people seems to be the study citing that verbal abuse damages your corpus callosum. An important note is that the technology used here (Diffusion Tensor Imaging) may report differences in the corpus callosum of abused individuals, but this does not definitively tell us how those differences translate into altered function in the brain. Furthermore, it’s a mathematical estimation of anatomy, not actual anatomy. Even if these data could suggest that neurodegeneration (suggested by increased mean and radial diffusivity and decreased fractional anisotropy) was occurring in abused individuals, there could be larger factors at hand altering their development, including socioeconomic status and lifestyle choices. In addition, there’s a bit of a bias: is this really representative of average populations? There is variation in how well one remembers peer stress in middle school (I would also argue this varies greatly depending on your age; it was much different for me personally as an 18-year-old versus a 25-year-old, the age range used in the verbal-questionnaire portion of the study, not to mention the smaller group of “young adults” used in the imaging studies).
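For the curious, here is what “a mathematical estimation of anatomy” means in practice. The measures named above (fractional anisotropy, mean and radial diffusivity) are derived from the eigenvalues of a diffusion tensor fitted to each voxel; the sketch below computes them from one set of invented eigenvalues, just to show that these are summary numbers from a model of water diffusion, not direct pictures of axons or myelin.

```python
import numpy as np

def dti_scalars(eigenvalues):
    """Standard DTI summary measures from the three eigenvalues of a voxel's
    fitted diffusion tensor. These are mathematical summaries of how water
    diffuses, not direct measurements of anatomy."""
    l1, l2, l3 = sorted(eigenvalues, reverse=True)
    md = (l1 + l2 + l3) / 3.0                      # mean diffusivity
    rd = (l2 + l3) / 2.0                           # radial diffusivity
    ad = l1                                        # axial diffusivity
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))  # fractional anisotropy
    return fa, md, rd, ad

# Invented eigenvalues (mm^2/s) for a white-matter-like voxel, for illustration only:
print(dti_scalars([1.7e-3, 0.4e-3, 0.3e-3]))
```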

I’m curious how this article and the data cited will continue to impact audiences. Will middle-schoolers be warned against giving their classmates brain damage? Will they then be less likely, or more compelled, to be mean? Could you be criminally charged for assault in the future? And more importantly, do we really need a biological account of how people’s cruelty can hurt one another?

I do feel that these data could be potentially valuable in what they might reveal about predisposition to developing later neurodegenerative diseases or illnesses, perhaps in combination with exposome profiles…but Fields chose a sexier topic.

The crux of his hypothesis is that

“it is stressful for individuals (people or animals – this is not uniquely human) to interact with strangers, and also with other members of a working group and family members. As the size of the group increases, so do the number of interactions between individuals, thus raising the level of stress if not controlled by formal, stereotyped behavior, which in human society is called ‘manners.’”

While it is stressful to interact with strangers (this is also true for animals), maybe the bigger issue is how many members of our own families have become strangers in our society. Long gone are the days when you could walk to your extended family’s house. Where is the focus on evaluating the strength of social bonds with family members in these contexts? In addition, about five of my friends have either just had new babies or will have them in a few months. As they are all working women, and there is no paid leave in this country for my colleagues, they will be sending their children to day care when the children are 2.5 months old. Basically, when they still fit in a catcher’s mitt.

Frankly, Fields’s article was a bit preachy, hidden under a thin veil of science. Let’s not forget that “formal” manners can be their own tyrannical, violent force and can even embody classism and damaging, dated cultural norms. In Japan, these formalities often illustrate formal, inflexible roles in one’s relationships to others (not just a simple pleasant behavior for foreigners to superficially enjoy). You can be a good person without the formalities. And you can focus less on rudeness and more on the familiar by cultivating your relationships and perhaps making fewer people strangers.