For the most part, fMRI studies attempt to localize cognitive processes to specific regions in the brain. Popular media often introduce these studies with headlines that tout the discovery of "the brain region" for memory, language, empathy, moral reasoning, loving wiener schnitzel, and so on.
These headlines can be terribly misleading, as they're often misinterpreted to suggest that a specific brain region is dedicated to a single function, when, in fact, any given function maps onto a network of regions (forming a circuit), while any given region is part of multiple circuits subserving many functions. A similar faux pas can be found in descriptions of the functions associated with genes, e.g., "the gene for (fill in the blank)."
A few years back, the NY Times ran an infamous piece featuring the work of a neuromarketing company. In a horrible experiment fit for The Onion, participants lay in the scanner while looking at pictures of then presidential candidates. Subjects showed increased amygdala activation to pictures of Mitt Romney, which researchers interpreted as a sign of anxiety.
But after watching Romney speak on video, the amygdala activity died down, which researchers said showed that voters' anxiety had decreased.
Meanwhile subjects' anterior cingulates lit up to pictures of Hillary Clinton.
Here's how researchers interpreted this neural activity:
Emotions about Hillary Clinton are mixed. Voters who rated Mrs. Clinton unfavorably on their questionnaire appeared not entirely comfortable with their assessment. When viewing images of her, these voters exhibited significant activity in the anterior cingulate cortex, an emotional center of the brain that is aroused when a person feels compelled to act in two different ways but must choose one. It looked as if they were battling unacknowledged impulses to like Mrs. Clinton.
The Times article about the "research" was quickly and roundly criticized by prominent neuroscientists, 17 of whom responded with a signed letter to the editor, which the Times ran a couple of days later:
To the Editor:
“This Is Your Brain on Politics” (Op-Ed, Nov. 11) used the results of a brain imaging study to draw conclusions about the current state of the American electorate. The article claimed that it is possible to directly read the minds of potential voters by looking at their brain activity while they viewed presidential candidates.
For example, activity in the amygdala in response to viewing one candidate was argued to reflect “anxiety” about the candidate, whereas activity in other areas was argued to indicate “feeling connected.” While such reasoning appears compelling on its face, it is scientifically unfounded.
As cognitive neuroscientists who use the same brain imaging technology, we know that it is not possible to definitively determine whether a person is anxious or feeling connected simply by looking at activity in a particular brain region. This is so because brain regions are typically engaged by many mental states, and thus a one-to-one mapping between a brain region and a mental state is not possible.

As cognitive neuroscientists, we are very excited about the potential use of brain imaging techniques to better understand the psychology of political decisions. But we are distressed by the publication of research in the press that has not undergone peer review, and that uses flawed reasoning to draw unfounded conclusions about topics as important as the presidential election.
Adam Aron, Ph.D., University of California, San Diego
David Badre, Ph.D., Brown University
Matthew Brett, M.D., University of Cambridge
John Cacioppo, Ph.D., University of Chicago
Chris Chambers, Ph.D., University College London
Roshan Cools, Ph.D., Radboud University, Netherlands
Steve Engel, Ph.D., University of Minnesota
Mark D’Esposito, M.D., University of California, Berkeley
Chris Frith, Ph.D., University College London
Eddie Harmon-Jones, Ph.D., Texas A&M University
John Jonides, Ph.D., University of Michigan
Brian Knutson, Ph.D., Stanford University
Liz Phelps, Ph.D., New York University
Russell Poldrack, Ph.D., University of California, Los Angeles
Tor Wager, Ph.D., Columbia University
Anthony Wagner, Ph.D., Stanford University
Piotr Winkielman, Ph.D., University of California, San Diego
Undoubtedly, fewer people saw that letter than saw the original article, which was much more prominently displayed.
(By the above study's logic, looking at a picture of Donald Trump should elicit activity in the anterior insula, a region often associated with disgust responses.)
Bad neuroscience (and bad neuroscience writing) appears regularly in the popular media. From misleading articles in the mainstream press to the poorly conducted studies that often form the basis for one misconceived business plan or another, fMRI research runs the danger of being victimized by its own success. Part of the problem stems from the general public's inability to properly interpret neuroscientific data in the context of human psychology studies. Not that they should be blamed: neuropsychology is a complicated discipline, and there isn't any reason to expect someone lacking an understanding of the basic principles of neural science, or psychology, or both, to parse such data correctly. The problem, however, is that the average reader isn't neutral toward such data, but tends to be more satisfied by psychological explanations that include neuroscientific data, regardless of whether those data add any value to the explanation. The mere mention of something vaguely neuroscientific seems to increase the average reader's satisfaction with a psychological finding, legitimizing it. Even worse, it's the bad studies that benefit most from this so-called "neurophilia," the love of brain pictures.
This issue was very cleverly explored a couple of years back in a study from a research team led by Jeremy Gray at Yale University.
Participants read a series of summaries of psychological findings from one of four categories: a good or a bad explanation, each with or without a meaningless reference to neuroscience. After reading each explanation, participants rated how satisfying they found it. The experiment was run on three different groups of participants: random undergraduates; undergrads who had taken an intermediate-level cognitive neuroscience course; and a slightly older group who had either already earned PhDs in neuroscience, or were in, or about to enter, graduate neuroscience programs.
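To make the structure of the design concrete, here is a minimal sketch of the 2x2 crossing of conditions described above (the labels are my own shorthand, not the study's stimulus wording):

```python
from itertools import product

# Two factors, each with two levels, crossed into four conditions:
# explanation quality (good/bad) x neuroscience reference (absent/present).
qualities = ["good", "bad"]
neuro_info = ["without neuroscience", "with irrelevant neuroscience"]

# Build the four condition labels of the 2x2 factorial design.
conditions = [f"{q} explanation, {n}" for q, n in product(qualities, neuro_info)]

for condition in conditions:
    print(condition)
```

Each participant group rated explanations drawn from all four cells, which is what lets the comparisons below (e.g., "bad with neuroscience" vs. "bad without") be made within the same design.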
The first group of regular undergrads was able to distinguish between good and bad explanations without neuroscience, but was much more satisfied by bad explanations that included a reference to neural data (the y-axis on the following figures stands for self-rated satisfaction):
Nor were the cognitive neuroscience students any more discerning. If anything, they were a bit worse than the regular undergrads, in that they found good explanations with meaningless neuroscience more satisfying than good ones without:
But the group with graduate neuroscience training showed the benefits of that training. Not only did the addition of meaningless neuroscience fail to make bad explanations more satisfying to them, they actually found good explanations with meaningless neuroscience to be less satisfying.
As to why non-experts might have been fooled, the authors suggest that they could be falling prey to the "seductive details effect," whereby "related but logically irrelevant details presented as part of an argument, tend to make it more difficult for subjects to encode and later recall the main argument of a text." In other words, it might not be the neuroscience per se that leads to the increased satisfaction, but some more general property of the neuroscience information. As to what that property might be, it could be that people are biased toward arguments that possess a reductionist structure. That is, in science, "higher-level" arguments that refer to macroscopic phenomena often appeal to "lower-level" explanations that invoke microscopic mechanisms. Neuroscientific explanations fit the bill here, by seeming to provide hard, low-level data in support of higher-level behavioral phenomena. The mere mention of lower-level data - albeit meaningless data - might have made it seem as if the "bad" higher-level explanation was connected to some "larger explanatory system" and therefore more valid or meaningful. It could simply be that bad explanations - those involving neuroscience or otherwise - are buffered by the allure of complex, multilevel explanatory structures. Or it could be that people are easily seduced by fancy jargon like "ventral medial prefrontal connectivity" and "NMDA-type glutamate receptor regions."
Whatever the proximal mechanisms of the "neurophilia" effect, the public infatuation with all things neural probably won't fade any time soon. As such, it's imperative that scientists, journalists, and others who communicate with the public about brain science be on the lookout for bad neuroscience, and for good neuroscience incorrectly presented, and be quick to issue correctives when it appears.
Go here for the Yale study.