Archive for: July, 2011

Make mine a triple...

Jul 29 2011 Published by under Uncategorized

In response to a couple of questions about the implications of long-term caffeine intake, I thought I'd throw out a couple of findings.

I recently wrote about a study that localized the receptors underlying the arousing effects of caffeine (A2a receptors, in cells located in the shell of the nucleus accumbens). It's only natural then to wonder what effect chronic caffeine intake might have on these receptors (and elsewhere in the brain).

That study didn't look at chronic effects. But back in 1996, Glass et al. found that chronic caffeine consumption increased the global expression of adenosine receptors in the brain, suggesting that this increase compensates for caffeine's antagonistic effects. Withdrawal from caffeine is, at least in part, likely related to a hypersensitivity to adenosine due to this increased number of adenosine receptors. The headaches that accompany caffeine withdrawal are thought to be related to the fact that adenosine is a known vasodilator: the combination of increased receptor density and the removal of caffeine from the system leads to a significant drop in blood pressure.

A couple of other interesting notes regarding the long-term effects of caffeine:

The Good News
Some case-control studies have shown a lower incidence of Parkinson's disease in coffee drinkers vs. non-coffee drinkers, although this finding has not always been replicated. The correlation, when it has been found, was strongest in heavy consumers. (This is certainly a finding I would love to be true!) More evidence in support of these findings comes from mouse studies showing that physiological doses of caffeine were able to reduce one of the major toxic factors associated with Parkinson's (MPTP-induced dopaminergic toxicity). It's been suggested that caffeine may offer neuroprotective effects in the brain via action at A2a receptors, which are the same receptors responsible for the arousing effect of caffeine (and which are also co-localized with dopamine D2 receptors). Additional support for this idea comes from studies in which mice that had their A2a receptors knocked out showed reduced MPTP-induced injury compared to wild types. How this all might be happening on a mechanistic level, however, is not well understood.

The less good (but not totally bad) news:
Unfortunately, it seems that acute doses of caffeine often cause a rise in systolic and diastolic blood pressure, an increase in catecholamine release and vasodilation (widening of blood vessels). However, some studies have shown this effect occurs primarily in non-regular consumers of caffeine. Many studies have shown either slight increases or no difference in blood pressure for regular users of caffeine. In fact, several large-scale studies have found that heavy, regular use is protective against heart disease. (yes!) The findings are quite contradictory.

So what to make of all this? Is heavy coffee drinking bad or good for you?
There's no simple answer to that question. But the paradoxical findings suggest that different individuals have varying levels of risk. And it's likely that genetics play a significant role.

If one thinks of coffee as a drug, then the notion that the benefits of heavy coffee consumption might outweigh the risks seems very counterintuitive. That is, due to the brain's propensity to maintain homeostasis, drug taking, either legal or illegal, usually involves some significant cost-benefit analysis, a trade-off between the good (the high, buzz, relief from psychic or physical pain) and the bad (side effects, withdrawal, expense, long-term effects on health, etc...). Yet, the evidence on long-term caffeine intake seems to put it in a class of its own.

I was in Italy once and chatted with an extremely energetic and sprightly 93-year-old man.
I asked him what was the secret to his longevity and good health. He said, "five espressos a day." Anecdotes aren't very informative in an empirical sense, of course, but, nonetheless, the old codger may have been on to something.*

*However, in addition to the five espressos, he'd also smoked a pack a day of American Winstons and was convinced that one of the other secrets to his good health was that he never switched brands.

One response so far

Why caffeine jacks you up

Jul 25 2011 Published by under Uncategorized

Have you ever wondered why, and exactly where in the brain, coffee (or any caffeinated product, for that matter) is able to exert its arousing effects? Well, wonder no longer, because an international team of researchers from Japan, China and the US has located the primary neurons upon which caffeine works its magic (Lazarus 2011).

It was previously known that caffeine wakes you up by blocking activity at adenosine A2a receptors (adenosine is an inhibitory neuromodulator involved in regulating the sleep-wake cycle). However, it was not known exactly where in the brain the receptors that exert this effect are located.

How did they do it?
The researchers utilized a method whereby the gene that codes for A2a receptors (A2aRs) is marked such that it can be deleted, but only in specific regions of the brain. Using a mouse model, the team applied these gene-deletion strategies and found that when they knocked out A2aRs in the shell of the nucleus accumbens, the animals no longer experienced the arousing effects of caffeine.

How does this work?
Adenosine activates A2a receptors in the nucleus accumbens shell, and this activation inhibits the arousal system. That is, the more adenosine activation there is, the sleepier an organism becomes. Caffeine, which binds to these same receptors and blocks adenosine from exerting its activity there, essentially disinhibits the arousal system, promoting wakefulness. (Amazingly, based on similarities between the brains of mice and men, the area of the human brain in which caffeine acts to counteract fatigue is approximately the size of a pea.)
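For the mechanistically inclined, here's a minimal toy sketch of that disinhibition logic in Python (my own illustration, not a model from the Lazarus paper): adenosine occupancy of A2aRs acts as a brake on the arousal system, and caffeine, as a competitive antagonist at the same site, lowers that occupancy and so releases the brake. All function names, parameter values and units below are made up for illustration.

# Toy model of A2aR-mediated disinhibition (illustrative only; constants are arbitrary).

def a2ar_occupancy(adenosine, caffeine, k_ado=1.0, k_caf=1.0):
    """Fraction of A2a receptors bound by adenosine when caffeine competes for the same site."""
    return (adenosine / k_ado) / (1 + adenosine / k_ado + caffeine / k_caf)

def arousal(adenosine, caffeine, baseline=100.0, max_brake=80.0):
    """Arousal output after A2aR activation in the accumbens shell applies its 'brake'."""
    return baseline - max_brake * a2ar_occupancy(adenosine, caffeine)

for dose in (0.0, 1.0, 5.0):
    print(f"caffeine dose {dose}: arousal {arousal(adenosine=2.0, caffeine=dose):.1f}")

More caffeine means less adenosine bound, a weaker brake and more arousal; cranking up max_brake is a crude stand-in for having more A2aRs in the shell, which previews the individual-differences point below.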

What does this mean in practical terms? (or, in other words, why should we find this so cool?)

Well, for one, it gives us a more specific mechanistic explanation for the arousing effects of caffeine. It says that in order for caffeine to work, it not only has to be effective as an A2aR antagonist, but the excitatory A2aRs on nucleus accumbens shell neurons must also be tonically activated by endogenous adenosine. This is especially important when considering individual differences in the subjective effects of caffeine.

What if A2aRs are more densely packed in the shell of your nucleus accumbens than in mine? Might you be more sensitive to the effects of caffeine than me? That certainly seems likely. And the reason that one person might over- or under-express these receptors vs. another seems to be related to variation in the gene that produces those receptors (the gene knocked out in the mouse study described above). In fact, we already have evidence that this is the case. Past studies have shown that genetic variations in genes coding for A2aRs were associated with greater sensitivity to caffeine and sleep impairment (Retey 2007), and greater anxiety after caffeine (Childs 2008). This study refines the existing model and should inspire, and lead to more accurate interpretation of, future genetics studies.*

*Other significant genes that underlie individual differences in the subjective effects of caffeine include CYP1A2 (cytochrome P450 1A2), which is associated with caffeine metabolism, and those coding for dopamine D2 receptors.

References
Lazarus M, Shen HY, Cherasse Y, Qu WM, Huang ZL, Bass CE, Winsky-Sommerer R, Semba K, Fredholm BB, Boison D, Hayaishi O, Urade Y, & Chen JF (2011). Arousal effect of caffeine depends on adenosine A2A receptors in the shell of the nucleus accumbens. The Journal of Neuroscience, 31(27), 10067-10075. PMID: 21734299

Childs E, Hohoff C, Deckert J, Xu K, Badner J, de Wit H (2008). Association between ADORA2A and DRD2 polymorphisms and caffeine-induced anxiety. Neuropsychopharmacology, 33, 2791-2800.

Retey JV, Adam M, Khatami R, Luhmann UF, Jung HH, Berger W, Landolt HP (2007). A genetic variation in the adenosine A2A receptor gene (ADORA2A) contributes to individual sensitivity to caffeine effects on sleep. Clin Pharmacol Ther, 81, 692-698.


6 responses so far

Neurophilia

Jul 23 2011 Published by under Uncategorized

For the most part, fMRI studies attempt to localize cognitive processes to specific regions in the brain. Popular media often introduce these studies with headlines that tout the discovery of "the brain region" for memory, language, empathy, moral reasoning, loving wiener schnitzel and so on.

These headlines can be terribly misleading, as they're often misinterpreted to suggest that a specific brain region is dedicated to a single function when, in fact, any given function maps onto a network of regions (forming a circuit), while any given region is part of multiple circuits subserving many functions. Similar faux pas can be found in descriptions of the functions associated with genes, e.g. "the gene for (fill in the blank)."

A few years back, the NY Times ran an infamous piece featuring the work of a neuromarketing company. In a horrible experiment fit for The Onion, participants lay in the scanner while looking at pictures of then presidential candidates. Subjects showed increased amygdala activation to pictures of Mitt Romney, which researchers interpreted as a sign of anxiety.

But after watching Romney speak on video, the amygdala activity died down, which researchers said showed that voters' anxiety had decreased.

Meanwhile, subjects' anterior cingulates lit up to pictures of Hillary Clinton.

Here's how researchers interpreted this neural activity:

Emotions about Hillary Clinton are mixed. Voters who rated Mrs. Clinton unfavorably on their questionnaire appeared not entirely comfortable with their assessment. When viewing images of her, these voters exhibited significant activity in the anterior cingulate cortex, an emotional center of the brain that is aroused when a person feels compelled to act in two different ways but must choose one. It looked as if they were battling unacknowledged impulses to like Mrs. Clinton.

The Times article about the "research" was quickly and roundly criticized by prominent neuroscientists, 17 of whom responded with a signed letter to the editor, which the Times ran a couple of days later:

To the Editor:

“This Is Your Brain on Politics” (Op-Ed, Nov. 11) used the results of a brain imaging study to draw conclusions about the current state of the American electorate. The article claimed that it is possible to directly read the minds of potential voters by looking at their brain activity while they viewed presidential candidates.

For example, activity in the amygdala in response to viewing one candidate was argued to reflect “anxiety” about the candidate, whereas activity in other areas was argued to indicate “feeling connected.” While such reasoning appears compelling on its face, it is scientifically unfounded.

As cognitive neuroscientists who use the same brain imaging technology, we know that it is not possible to definitively determine whether a person is anxious or feeling connected simply by looking at activity in a particular brain region. This is so because brain regions are typically engaged by many mental states, and thus a one-to-one mapping between a brain region and a mental state is not possible.

As cognitive neuroscientists, we are very excited about the potential use of brain imaging techniques to better understand the psychology of political decisions. But we are distressed by the publication of research in the press that has not undergone peer review, and that uses flawed reasoning to draw unfounded conclusions about topics as important as the presidential election.

Adam Aron, Ph.D., University of California, San Diego
David Badre, Ph.D., Brown University
Matthew Brett, M.D., University of Cambridge
John Cacioppo, Ph.D., University of Chicago
Chris Chambers, Ph.D., University College London
Roshan Cools, Ph.D., Radboud University, Netherlands
Steve Engel, Ph.D., University of Minnesota
Mark D’Esposito, M.D., University of California, Berkeley
Chris Frith, Ph.D., University College London
Eddie Harmon-Jones, Ph.D., Texas A&M University
John Jonides, Ph.D., University of Michigan
Brian Knutson, Ph.D., Stanford University
Liz Phelps, Ph.D., New York University
Russell Poldrack, Ph.D., University of California, Los Angeles
Tor Wager, Ph.D., Columbia University
Anthony Wagner, Ph.D., Stanford University
Piotr Winkielman, Ph.D., University of California, San Diego

Undoubtedly, fewer people saw that letter than saw the original article, which was much more prominently displayed.

(By the above study's logic, looking at a picture of Donald Trump should elicit activity in the anterior insula, a region often associated with disgust responses.)

Bad neuroscience (and bad neuroscience writing) seems to appear regularly in the public media space. From misleading articles in the mainstream press to the poorly conducted studies that often form the basis for one or another misconceived business plan, fMRI research runs the risk of becoming a victim of its own success. Part of the problem stems from the general public's inability to properly interpret neuroscientific data in the context of human psychology studies. Not that they should be blamed. Neuropsychology is a somewhat complicated discipline, and there isn't any reason to believe that someone lacking an understanding of the basic principles of neural science, or psychology, or both, should be able to parse such data correctly. The problem, however, is that the average member of the public isn't neutral toward such data, but tends to be more satisfied by psychological explanations that include neuroscientific data, regardless of whether that data adds value to the explanation or not. The mere mention of something vaguely neuroscientific seems to increase the average reader's satisfaction with a psychological finding, legitimizing it. Even worse, it's the bad studies that benefit the most from this so-called "neurophilia", the love of brain pictures.

This issue was very cleverly explored a couple of years back in a study from a research team led by Jeremy Gray at Yale University.

Participants read a series of summaries of psychological findings from one of four categories: either a good or a bad explanation, with or without a meaningless reference to neuroscience. After reading each explanation, participants rated how satisfying they found it. The experiment was run on three different groups of participants: random undergraduates, undergrads who had taken an intermediate-level cognitive neuroscience course, and a slightly older group who had either already earned PhDs in neuroscience or were in, or about to enter, graduate neuroscience programs.

The first group of regular undergrads was able to distinguish between good and bad explanations without neuroscience, but was much more satisfied by bad explanations that included reference to neural data (the y-axis on the following figures stands for self-rated satisfaction):

Nor were the cognitive neuroscience students any more discerning. If anything, they were a bit worse than the non-cognitive neuroscience undergrads, in that they found good explanations with meaningless neuroscience more satisfying than good ones without:

But the PhD neural science people showed the benefits of their training. Not only did they not find bad explanations made more satisfying by the addition of meaningless neuroscience, they found good explanations with meaningless neuroscience to be less satisfying.

As to why non-experts might have been fooled? The authors suggest that non-experts could be falling prey to the "seductive details effect," whereby "related but logically irrelevant details presented as part of an argument, tend to make it more difficult for subjects to encode and later recall the main argument of a text." In other words, it might not be the neuroscience per se that leads to the increased satisfaction, but some more general property of the neuroscience information. As to what that property might be, it could be that people are biased towards arguments that possess a reductionist structure. That is, in science, "higher level" arguments that refer to macroscopic phenomena often rest on "lower level" explanations that invoke microscopic mechanisms. Neuroscientific explanations fit the bill in this case, by seeming to provide hard, low-level data in support of higher-level behavioral phenomena. The mere mention of lower-level data - albeit meaningless data - might have made it seem as if the "bad" higher-level explanation was connected to some "larger explanatory system" and therefore more valid or meaningful. It could be simply that bad explanations - those involving neuroscience or otherwise - are bolstered by the allure of complex, multilevel explanatory structures. Or it could be that people are easily seduced by fancy jargon like "ventral medial prefrontal connectivity" and "NMDA-type glutamate receptor regions."

Whatever the proximal mechanisms of the "neurophilia" effect, the public infatuation with all things neural probably won't be fading any time soon. As such, it's imperative that scientists, journalists and others who communicate with the public about brain science be on the lookout for bad (and incorrectly presented good) neuroscience, and be quick to issue correctives when it appears.

Go here for the Yale study.

One response so far

Thank you, Al Franken!

Jul 21 2011 Published by under Uncategorized

Speaking of the misuse of research studies for ideological purposes, check out Senator Al Franken (D-Minn) calling out apparent homophobe Tom Minnery, who represents a group of conservative Christian extremists calling themselves "Focus on the Family", during a Senate hearing on the repeal of the Defense of Marriage Act (DOMA).

In a nutshell, Minnery had misinterpreted a 2010 study by the Department of Health and Human Services in support of his conclusion that ...

"... children living with their own married, biological, and/or adoptive mothers and fathers were generally happier and healthier, had better access to health care; less likely to suffer mild or severe emotional problems; did better in school; were protected from physical, emotional, sexual abuse; and almost never live in poverty compared with children in any other family form."

Franken pointed out that he had read the study, and this is not what it said.

"I checked the study out," said Franken, "and I would like to enter into the record, if I may, it actually doesn’t say what you said it says. It says that nuclear families — not opposite sex married families — are associated with those positive outcomes. Isn’t it true, Mr. Minnery, that a married same sex couple that has had or adopted kids would fall under the definition of a nuclear family in the study that you cite?"

Minnery responded that he thought nuclear family, as defined in the study, meant one headed by a husband and wife.

"It doesn’t," Franklin responded. "The study defines a nuclear family as one or more children living with two parents who are married to one another and are each biological or adoptive parents to all the children in the family. And I frankly don’t really know how we can trust the rest of your testimony if you are reading studies these ways."

There was much laughter in the chamber during the exchange.

The authors of the study confirmed (via Politico) that Franken's interpretation of the study was correct and said the study does not provide evidence that straight couples’ children necessarily fare better than same-sex couples’ kids, as Minnery had so hopefully claimed.

Of course, this won't change the minds of the religious nutters who go around spouting this nonsense, but it still felt good to watch nonetheless. Minnery and his colleagues should know better than to expect to find empirical evidence to support their claims. Anyhow, why should they need evidence? They've got their faith!

2 responses so far

The birth of a bad meme

Jul 20 2011 Published by under Uncategorized

My wife, who has been blogging for about a year, told me that this was a phase a lot of newbie bloggers go through: the somewhat pathological obsession I was quickly developing with checking my blog stats. I'd been blogging for a few weeks, promoting through the usual channels, when I started getting a wee bit of traffic. It was quite rewarding to know that people out there were somehow making it to the site, even if many weren't actually reading. Never having experienced the sensation of distributing my writing publicly, let alone to a potentially unlimited and worldwide audience, I'd developed quite an addiction to checking my numbers.

One morning, shortly after putting up a post, my stats went through the roof. I didn't think the post was anything special, but it was generating tons of traffic. A quick check of the stats revealed why. Mark Morford, a columnist for the SF Gate (the online home of the San Francisco Chronicle), had written about my summary of a study in his weekly online column and linked to my site. Over the course of the next several hours, this link brought in about 1500 visitors (approximately 1450 more than I was getting per day at the time).

I was pretty happy for the readership; that is, until I went to Morford's column and read his summary of my summary. So, what had I written about? I'd dashed off a summary of a Danish meta-study that attempted to establish mortality rates for drugs such as heroin, cocaine, amphetamine, marijuana and ecstasy. Here's what I wrote about the ecstasy findings (if you would like to read the full article, go here):

"6. Ecstasy (MDMA) users did not show increased mortality rates. (However, it’s possible that a low number of deaths from MDMA contribute to low statistical power)."

And later, in the closing paragraph:

"Conclusions that can be drawn from this report? ... Ecstasy is unlikely to kill you on its own, but that’s not to say it won’t do some long-term damage if abused..."

I think it was a reasonably accurate, if extremely simplified, version of the findings.
Here's how Morford wrote it up:

"In loosely related news -- assuming you like to view the world that way and really, why wouldn't you -- the other universally acclaimed wonderdrug known as ecstasy (MDMA) has been proven once again to have no real side effects, doesn't make you want to kill yourself and doesn't increase mortality rates overall, especially if used in relative moderation and not like some panicky teen raver or Burning Man first-timer who has no clue what he's doing and shouldn't be left alone in Drunken Barbie Camp with all those glow sticks, fake fur and baggies of little magic pills.

Sadly, a new Danish study shows that pot users suffer a mortality rate about five times higher than the norm (your mileage, and possible explanations, may vary). Cocaine and meth, six times. Heroin and related injectables are, as you might expect, off the charts. But ecstasy, well, it just keeps being proven to be not so bad in the slightest, and actually might, just might be one of the most remarkably safe, effective, enlightening drugs ever invented. Good thing it's still illegal."

HUH? I'd even provided a link to a page outlining the negative consequences of taking the drug! What an interesting interpretation. I suppose I shouldn't have been surprised that things were taken out of context and the message twisted. That's par for the course on the internet. What really got me, though, was the possibility that some percentage of Morford's readership was provided, inadvertently by me, with scientific justification to go out that night and do ecstasy, or at least encouraged to believe it's not a harmful drug when, in fact, it is quite harmful both in the short and long term.

Worse yet, when I initially went to the SF Gate to read Morford's piece, for some reason I couldn't locate it. I read through a couple of his old columns (some of which cited scientific findings accurately and fairly, albeit in a very casual style) and dropped him an email thanking him for the link and praising his writing, without actually reading his summary of my post. Admittedly, I was a little drunk on the heavy influx of blog traffic and assumed it was probably just a simple sentence or two. It wasn't until later that I found it and realized that my science journalism cherry had been popped and then some.

Obviously, this is but a tiny drop in the ocean of (mis)information transmitted daily over the interwebs. Yet, it's a reminder to be extra careful about how one presents scientific findings and to keep an eye out for how others might be (ab)using these writings to support their own agendas.

I'd be curious to hear others' stories of f'ed up reinterpretations of their writings...

3 responses so far

That wasn't so bad, was it?

Jul 18 2011 Published by under Uncategorized

Goodbye, dear readers!

Two weeks ago today, I started off my stint as Scientopia guest blogger with an apology.  Today, I end with one.  In my introduction, I indicated that you probably wouldn't learn anything sciency.

It has been brought to my attention through emails, tweets and blog comments that I failed to deliver.  

Turns out, some of you did learn something sciency.

We talked about the Supreme Court's Cocaine Problem, checked the math on K-Y® Brand YOURS + MINE®, did Chemistry For The Zombie Apocalypse, worked on a Grant Writing Soundtrack and investigated the chemical behind DuPont's tree-icide trouble.  It was a smorgasbord of chemistry, with physics, biotechnology, and jurisprudence for extra flavor.

It wasn't so bad, was it?  Sure there was sciency stuff, but good times were had.  Sex, drugs, rock 'n' roll, zombies and a whodunit!  Chemistry is in even the coolest stuff - and I won't apologize for that.

 

10 responses so far

Large Marge sent me

Jul 18 2011 Published by under Uncategorized

I'd like to extend my sincere thanks to Scientopia for the opportunity to guest blog here for a couple of weeks. I figured that by way of introduction I'd tell you who I am, what I do, etc...

Who is this fella?
As far as the biographical details go, before starting my PhD in Neuroscience at Weill Cornell Medical College, I earned my BA in psychology at NYU, where I worked in a cognitive neuroscience lab and studied fear conditioning and memory reconsolidation processes in humans. I then moved to Harvard for a couple of years to perform research at, and manage, a social cognitive neuroscience lab. At present, I’m studying the developmental trajectory and neurobiology of emotion regulation and cognitive control. Prior to entering the sciences, I spent many years as a professional musician. Them's my credentials, in a nutshell.

Why do I write a blog?  
I’m quite frightened by the level of scientific illiteracy in the US and feel that the field as a whole needs to do a better job communicating important findings not only to the general public, but also to the politicians, policy makers and paper pushers upon whom we depend to continue funding the research. I blog in a humble attempt to enter the fray.

It seems that bloggers play an important role in the science communication ecosystem. This mostly unpaid army of intelligent, passionate writer/scientists fills some of the rather wide gap between the professional science journal, too thick with technical jargon to be understood by the average joe, and much (but, of course, not all) of what passes for popular science journalism; that is, science reduced to its most salacious and headline-worthy form, often incorrectly presented and overly generalized.

Perhaps because I spent so many years of my adult life in a non-science field, I’m sensitive to how science research is thought of and understood by the general public. I’m particularly inclined to think of scientific ideas in terms of their evolutionary adaptability; that is, the means by which one given idea, of the untold number that are born daily, is somehow able to survive, permeate and spread throughout the culture in such a way that it becomes an accepted piece of wisdom, while another dies on the vine. The writer, in his/her guise as “idea merchant,” plays an important role in this process and is capable of exerting either a positive or negative influence on the cultural “meme” pool. One who puts number-of-eyeballs-captured over truth value could be said to be working on the dark side (a tabloid-style journalist, for example), as would one who surreptitiously attempts to further a personal agenda while claiming objectivity. A more insidious form of bad science journalism, but equally or perhaps even more dangerous, is that in which false dichotomies are created in order to allow “both sides” of a story to be presented (e.g. validating Jenny McCarthy and her ultra-dangerous anti-vaccine movement by presenting their views as one side of a legitimate two-sided debate).

It’s important not to let up in the effort to combat “bad memes.” If we don’t, the research that scientists produce, no matter how stellar, won’t have the impact it deserves.  (Applause, many "You go get 'em!"s and "Show 'em how it's done!"s from the crowd. Our young, idealistic and doe-eyed blogger bounds off the stage, laptop under arm, ready to go set the world on fire - just as soon as he can find himself some free wifi....)

So, how's all this coming along?
I would say, all in all, not so well! Not to be too harsh on my own efforts, but at least on one occasion, just a couple of months out of the gate, I kicked off a mildly bad meme through my blog, an experience which was certainly disheartening (albeit rather illuminating). I plan to write about this episode in my next post, a couple of days from now. For now, I'll tell you that it involved a columnist for a national newspaper quoting my summary of a paper in his column, but out of context and in a way that quite misconstrued the original finding. It brought in lots of traffic to my site, and hopefully some of those people actually read the post. But judging by the feedback and comments on Twitter and elsewhere, I probably did more harm than good. And I'm not clear on how this might be avoided in the future (aside from not writing a blog at all, which would also give me more time for napping).

What do I normally write about?
I mostly write summaries of interesting studies from the fields of cognitive and social psychology/neuroscience. No particular theme reigns supreme.

Anyhow, come back Wednesday if you're interested in hearing about how my humble little blog was chewed up and spat out by the mass media machine. Meanwhile, head on over to neuropoly to get a taste of the kind of thing you'll be finding here for the next couple of weeks.

One response so far

DuPont Charged With Tree-icide

Jul 18 2011 Published by under Uncategorized

 

Figure 1: Tree-icide

The news broke on July 14th.  DuPont's Imprelis®* herbicide is the prime suspect in a series of tree deaths, the main victims being eastern white pines and Norway spruces.  Owners of these conifers are pointing the finger at DuPont, but is Imprelis® to blame?

Washtenaw Acquisition LLC, Polo Fields East LLC and Polo Fields Golf & Country Club LLC certainly think so.  On the same day the tree-icide story hit the news, the golf and polo group filed a federal class action lawsuit.

The chemical at the center of this legal drama is 6-amino-5-chloro-2-cyclopropyl-4-pyrimidinecarboxylic acid, known by its chemistry nickname 'aminocyclopyrachlor'.  Perhaps thinking aminocyclopyrachlor was too long and not sexy enough, DuPont dubbed it Aptexor™.  

DuPont's Imprelis® herbicide contains aminocyclopyrachlor and its potassium salt (Figure 2).   Like other herbicides, aminocyclopyrachlor severely inhibits or kills undesirable plants while leaving the desired ones (mostly) alone.  The undesirables targeted by aminocyclopyrachlor are various broadleaf weeds and bushes, such as those listed here (page 1).

 

Figure 2: The herbicide in question

 

Aminocyclopyrachlor is part of a group of compounds that mimic the behavior of plant hormones called auxins.  These hormones play various roles in plant growth and development.  Auxin mimics like aminocyclopyrachlor can play these same roles.  As with many chemicals, it's all about the dosage when it comes to helping or hurting.  At high concentrations, auxins and their mimics behave as herbicides.

Imagine if the invitation to your intimate dinner party went viral.  A flash mob turns up at your front door, rushes in and soon your home is a rave with hundreds of club kids.  These ravers soon spill into the street and it's an impromptu block party.  You, your house and your yard were overrun - and now look run-over.

Now imagine a flash mob of auxin mimics showing up in a patch of weeds, getting inside the weeds through the leaves and roots.  The many roles auxins play in plant growth and development give an auxin-mimic flash mob several modes of attack.  The weeds' cells rapidly proliferate, clogging up the plant's vascular transport system; cell membranes and their resident proteins don't work as they should; RNA production is interfered with... An overdose of aminocyclopyrachlor doesn't just mess up one thing, it messes up several things - too many things for a weed to handle in short order.  What's left is a stunted, malformed weed which will die in days to weeks.

If a high concentration of aminocyclopyrachlor doesn't sound like the best thing for turfgrass, don't worry.  Turfgrasses can handle these concentrations of auxin mimics.  But can trees like eastern white pines and Norway spruces?  DuPont says conifers were included in their trials with no negative effects observed.  Given the reports of tree deaths, both DuPont and the EPA are said to be investigating aminocyclopyrachlor's possible role.  Little on-point research has been published in the scientific literature.

Dr. Pete Landschoot, Professor of Turfgrass Science at Penn State, has been following this case and posted 'Some Observations on Imprelis Injury to Trees'.

Imprelis injury seems to be related to the soaking spring rains of April and May (I am not aware of any tree injury following fall applications), and to some particular characteristics of the herbicide.  Even though applicators I have spoken with did not apply the herbicide within the “drip line” of affected trees (as directed on the Imprelis label), injury still occurred.  Research has shown that root spread of trees far exceed the branch spread; thus, root uptake from leached herbicide residue can occur outside of the drip line (Freucht, 1988).  Although leaching of herbicides is more of a risk in sandy soils with low organic matter content, Imprelis-related damage occurred in several locations on heavy, clay soils.

It's possible that, like the targeted weeds, the conifer victims absorbed aminocyclopyrachlor through their roots from the soil.  Perhaps those heavy rains spread the auxin mimic farther than intended and within range of tree roots.  But "possible" and "perhaps" are a long way from proof of Imprelis®'s culpability.  To quote Dr. Landschoot, "Right now, there is much speculation about the details surrounding tree damage due to Imprelis applications, but the exact reasons still need to be sorted out."

Until the exact reasons for the tree deaths are known, those with the most vulnerable trees are cautioned against using Imprelis®  - by DuPont.

Figure 3: DuPont's note of caution

______________________

Notes
*DuPont's Imprelis® is a herbicide sold to lawn care professionals.
Image Attribution
Figure 1: Office 1010 clip-art
Figure 3: Image was captured from http://bit.ly/nRqwWx


2 responses so far

Grant Writing Soundtrack

Jul 13 2011 Published by under Uncategorized

I've never been a millionaire but I just know I'd be darling at it.

~ Dorothy Parker

Science research can be done on the cheap or with some serious bank.  Whatever the amount, the money is likely from a grant, and that grant will have to be written by you (unless you have grant-writing elves).  You'll probably burn the midnight oil, work your fingers to the bone and a few other idioms.  When I think of grant writing, Big Worm from the movie Friday comes to mind....

Grant writing can be an emotional roller-coaster.  Excitement over a research idea can turn to frustration over writer's block.   Jubilation at getting 9 pages done can turn to despair when, after a re-read, you realize 6 of those 9 pages are total crap.  Grant writing can be high stakes and involve late nights, little family time, no social life and falling behind on True Blood.

If ever a process needed a soundtrack, it's grant writing.  Songs that motivate and help you keep it together.   Here are a few songs that are going in my grant writing soundtrack.  Perhaps you've got a go-to grant writing playlist?  Share a few tunes by commenting.

Rb's grant writing soundtrack

I need a dollar ~ Aloe Blacc

She Works Hard For The Money ~ Donna Summer

Gin & Juice ("...with my mind one my money and my money on my mind...") ~ Snoop Dogg

Killing In The Name ~ Rage Against The Machine

Drop It Like It's Hot ~ Snoop Dogg & Pharrell Williams

Money ~ Kingsmen

I Got 5 On It ~ Da Luniz

Money (from Cabaret) ~ Liza Minnelli & Joel Grey

3 responses so far

K-Y® YOURS+MINE® Chemistry

Jul 08 2011 Published by under Uncategorized

After seeing all those commercials for K-Y® Brand YOURS + MINE® couples lubricants on TV, my curiosity got the better of me and I just had to find out.  What chemicals are behind all that sexual chemistry hype?   My research started at the K-Y® website...

Figure 1

In the product description for K-Y® Brand YOURS + MINE® (Figure 1), I am going to let slide the two most chemistry-sounding phrases.  "It takes two lubricants to make chemistry..." gets a pass, as I'm sure they mean the colloquialism "sexual chemistry".  Also getting a pass is "catalyst for exploration", as it seems clear K-Y® doesn't mean their lubricants are catalysts by a chemistry definition.

Having dealt with the chemistry-sounding stuff, it's time to get down to the chemistry.  Specifically, the chemicals behind what YOURS will do ("An invigorating warming sensation for him") and what MINE will do ("A thrilling tingling sensation for her").  What chemicals are responsible for the "warming" and "tingling"?  What makes this product different from a "plain" lubricant such as K-Y® Brand Liquid personal lubricant?

Figure 2: Ingredient lists for YOURS FOR HIM® and MINE FOR HER

K-Y® Brand YOURS + MINE® has chemicals for "warming" and "tingling" (cooling), or rather K-Y® MINE® does.  One can think of K-Y® MINE as a sexy IcyHot.  Cooling agent menthyl lactate is for "tingling" and methyl salicylate is for "warming".   Yes, the "warming" promised by K-Y® YOURS is in K-Y® MINE.  Interestingly, K-Y® MINE looks a lot like K-Y® Brand TINGLING® Jelly (Figure 3).

Figure 3

 

...and K-Y® YOURS looks a lot like a plain K-Y® lubricant (K-Y® Brand Liquid; Figure 4) and K-Y® Brand WARMING® Liquid (Figure 5).

 

Figure 4

 

Figure 5

In regard to the K-Y® YOURS "warming", perhaps a plain lubricant is warming enough?  Or maybe, by adding in glycerin and honey, both more viscous than propylene glycol, K-Y® YOURS is a stickier (and worse) lubricant, thus providing a feeling of warmth?  [I look forward to your comments! 😉 Uh-oh..]

There's a little sexy science for you!  Plus, perhaps knowing more about the chemistry of K-Y® Brand YOURS + MINE® will help potential users calculate if it's worth the purchase.


______________________

Figure image attributions:
Figures 1 & 2: Screen capture of the K-Y® Brand YOURS + MINE® website http://bit.ly/qYd6Wi
Figure 3: Screen capture of K-Y® Brand TINGLING® Jelly website http://bit.ly/qjhYrB
Figure 4: Screen capture from K-Y® Brand Liquid website http://bit.ly/oLARbl
Figure 5: Screen capture from K-Y® Brand WARMING® Liquid website http://bit.ly/pNpuof

14 responses so far
